The use of our criteria to evaluate approaches to Social Security reform highlights the trade-offs that exist between efforts to achieve solvency for the OASDI trust funds and efforts to maintain adequate retirement income for current and future beneficiaries. The models illustrate some of the options and trade-offs that will need to be considered as the nation debates how to reform Social Security. Our analysis of sustainable solvency under Model 2 showed the following: As estimated by the actuaries, Model 2, with either universal (Model 2-100%) or zero (Model 2-0%) participation in voluntary individual accounts, is solvent over the 75-year projection period, and the ratio of annual income to benefit payments at the end of the simulation period is increasing. However, under Model 2-100%, over three decades of general revenue transfers are needed to achieve trust fund solvency; Model 2-0% achieves solvency with no general revenue transfers. Model 2-100% would ultimately reduce the budgetary pressures of Social Security on the unified budget relative to the baseline extended, but this reduction would not begin until the middle of this century. Relative to both GAO’s benefit reduction benchmark and tax increase benchmark, unified surpluses would be lower and unified deficits higher throughout the simulation period under Model 2-100%. Model 2-0% would reduce budgetary pressures due to Social Security beginning around 2015 relative to the baseline extended; the fiscal outlook under Model 2-0% is very similar to the fiscal outlook under GAO’s benefit reduction benchmark. Under Model 2-100%, the government’s cash requirement (as a share of GDP) to fund the individual accounts and the reduced defined benefit would be about 20 percent higher initially than under both the baseline extended and tax increase benchmarks. This differential gradually narrows until the 2030s, after which less cash would be required under Model 2-100%. By 2075, Model 2-100% would require about 40 percent less cash than the baseline extended and tax increase benchmarks. Viewed from the perspective of the economy, total payments (Social Security defined benefits plus benefits from individual accounts) as a share of GDP would gradually fall under Model 2-100% relative to the baseline extended and tax increase benchmarks. In 2075, the share of the economy absorbed by payments to retirees from the Social Security system as a whole under Model 2-100% would be roughly 20 percent lower than under the baseline extended or tax increase benchmark and roughly the same as under the benefit reduction benchmark. With regard to national saving, Model 2 increases net national saving on a first-order basis, primarily because of the proposed benefit reductions. The individual account provision does not increase national saving on a first-order basis: the redirection of payroll taxes to finance the individual accounts reduces government saving by the same amount that the individual accounts increase private saving. Beyond these first-order effects, the actual net effect of a proposal on national saving is difficult to estimate due to uncertainties in predicting changes in future spending and revenue policies of the government as well as changes in the saving behavior of private households and individuals. For example, the lower surpluses and higher deficits that result from redirecting payroll taxes to individual accounts could lead to changes in federal fiscal policy that would increase national saving.
However, households may also reduce their other saving in response to the creation of individual accounts. Model 3 results are presented in Appendix I. Because the benefit reductions in Model 3 are smaller than in Model 2, long-term unified deficits are larger under Model 3. Model 3 requires an additional contribution equal to 1 percent of taxable payroll for those choosing individual accounts. Assuming universal account participation in both models, Model 3 would result in a larger share of the economy being absorbed by total benefit payments to retirees, about the same share as would be the case under the baseline extended and tax increase benchmarks. The Commission’s proposals also illustrate the difficulty reform proposals face generally in balancing adequacy (level and certainty of benefits) and equity (rates of return on individual contributions) considerations. Each of the models protects benefits for current and near-term retirees, and the shift to advance funding could improve intergenerational equity. However, under each of the models, some future retirees could face potentially significant benefit reductions in comparison to either the tax increase or the benefit reduction benchmark because of primary insurance amount (PIA) formula factors that are reduced by real wage growth, uncertainty in the rates of return earned on accounts, changes in benefit status over time, and annuity pricing. Our analysis of Model 2 shows the following: Median monthly benefits (the Social Security defined benefit plus the benefit from the individual account) for those choosing individual accounts are always higher, despite a benefit offset, than for those who do not choose the account, and this gap grows over time. In addition, median monthly benefits under universal participation in the accounts are also higher than the median benefits received under the benefit reduction benchmark. However, median monthly benefits received by those without accounts fall below those provided by the benefit reduction benchmark over time. For the lowest quintile of beneficiaries, median monthly benefits with universal participation in the accounts tend to be higher than the benefits received under the benefit reduction benchmark, likely due to the enhanced benefit for full-time “minimum wage” workers. This pattern becomes more pronounced over time. Regardless of whether an account is chosen, under Model 2 many people could receive monthly benefits that are higher than under the benefit reduction benchmark; however, a minority could fare worse. Some people could also receive a benefit greater than under the tax increase benchmark, although a majority could fare worse. Monthly benefits for those choosing individual accounts will be sensitive to the actual rates of return earned by those accounts. The cohort results for Model 3 are generally similar to those for Model 2. However, median monthly benefits for those choosing individual accounts are higher than the benefit level under the tax increase benchmark for the 1970 and 1985 cohorts. This result likely reflects Model 3’s mandatory extra 1 percent contribution into the individual accounts for those who choose to participate. Further results on Model 3 can be found in Appendix I. Each of the models would establish a governing board to administer the individual accounts, including selecting the available funds and providing financial information to individuals.
While the Commission had the benefit of prior thinking on these issues, many implementation issues remain, particularly in ensuring the transparency of the new system and educating the public to avoid any gaps in expectations. For example, an education program would be necessary to explain the changes in the benefit structure, model features like the benefit offset, and how accounts would be split in the event of divorce. Education and investor information are also important as the system expands and increases the range of investment selection. Questions about the harmonization of such features with state laws regarding divorce and annuities also remain. The use of our criteria to evaluate approaches to Social Security reform highlights the trade-offs that exist between efforts to achieve sustainable solvency and efforts to maintain adequate retirement income for current and future beneficiaries. These trade-offs can be described as differences in the extent and nature of the risks for individuals and the nation as a whole. For example, under certain individual account approaches, including those developed by the Commission, some financial risk is shifted to individuals and households to the extent that individual account income is expected to provide a major source of income in retirement. At the same time, the defined benefit under the current Social Security system is also uncertain. The primary risk is the funding gap between currently scheduled and funded benefits, which, although it will not emerge for a number of years, is significant and will grow over time. Other risks stem from uncertainty in, for example, future levels of productivity growth, real wage growth, and demographics. Congress has revised Social Security many times in the past, and future Congresses could decide to revise benefits in ways that leave those affected little time to adjust. As Congress deliberates approaches to Social Security, the national debate also needs to include discussion of the various types of risk implicit in each approach and in the timing of reform. Public education and information will be key to implementing any changes in Social Security, especially if individuals must make choices that affect their future benefits. Since the Commission options were published, there has been limited public debate explaining them. As Congress and the President consider actions to be taken, it will be important as well to consider how such actions can be clearly communicated to and understood by the American people. Finally, any Social Security reform proposal must also be looked at in the context of the nation’s overall long-range fiscal imbalances. As our long-term budget simulations show, difficult choices will be required of policymakers to reconcile a large and growing gap between projected revenues and spending resulting primarily from known demographic trends and rising health care costs. We provided SSA an opportunity to comment on the draft report. The agency provided us with written comments, which appear in Appendix II. SSA acknowledged the comprehensiveness of our analysis of the Commission’s proposals. The agency also concurred with our reform criterion of achieving sustainable solvency and with our report’s overall observations and conclusions. SSA’s comments and suggestions can be grouped into a few general categories. GAO Benchmarks and Their Relationship to Sustainable Solvency – The agency commends our use of multiple benchmarks with which to compare alternative proposals.
However, SSA notes that our definition of sustainable solvency differs from that used by SSA in assessing trust fund financial status. In addition, although SSA notes that our benchmarks are solvent over the 75-year projection period commonly used by SSA’s Office of the Chief Actuary in its preparation of the annual trustees report, they do not achieve sustainable solvency. SSA expresses a concern that unless carefully annotated, the comparisons made in our report could be misunderstood. Finally, SSA also suggests the use of several alternative benchmarks, one of which would provide additional revenue to pay for currently scheduled benefits. We agree with SSA that sustainable solvency is an important objective; indeed, it is one of our key criteria with which we suggest that policymakers evaluate alternative reforms. SSA correctly points out that GAO’s benchmarks do not achieve sustainable solvency beyond the 75-year period. We believe our standard is a more encompassing one. SSA’s definition relies on analyzing trends in annual balances and trust fund ratios near the end of the simulation period. Consequently, the definition needs to be supplemented, for example, in cases where proposals use general revenue transfers or other unspecified sources of revenue that automatically rise and fall to maintain annual balance or a certain trust fund ratio. In addition, SSA’s definition does not directly consider the resources needed to fund individual accounts. Our standard includes other measures in an effort to gain a more complete perspective on a proposal’s likely effects on the program, the federal budget, and the economy. We share SSA’s emphasis on the importance of careful and complete annotation. The report explicitly addresses the issue of sustainable solvency and states that the comparison benchmarks used, while solvent over the 75-year projection period, are not solvent beyond that period. Given SSA’s concerns, we have revised our report to clarify our analyses, where appropriate, to minimize the potential for misinterpretation or misunderstanding. Regarding SSA’s suggestion about the use of alternative benchmarks, we already use a benchmark that provides additional revenue to pay currently scheduled benefits. Our other benchmark maintains current tax rates, phasing in benefit reductions over a 30-year period. In our view, the set of benchmarks used provides a fair and objective measuring stick with which to compare alternative proposals, particularly the many proposals that introduce reform elements over a number of years. Both benchmarks are explicitly fully funded, and in their design we worked closely with Social Security’s Office of the Chief Actuary to calibrate them to ensure their solvency over the 75-year period. Additional Analysis – Many of SSA’s comments suggest additional or more detailed analyses of some of our findings. For example, SSA suggested that we analyze the characteristics of those beneficiaries who fare better or worse under each of the Commission’s models, conduct further distributional analyses of groups of beneficiaries who claim benefits at ages other than 65, and analyze rates of participation other than the polar cases of 0 percent and 100 percent individual account participation. The agency also suggested that substantial analysis of implementation and administration issues is necessary, given the complexity of administering the Commission’s models.
Although we tried to address most of the critical issues given our limited time and resources, we agree with SSA that many of the suggested analyses could provide additional useful insights into the distributional implications of adopting the Commission’s proposals. Distributional Analysis – SSA expressed a number of concerns about the SSASIM-GEMINI simulation model that we use to conduct our distributional analysis of benefits. One concern addresses future cohorts’ benefit levels reported in our draft. In this regard, we were already reviewing with outside experts the level of benefits received by the 1985 cohort and the highest quintile of that cohort, and our subsequent analysis suggests findings that are more consistent with SSA’s observations; we have made these changes to the report. Some of SSA’s concerns also appear to result from confusion over the structure, design, and limitations of the SSASIM-GEMINI model. We have included additional documentation in the report that we believe will help both lay and more technical audiences understand the model more easily. We note that while ancillary benefits can be calculated through the model and are included in our analysis, we use the model to focus on the individual beneficiary, not the household, as the unit of analysis. The model also includes marriage and divorce rates and their implications for earnings. These marriage and divorce rates and other key parameters are expressed by probability rules that drive the lifetime dynamics of the synthetic population. These rules are not heuristically generated but are validated through comparison with data from the Social Security Administration and the Current Population Survey, among others. We also note that in certain instances, for example in specifying the calculation of annuities as well as the rates of return used in the modeling, we consulted with SSA’s Office of the Chief Actuary in an effort to reflect its projection methodology to the extent that it was feasible. Measures of Debt – SSA notes that unfunded obligations may be considered a kind of implicit debt and should be considered in the analysis. In analyzing reform plans, however, the key fiscal and economic point is the ability of the government and society to afford the commitments when they come due. Our analysis addresses this key point by looking at the levels of and trends over 75 years in deficits, cash needs, and the share of GDP consumed by the program. Technical Comments – SSA also provided technical and other clarifying comments about the minimum benefit provision, our characterization of stochastic simulation, and other minor aspects of the report, which we incorporated as appropriate. We are sending copies of this report to the Honorable Larry E. Craig, Ranking Minority Member, Senate Special Committee on Aging; Senator Max S. Baucus, Chairman, Senate Committee on Finance; Senator Charles E. Grassley, Ranking Minority Member, Senate Committee on Finance; the Honorable William M. Thomas, Chairman, and the Honorable Charles B. Rangel, Ranking Minority Member, House Committee on Ways and Means; the Honorable E. Clay Shaw, Chairman, and the Honorable Bob Matsui, Ranking Minority Member, Subcommittee on Social Security, House Committee on Ways and Means; and the Honorable Jo Ann B. Barnhart, Commissioner, Social Security Administration. We will also make copies available to others on request.
In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215, or Susan Irving, Director, Strategic Issues, at (202) 512-9142. Evaluation of reform models put forward by the President’s Commission to Strengthen Social Security. The evaluation uses the three basic criteria GAO has developed that provide policymakers with a framework for assessing reform plans: – Financing Sustainable Solvency – Balancing Adequacy and Equity in the Benefits Structure – Implementing and Administering Reforms. Comprehensive proposals can be evaluated against these three basic criteria. Reform proposals should be evaluated as packages that strike a balance among individual reform elements and important interactive effects. Some proposals will fare better or worse than others under each criterion, and the overall evaluation of each proposal depends on the weight individual policymakers place on each criterion. The Commission operated under the following principles: No changes to benefits for retirees or near-retirees. Dedication of the entire Social Security surplus to Social Security. No increase in Social Security payroll taxes. No government investment of Social Security funds in the stock market. Preservation of disability and survivor components. Inclusion of individually controlled voluntary individual retirement accounts. The Commission developed three reform models, each of which represents a different approach to including a voluntary individual account component in Social Security. Model 1 does not change the defined benefit and does not restore solvency; Models 2 and 3 restore solvency through a combination of changes in the initial benefit calculation, general revenue transfers, and/or benefit offsets for those who choose to participate in the individual account option. Account contribution amounts, the benefit offset in exchange for account participation, and the calculation of an individual’s initial benefit differ among the three models. All models share a common framework for administering accounts: Voluntary individual accounts in exchange for a reduction in the Social Security defined portion of the benefit; this benefit offset is linked to account contributions, not the actual account balance. A Governing Board to administer the individual accounts, structured along the lines of the Thrift Savings Plan (TSP) or the Federal Reserve Board. – Initially, balances must be invested through a TSP-like system with several fund choices; later, if a balance is above a threshold, the account may be invested in a range of qualified private sector funds. – Annual option to change allocation. – Account balances acquired during the marriage divided equally at divorce. – Balances acquired before marriage not shared at time of divorce. – Alternatively, an arrangement agreed to by both spouses, consistent with the principle that total benefit income will be sufficient to keep both spouses above the poverty line in retirement. – GAO’s long-term economic model was used to help assess the potential fiscal and economic impacts of Social Security reform proposals. – Estimates of reform models’ costs and income are those made by the Office of the Chief Actuary, Social Security Administration. – The GEMINI model, a dynamic microsimulation model, was used to analyze the 1955, 1970, and 1985 birth cohorts to enable comparison of results over time as reform models are fully implemented.
– Qualitative analysis based on GAO’s issued and ongoing body of work on Social Security reform was also used. GEMINI is useful for analyzing the lifetime implications of Social Security policies for a large sample of people born in the same year and can simulate different reform features, including individual accounts with an offset, for their effects on the level and distribution of benefits. GEMINI was used to analyze Models 2 and 3 with both 0 and 100 percent participation in the individual account features of the proposals. GAO’s analysis uses three benchmarks: Benefit reduction maintains current payroll tax rates and assumes a gradual reduction in Social Security benefits beginning with those reaching age 62 in 2005 and continuing for the next 30 years. Tax increase assumes that the combined employer-employee payroll tax rate is increased by 0.34 percentage points for DI and 1.56 percentage points for OASI (1.90 percentage points combined) beginning in 2002 in order to pay scheduled benefits. Baseline extended is a fiscal policy path that assumes payment in full of all scheduled Social Security benefits throughout the 75-year period and no other changes in current policies. In this analysis, it uses the 2001 Trustees intermediate economic assumptions, consistent with SSA scoring of reform models. The benefit reduction and tax increase benchmarks were developed by GAO with technical input from SSA’s Office of the Chief Actuary. Both use the 2001 Trustees intermediate economic assumptions. Both restore 75-year actuarial balance to Social Security but are not solvent beyond this period. All three benchmarks are used in analyzing sustainable solvency. From the perspective of sustainable solvency, the baseline extended differs from the tax increase benchmark: the tax increase benchmark assumes payroll tax financing of all scheduled benefits, whereas the baseline extended benchmark assumes all scheduled benefits will be paid but does not specify any new financing. There is no difference between the tax increase and baseline extended benchmarks in analyzing benefit levels, since only the financing of benefits differs, not the actual benefit levels. Therefore, only the benefit reduction and tax increase benchmarks are used in analyzing benefit adequacy. Benchmarks are to be viewed as illustrative, polar cases or bounds for changes within the current system. Other benchmarks could be devised with different tax and/or benefit adjustments that would perform the same function. The briefing focuses on Model 2, with results for Model 3 presented in Appendix I. The Commission’s models include a voluntary individual account option. In our analysis we looked at the two bounds of possible outcomes: universal participation (100%) in the account option, or no participation (0%). In analyzing benefit levels, we refer to these outcomes as “with” and “without” accounts. Model 2 provides voluntary individual accounts, with a contribution of 4 percent of taxable payroll up to $1,000 annually redirected from payroll taxes, in exchange for a benefit reduction. For all those age 62 in 2009 or younger, defined benefits are reduced from currently scheduled levels by indexing the initial benefit to prices rather than wages. Enhanced spousal survival benefit beginning in 2009: – Increase in widow(er) benefit up to 75 percent of the combined spousal benefit (up to average benefit levels). A new enhanced benefit for full-time “minimum wage” workers who work more than 20 years: – Accelerated growth in initial benefits from 2009 to 2018. – By 2018, a minimum wage worker with 30 years of program coverage would receive an inflation-indexed benefit equal to 120 percent of the poverty level.
To the extent that there is participation in individual accounts, financing through general revenue transfers will be needed. If participation were universal, transfers would be needed for about three decades. The maximum contribution amount is indexed annually to wage growth. The benefit reduction is based on the amount of account contributions compounded at a real interest rate of 2 percent. The minimum wage is the current Fair Labor Standards Act minimum of $5.15 an hour but is assumed to grow with the Social Security average wage index. While achieving solvency for the OASDI Trust Funds is important, the concept of sustainable solvency goes beyond 75-year actuarial balance. Sustainable solvency includes reforming the Social Security program in such a way as to avoid the need to periodically revisit actuarial imbalances of the OASDI Trust Funds. For example, a rising or level trust fund ratio at the end of the 75-year period can be an indicator of future program solvency. However, trust fund ratios can give an incomplete picture. They do not provide information about the effect of program spending on the federal budget or the economy. In addition, trust fund ratios can be affected by the timing of tax and benefit adjustments and the use of general revenues. Sustainable solvency also includes assessing the effects of proposed program changes on the federal budget and on the economy. Reforms that reduce pressures on the federal budget and reduce the share of the economy that will be absorbed in the future by the Social Security system can lead to sustainable solvency. To what extent does the proposal: Reduce future budgetary pressures and the share of the economy absorbed by the Social Security system? Increase national saving? Restore 75-year actuarial balance and create a stable system? Raise payroll taxes, draw on general revenues, and/or use Social Security trust fund surpluses to finance changes? Create contingent liabilities? Include “safety valves” to control future program growth? Figure 1: Compared to the baseline extended, Model 2 with universal participation (Model 2-100%) in the individual accounts (IA) option results in larger unified deficits as a share of GDP through the 2040s; thereafter, unified deficits are progressively lower. Model 2 with no participation (Model 2-0%) in IAs results in higher unified surpluses and lower unified deficits beginning around 2015 through the end of the simulation period compared to the baseline extended. Greater participation in IAs results in lower surpluses/higher deficits over the simulation period. Throughout the simulation period, unified surpluses are considerably lower and unified deficits are considerably higher under Model 2-100% than under the tax increase benchmark and, to a lesser extent, the benefit reduction benchmark. Through the 2060s, the fiscal outlook under Model 2-0% is quite similar to the outlook under the benefit reduction benchmark, but compared to the tax increase benchmark, unified surpluses are lower and unified deficits are higher over the same time frame. Figure 2: Compared to the baseline extended, net debt held by the public as a share of GDP is higher under Model 2-100% until about 2060; thereafter, debt held by the public is lower. Under Model 2-0%, net debt held by the public is lower beginning about 2020 through the end of the simulation period. Greater participation in the IAs results in higher net debt held by the public throughout the simulation period. Throughout the simulation period, net debt held by the public under Model 2-100% is considerably higher than under the tax increase benchmark and, to a lesser extent, the benefit reduction benchmark.
Net debt held by the public under Model 2-0% is slightly higher than under the benefit reduction benchmark and much higher than under the tax increase benchmark until the end of the simulation period. Under Model 2, national saving would increase primarily due to the improved fiscal position of the government resulting from the proposed benefit reductions. The redirection of payroll taxes under the IA option would increase private saving and decrease government saving, with no net effect on national saving. Model 2 restores 75-year actuarial balance with either no participation or universal participation in the IA option. The trust fund ratio would be rising at the end of the 75-year period under both Model 2-0% and Model 2-100%. Model 2-0% requires no additional revenue; IAs are financed as a redirection of payroll taxes. General revenue transfers would be used to keep the OASDI trust funds solvent under Model 2-100%. Model 2 does not create any new contingent liabilities. Individuals bear the risk of IA investment performance. Model 2 contains no new “safety valves” to control future program growth. Analysis is limited to first-order effects on saving; effects on saving behavior in response to specific reform provisions are not considered given the lack of expert consensus. This criterion evaluates the balance struck between the twin goals of income adequacy (level and certainty of benefits) and individual equity (rates of return on individual contributions). To what extent does the proposal: Change scheduled benefits for current and future retirees? Maintain benefits for low-income workers who are most reliant on Social Security? Maintain benefits for the disabled, dependents, and survivors? Ensure that those who contribute receive benefits? Provide higher replacement rates for lower income earners? Expand individual choice and control over program contributions? Increase returns on investment? Improve intergenerational equity? Our distributional analysis examined: – Median monthly benefits for the 1955, 1970, and 1985 birth cohorts. – Median monthly benefits by benefit quintile. – Distribution of benefits within each cohort. Model 2: Maintains the current benefit structure for current and near retirees. Reduces OASDI defined benefits for new retirees, survivors, dependents, and disabled workers starting in 2009 by altering the benefit formula. – Slows growth in benefits by reducing PIA formula factors by real wage growth. This essentially increases benefit levels across generations according to price growth (absolute terms) rather than wage growth (relative terms). – For those who participate in the individual accounts, there is a further offset based on the hypothetical account accumulation, where contributions accrue at a real rate of 2 percent. Increases benefits for certain widow(er)s and low-income earners. PIA formula factor reductions and the benefit offset disproportionately decrease replacement rates. However, minimum benefit guarantees increase replacement rates for workers who qualify. Therefore, the overall progressivity of the system is unclear given these provisions and the uncertainty of market returns, the magnitude of participation, and the characteristics of future participants. Overview of Model 2 Cohort Results: Across cohorts, median monthly benefits are higher than under the benefit reduction benchmark for persons who participate in an individual account (see Figure 6). However, benefit levels received without accounts fall below the benefit reduction benchmark over time.
This is due to the timing and structure of the benefit reductions under both the without-accounts scenario and the benefit reduction benchmark (see Figure 6). The gap in benefits between the without-accounts scenario on the one hand and the tax increase benchmark and with-accounts scenario on the other grows across cohorts (see Figure 6). Median monthly benefits for the 1955 and 1970 cohorts are maintained above the benefit reduction benchmark for the lowest quintile regardless of participation in individual accounts, likely due to the enhanced benefit for full-time “minimum wage” workers (see Figure 7). However, participation in the individual accounts may provide a benefit level even higher than the enhanced benefit for the lowest quintile since, over time, fewer workers will receive this enhanced benefit as wages are assumed to outpace inflation in the future. Comparing median monthly benefits across cohorts in the lowest and highest quintiles indicates that the enhanced benefit for full-time “minimum wage” workers and individual accounts maintain benefits above the tax increase benchmark only for those in the lowest quintile in the 1955 and 1985 birth cohorts (see Figures 7 and 8). – A number of people with accounts fare better than under the tax increase benchmark, and this number increases (19 to 40 percent) across cohorts (see Figure 11). – A minority of people without accounts fare better than under the tax increase benchmark, and this minority declines over time (9 to 1 percent). Although varying the rates of return does not alter the findings substantially for older cohorts, the effects of varying the real rate of return by plus or minus 1 percent increase over time. – Compared to the benefit reduction benchmark, the 1955 cohort has a ±2% change in its distribution from a ±1% change in the real rate of return, whereas the 1985 cohort has about a ±11% change in its distribution (see Figure 12). – Compared to the tax increase benchmark, the 1955 cohort has approximately a ±3% change in its distribution from a ±1% change in the real rate of return, whereas the 1985 cohort has about a ±15% change in its distribution (see Figure 13). (A simple accumulation sketch following this discussion illustrates why this sensitivity grows across cohorts.) Median monthly benefits are maintained above the benefit reduction benchmark for the lowest quintile regardless of participation in individual accounts, likely due to the enhanced benefit for full-time “minimum wage” workers (see Figure 14). – This enhanced benefit could apply to low-earning disabled workers who work most of their career prior to becoming disabled. In our sample the average age of disability onset is 55. Participation in the individual accounts is also important for disabled workers, especially those in the later cohorts and in the upper quintiles (see Figures 14 and 15). However, the earlier a worker becomes disabled, the fewer years they contribute to their account and the smaller their account balance. Since disabled workers do not have access to their accounts until conversion to retired worker benefits at the normal retirement age (NRA), benefit levels before conversion would be in line with benefit levels for those without individual accounts. Benefit levels for disabled workers may be higher than those of retired workers since disabled workers are entitled to benefits at earlier ages, and thus the reductions in their PIA factors would be smaller. This may create an incentive for older workers to apply for disability benefits.
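This growing sensitivity has a simple mechanical driver: later cohorts contribute to accounts for more working years, so the account balance, and hence the portion of the total benefit exposed to market returns, is a larger share of the whole. The Python sketch below illustrates the effect with a fixed-return accumulation; it is not the GEMINI model, and the contribution level, per-cohort horizons, and base real return are illustrative assumptions, not values from our modeling.

def account_balance(years: int, real_return: float, contribution: float = 1000.0) -> float:
    """Accumulate level annual contributions at a constant real return."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + real_return) + contribution
    return balance

BASE_RETURN = 0.046  # assumed mean real portfolio return (illustrative)

# Assumed contribution horizons: accounts begin mid-career for the 1955 cohort
# but span nearly a full career for the 1985 cohort.
for cohort, years in [("1955", 10), ("1970", 25), ("1985", 40)]:
    base = account_balance(years, BASE_RETURN)
    high = account_balance(years, BASE_RETURN + 0.01)
    low = account_balance(years, BASE_RETURN - 0.01)
    spread = (high - low) / (2 * base)  # relative swing per +/-1 point of return
    print(f"{cohort} cohort ({years} contribution years): +/-{spread:.0%} around the base balance")

Even this stylized calculation reproduces the pattern in Figures 12 and 13: the same one-point swing in the real return moves the 40-year accumulation several times as much, in relative terms, as the 10-year accumulation.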
Funding for the transition from a pay-as-you-go system to a partially funded system would be handled by transfers from the General Fund of the Treasury and could be repaid when the trust funds experience cash flow surpluses. An education program will be necessary to explain the changes in the benefit structure and to avoid expectation gaps. – The benefit offset feature and the financing structure of the system may be difficult to explain, which increases the importance of an education program. An education program will also be necessary to inform OASDI-eligible workers about making sound investment decisions regarding diversification, risk, and participation. The Commission did not explicitly address the costs of an education program. It is unclear how the Commission’s proposed account splitting at divorce would fit into divorce law. The proposal establishes a Governing Board to administer the individual accounts, which is intended to limit the potential for politically motivated investing. The board’s duties include selecting the available funds and providing financial information to individuals. The design of the voluntary individual account feature places an additional administrative burden on SSA. Specifically, the hypothetical account, benefit offset, inheritance feature, and account splitting at divorce would create additional responsibilities for SSA. There is not enough information to estimate administrative costs; such costs are affected by the level of participation in the individual accounts. However, the Commission believes that individual accounts can be administered at a low cost since it envisions the system being structured similarly to the TSP. There is not enough information to address how annuities and annuity pricing would be handled; therefore, we used the same assumptions as the SSA Actuaries and did not quantitatively analyze their effect on benefit levels. Long-term simulations provide illustrations, not precise forecasts, of the relative fiscal and economic outcomes associated with alternative policy paths. Long-term simulations are useful for comparing the potential outcomes of alternative policies within a common economic framework over the long term. – Recognizing the inherent uncertainties of long-term simulations, we have generally chosen conservative assumptions, such as holding interest rates and total factor productivity growth constant. Variations in these assumptions generally would not affect the relative outcomes of alternative policies. – The model simulates the interrelationships between the budget and the economy over the long term and does not reflect their interaction during short-term business cycles. Long-term simulations are not predictions of what will happen in the future. In reality, policymakers likely would take action before the occurrence of the negative out-year fiscal and economic consequences reflected in some simulated fiscal policy paths. Reform proposal cost and income estimates are from SSA’s Office of the Chief Actuary. – For each proposal, the OASDI cost estimate reflects all proposed reforms affecting benefits. These include changes in the index used to adjust initial benefit levels, benefit reductions meant to offset individual accounts, and other proposed changes. – For each proposal, the OASDI income estimate reflects such elements as transfers from the general fund to the trust funds and amounts redirected from the payroll tax used to establish individual accounts.
Model inputs: Social Security spending (OASDI); Medicare spending (HI and SMI); nonfederal saving (percent of GDP), defined as gross saving of the private sector and the state and local government sector; net foreign investment (percent of GDP); inflation (GDP price index and CPI); and the interest rate (average on the national debt). Assumptions include: the 2001 Social Security Trustees’ intermediate projections; the 2001 Medicare Trustees’ intermediate assumption that per-enrollee Medicare spending grows with GDP per capita plus 1 percentage point; CBO’s July 2002 long-term assumption that per-enrollee Medicaid spending grows with GDP per capita plus 1 percentage point; CBO’s August 2002 baseline through 2012, thereafter increasing at the rate of economic growth (i.e., remaining constant as a share of GDP); CBO’s August 2002 baseline through 2012, adjusted for the 2001 Social Security Trustees’ inflation assumptions, thereafter increasing at the rate of economic growth; CBO’s August 2002 baseline through 2012, thereafter remaining constant at 20.5 percent of GDP (CBO’s projection in 2012); and nonfederal saving increasing gradually over the first 10 years to 17.5 percent of GDP (the average nonfederal saving rate from 1992-2001). – The tax increase (maintain benefits) benchmark increases the payroll tax once and immediately by the amount of the OASDI actuarial deficit as a percent of payroll so that benefits received under the current system can continue to be paid throughout the projection period. This spreads the tax burden evenly across generations. This could also be accomplished by general revenue transfers; for our analysis, we assumed that it would be implemented as a tax increase to maintain the relationship between contributions and benefits. – The benefit reduction (maintain taxes) benchmark reduces the formula factors by equal percentage-point reductions (0.319 each year for 30 years) for those newly eligible in 2005, subjecting earnings across all segments of the PIA formula to the same reduction. Model 3 is expected, on average, to provide higher initial benefits than Model 2 when compared to the benchmarks, due to the required additional 1% contributions to the individual accounts for those who choose to participate. For additional information regarding the benchmarks, see U.S. General Accounting Office, Social Security: Program’s Role in Helping Ensure Income Adequacy, GAO-02-62 (Washington, D.C.: Nov. 30, 2001). To the extent that households split account distributions, our results may over- or understate some individuals’ benefit levels. Since we were interested in the effect that reform has on certain birth cohorts, we chose to focus on individuals because household composition can vary across birth cohorts. Analysis was performed using microsimulation with stochastic elements, including uncertain asset returns, inflation, and wage growth; these variables varied across time and individuals. The nominal mean rates of return used in the model for the individual accounts are 6.3% for Treasuries, 6.8% for corporate bonds, and 10% for equities. These assumptions are consistent with those used in the SSA Actuaries’ scoring. All individuals are assumed to annuitize their entire account balance at retirement by purchasing a fixed annuity; our procedure for annuitization is consistent with that used by the SSA Actuaries. Each individual in each of the cohorts retires at age 65. This can have implications for Model 3 results since Model 3 modifies the actuarial reduction and increment factors. Since access to accounts for disabled workers occurs at the NRA, benefit levels for all beneficiaries are reported at age 67.
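To make the account-versus-offset mechanics concrete, the following minimal sketch simulates one stylized worker under the Model 2 rules described above: contributions accumulate at stochastic market returns, while the hypothetical account that drives the defined-benefit offset grows at the 2 percent real rate (2.5 percent under Model 3), and both are annuitized at retirement. This is not the SSASIM-GEMINI model; the portfolio mix, volatilities, inflation rate, and annuity factor are assumptions made for this sketch only.

import random

# Illustrative parameters. CONTRIBUTION and OFFSET_REAL_RATE follow the report's
# description of Model 2 (up to $1,000 per year; offset compounded at 2 percent
# real). Mean nominal returns follow the SSA Actuaries' scoring assumptions
# quoted above; volatilities, inflation, the 50/50 mix, and the annuity factor
# are assumed here for illustration.
YEARS = 40                # working years of account contributions (assumed)
CONTRIBUTION = 1000.0     # annual contribution (Model 2 maximum)
OFFSET_REAL_RATE = 0.02   # hypothetical-account rate (2.5% under Model 3)
INFLATION = 0.03          # assumed CPI, to convert nominal returns to real
ANNUITY_FACTOR = 0.075    # assumed annual payout per $1 annuitized at 65

def real_return() -> float:
    """One year's real return on an assumed 50/50 equity/corporate-bond mix."""
    equities = random.gauss(0.100, 0.17)  # nominal mean 10%, assumed 17% s.d.
    bonds = random.gauss(0.068, 0.06)     # nominal mean 6.8%, assumed 6% s.d.
    nominal = 0.5 * equities + 0.5 * bonds
    return (1.0 + nominal) / (1.0 + INFLATION) - 1.0

def median_monthly_gain(trials: int = 10_000) -> float:
    """Median monthly gain from choosing an account: the annuitized actual
    balance minus the annuitized hypothetical account behind the offset."""
    gains = []
    for _ in range(trials):
        actual = hypothetical = 0.0
        for _ in range(YEARS):
            actual = actual * (1.0 + real_return()) + CONTRIBUTION
            hypothetical = hypothetical * (1.0 + OFFSET_REAL_RATE) + CONTRIBUTION
        gains.append((actual - hypothetical) * ANNUITY_FACTOR / 12.0)
    gains.sort()
    return gains[len(gains) // 2]

if __name__ == "__main__":
    print(f"Median monthly gain from account participation: ${median_monthly_gain():,.0f}")

Under these assumptions, participation gains whenever realized real returns exceed the 2 percent offset rate, which is consistent with the report's finding that median benefits are higher with accounts while a minority of poor-return outcomes fare worse.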
Voluntary individual accounts in exchange for a benefit reduction. – An additional contribution equal to 1 percent of an individual’s taxable payroll is required to participate, with a partial subsidy for lower wage workers as a refundable tax credit. – Account contribution equal to 2.5 percent of payroll tax, up to an annual maximum of $1,000, redirected from the payroll tax. – At retirement, a reduction to the defined benefit based on the amount of account contributions (not including the additional 1 percent contribution) compounded at a real interest rate of 2.5 percent. The maximum contribution amount is indexed annually to wage growth. Changes to defined benefits beginning in 2009: – Initial benefit reduced from currently scheduled by indexing to expected gains in life expectancy. – Initial benefits for upper income earners reduced: from 2009 to 2028, the third bend point factor (applying to the highest earnings) is gradually reduced from 15 to 10 percent. – Initial benefits reduced for those who retire early and increased for those who delay retirement. A new enhanced benefit for full-time minimum wage workers with more than 20 years of work: – Accelerated growth in initial benefits from 2009 to 2018. – By 2018, a minimum wage worker with 30 years of program coverage would receive a benefit equal to 100 percent of the poverty level; thereafter, benefits would be expected to increase about 0.5 percent per year faster than growth in the CPI and the poverty level. – Increase in widow(er) benefit up to 75 percent of the combined spousal benefit (up to average benefit levels). Additional financing from permanent dedicated revenue sources and general revenue transfers. The minimum wage is the current Fair Labor Standards Act minimum of $5.15 an hour but is assumed to grow with the Social Security average wage index. Figure A-1: Compared to the baseline extended, as a share of GDP unified surpluses are smaller and unified deficits are larger under Model 3-100% until the 2050s; thereafter, unified deficits are smaller. Under Model 3-0%, beginning around 2015, projected unified surpluses are higher and projected unified deficits are lower than under the baseline extended throughout the simulation period. Greater participation in IAs results in lower surpluses/higher deficits over the simulation period. Throughout the simulation period, unified surpluses are considerably lower and unified deficits are considerably higher under Model 3-100% than under the tax increase benchmark and, to a lesser extent, the benefit reduction benchmark. Under Model 3-0%, unified surpluses are lower and unified deficits are higher than under the tax increase benchmark throughout the simulation period, and beginning around 2010, unified surpluses are lower and unified deficits are higher than under the benefit reduction benchmark. Figure A-2: Compared to the baseline extended, net debt held by the public as a share of GDP is higher under Model 3-100% until about 2060; thereafter, debt held by the public is lower. Under Model 3-0%, net debt held by the public would be reduced compared to the baseline extended beginning about 2015 through the end of the simulation period. Greater participation in the IAs results in higher net debt held by the public throughout the simulation period. Throughout the simulation period, net debt held by the public under Model 3-100% is considerably higher than under the tax increase benchmark and the benefit reduction benchmark.
Net debt held by the public under Model 3-0% is higher than under the benefit reduction benchmark and much higher than under the tax increase benchmark through the end of the simulation period. The government’s cash requirement includes the amount of cash required to pay defined benefits and to redirect payroll taxes to individual accounts. Under Model 3-100%, the government’s cash requirement would be greater than under both the baseline extended and tax increase benchmarks in the near term, by more than 15 percent in 2010. Beginning in the 2030s, less cash would be required for Model 3-100% than under the baseline extended and tax increase benchmarks. In 2075, Model 3-100% would require about 30 percent less cash than the baseline extended and tax increase benchmarks. The cash requirement for Model 3-0% would be slightly greater than under both the baseline extended and tax increase benchmarks until after 2010. Thereafter, Model 3-0% would require less cash than both the baseline extended and tax increase benchmarks; in 2075, about 25 percent less cash would be required for Model 3-0%. The government’s cash requirement for Model 3 would be greater than under the benefit reduction benchmark for most of the simulation. In 2075, Model 3-0% would require about the same amount of cash as the benefit reduction benchmark, and Model 3-100% would require over 5 percent less cash than the benefit reduction benchmark. In the near term, the greater the individual account participation, the more cash required. Over the long term, however, greater individual account participation would reduce the government’s cash requirements. In 2015, total benefit payments (Social Security benefits plus individual account disbursements) as a share of GDP under Model 3 would be slightly (1 percent) lower than under the baseline extended or tax increase benchmark and about 3 percent higher than under the benefit reduction benchmark. In 2030, total benefit payments as a share of GDP under Model 3-100% would be nearly 4 percent lower than under the baseline extended or tax increase benchmark but 8 percent higher than under the benefit reduction benchmark. Under Model 3-0%, benefit payments would be about 7 percent lower than under the baseline extended or tax increase benchmark but nearly 5 percent higher than under the benefit reduction benchmark. In 2075, total benefit payments as a share of GDP under Model 3-100% would be the same as under the baseline extended or tax increase benchmark and nearly one-third higher than under the benefit reduction benchmark. By 2075, the difference in total benefit payments between Model 3-100% and Model 3-0% becomes pronounced, with payments under Model 3-0% about the same as under the benefit reduction benchmark but only three-fourths the level under Model 3-100% or the baseline extended or tax increase benchmark. National saving would increase primarily due to the improved fiscal position of the government resulting from the proposed benefit reductions. The redirection of payroll taxes under the IA option would increase private saving and decrease government saving, with no net effect on national saving. The required 1 percent additional contribution would increase personal saving, although the progressive subsidy would reduce government saving and reduce any net increase in national saving. Model 3 restores 75-year actuarial balance with either no participation or universal participation in the IA option.
The trust fund ratio at the end of the 75-year period is declining by about 3 percent a year under Model 3-0% but rising by about 8 percent a year under Model 3-100%. Model 3 requires new dedicated revenue from an unspecified source. The IAs are financed as a redirection of payroll taxes. In addition to the new dedicated revenue, Model 3-100% requires general revenue transfers to keep the OASDI trust funds solvent. Model 3 does not create any new contingent liabilities. Individuals bear the risk of personal account investment performance. Indexing initial benefits to increases in life expectancy and updating the indexation every 10 years to reflect actual increases could help guard against unanticipated growth in lifetime benefits. Model 3: Maintains the current benefit structure for current and near retirees. Reduces OASDI defined benefits for new retirees, survivors, dependents, and disabled workers starting in 2009. – Benefits are reduced by indexing initial benefit calculations to longevity rather than wages. – Gradually reduces the third PIA formula factor. – For those who participate in the individual accounts, there is a further offset based on the hypothetical account accumulation, where contributions accrue at a real rate of 2.5 percent. – Increases the actuarial reduction for early retirement. Increases benefits for certain beneficiaries, including some widow(er)s and low-income earners, and increases the delayed retirement credit starting in 2010. PIA formula factor reductions and the benefit offset disproportionately decrease replacement rates. However, minimum benefit guarantees increase replacement rates for workers who qualify. Therefore, the overall progressivity of the system is unclear given these provisions and the uncertainty of market returns, the magnitude of participation, and the characteristics of future participants. Overview of Model 3 Cohort Results: Across cohorts, median monthly benefits are higher than under the benefit reduction benchmark regardless of participation in individual accounts (see Figure A-5). The gap in benefits between the without-accounts scenario on the one hand and the tax increase benchmark and with-accounts scenario on the other grows across cohorts (see Figure A-5). Median monthly benefits are maintained above the benefit reduction benchmark for the lowest quintile regardless of participation in individual accounts, likely due to the enhanced benefit for full-time “minimum wage” workers (see Figure A-6). However, participation in the individual accounts may provide a benefit level even higher than the enhanced benefit for the lowest quintile since, over time, fewer workers will receive this enhanced benefit as wages are assumed to outpace inflation in the future (see Figure A-6). Comparing median monthly benefits across cohorts in the lowest and highest quintiles indicates that the enhanced benefit for full-time “minimum wage” workers and individual accounts maintain benefits above the tax increase benchmark only for those in the lowest quintile and the later cohorts in the highest quintile (see Figures A-6 and A-7). The risk of participating decreases across cohorts when comparing scenarios with and without accounts, primarily because of the lengthening of the investment horizon. For example, 86 percent of the 1955 cohort would gain by choosing an individual account, as would 93 and 95 percent of the 1970 and 1985 cohorts (see Figure A-8). Of those in the 1955 cohort who gained, the median gain was $50 per month in 2001 dollars, while the median loss was about $4 per month among those who did not gain.
For the 1970 and 1985 cohorts, the median gains were $223 and $540 per month in 2001 dollars, while the median losses were $25 and $51, respectively. Regardless of whether an account is chosen, a number of people fare better when compared to the benefit reduction benchmark, primarily because the benchmark’s PIA formula reductions are initially deeper than Model 3’s and because of the additional 1% contribution (see Figure A-9). A majority of persons with accounts fare better than under the benefit reduction benchmark, ranging from 95 to 99 percent across cohorts. Similarly, the share of people without accounts who fare better than under the benefit reduction benchmark ranges from 93 to 97 percent across cohorts. A minority of persons (1 to 5 percent) with accounts fare worse than under the benefit reduction benchmark, as do 3 to 7 percent of persons without individual accounts (see Figure A-9). – Except for the 1955 cohort, a majority of people with accounts fare better than under the tax increase benchmark, and this share increases (41 to 67 percent) across cohorts (see Figure A-10). – A minority of people without accounts fare better than under the tax increase benchmark, and this minority declines (9 to 1 percent) over time (see Figure A-10). Although varying the rates of return does not alter the findings considerably for older cohorts, the effects of varying the real rate of return by plus or minus 1 percent increase over time. The increased volatility is likely due to the additional 1% contribution. – Compared to the benefit reduction benchmark, the 1955 cohort has about a ±1% change in its distribution, whereas the 1985 cohort has about a ±2% change in its distribution (see Figure A-11). – Compared to the tax increase benchmark, the 1955 cohort has approximately a ±5% change in its distribution, whereas the 1985 cohort has approximately a ±14% change in its distribution (see Figure A-12). Median monthly benefits are maintained above the benefit reduction benchmark for the lowest quintile regardless of participation in individual accounts, likely due to the enhanced benefit for full-time “minimum wage” workers (see Figure A-13). – This enhanced benefit could apply to low-earning disabled workers who work most of their career prior to becoming disabled. In our sample the average age of disability onset is 55. Participation in the individual accounts is also important for disabled workers, especially those in the later cohorts and in the upper quintiles. However, the earlier a worker becomes disabled, the fewer years they contribute to their account and the smaller their account balance (see Figures A-13 and A-14). Since disabled workers do not have access to their accounts until conversion to retired worker benefits at the NRA, benefit levels before conversion would be in line with benefit levels for those without individual accounts. Benefit levels for disabled workers may be higher than those of retired workers since disabled workers are entitled to benefits at earlier ages, and thus the reductions in their PIA factors would be smaller. This may create an incentive for older workers to apply for disability benefits. The models would provide workers some investment choice and control, subject to certain limitations. This might enable individuals to earn a higher rate of return on their contributions, with an increased measure of risk, primarily that the expected return may not be realized.
May improve intergenerational equity through the move to advance funding of Social Security and the inheritance feature of individual accounts. May make determining the rate of return difficult, as the link between contributions and benefits becomes unclear due to general revenue transfers; thus, we did not quantitatively assess the equity effects of the models. Funding for the transition from a pay-as-you-go system to a partially funded system would be handled by transfers from the General Fund of the Treasury and could be repaid when the trust funds experience cash flow surpluses. An education program will be necessary to explain the changes in the benefit structure and to avoid expectation gaps. – The benefit offset feature and the financing structure of the system may be difficult to explain, which increases the importance of an education program. An education program will also be necessary to inform OASDI-eligible workers about making sound investment decisions regarding diversification, risk, and participation. The Commission did not explicitly address the costs of an education program. It is unclear how the Commission’s proposed account splitting at divorce would fit into divorce law. The proposal establishes a Governing Board to administer the individual accounts, which is intended to limit the potential for politically motivated investing. The board’s duties include selecting the available funds and providing financial information to individuals. The design of the voluntary individual account feature places an additional administrative burden on SSA. Specifically, the hypothetical account, benefit offset, inheritance feature, and account splitting at divorce would create additional responsibilities for SSA. There is not enough information to estimate administrative costs; such costs are affected by the level of participation in the individual accounts. However, the Commission believes that individual accounts can be administered at a low cost since it envisions the system being structured similarly to the TSP. There is not enough information to address how annuities and annuity pricing would be handled; therefore, we used the same assumptions as the SSA Actuaries and did not quantitatively analyze their effect on benefit levels. Social Security is an important social insurance program affecting virtually every American family. It represents a foundation of the nation's retirement income system and provides millions of Americans with disability insurance and survivors' benefits. Over the long term, as the baby boom generation retires, Social Security's financing shortfall presents a major solvency and sustainability challenge. Numerous reform proposals have been put forward in recent years, and in December 2001 a commission appointed by the President presented three possible reform models. Senator Breaux, Chairman of the Senate Special Committee on Aging, asked GAO to use its analytic framework to evaluate the Commission's models. This framework consists of three criteria: (1) the extent to which a proposal achieves sustainable solvency and how it would affect the economy and the federal budget; (2) the balance struck between the twin goals of income adequacy and individual equity; and (3) how readily such changes could be implemented, administered, and explained to the public. Applying GAO's criteria to the Commission models highlights key options and trade-offs between efforts to achieve sustainable solvency and efforts to maintain adequate retirement income for current and future beneficiaries.
For example, the Commission's Model 2 proposal reduces Social Security's defined benefit from currently scheduled levels through various formula changes, provides enhanced benefits for low-wage workers and spousal survivors, and adds a voluntary individual account option in exchange for a benefit reduction. Model 2 would provide for sustainable solvency and reduce the shares of the federal budget and the economy devoted to Social Security compared to currently scheduled benefits (tax increase benchmark), regardless of how many individuals selected accounts. However, with universal account participation, general revenue funding would be needed for about 3 decades. GAO's analysis of benefit adequacy and equity issues relating to Model 2 found the following: (1) Across cohorts, median monthly benefits for those choosing accounts are always higher, despite a benefit offset, than for those who do not, and this gap grows over time. In addition, benefits assuming universal account participation are higher than payment of a defined benefit generally corresponding to an amount payable from future Social Security trust fund revenues (benefit reduction benchmark), although benefits received by those without accounts fall below that benchmark over time. (2) For the lowest quintile, median monthly benefits with universal participation in the accounts tend to be higher than GAO's benefit reduction benchmark, likely due to the enhanced benefit for full-time "minimum wage" workers; this pattern becomes more pronounced across the cohorts analyzed. (3) Regardless of whether an account is chosen, many people could receive monthly benefits under Model 2 that are higher than the benefit reduction benchmark, though a minority could fare worse. Some people could also receive a benefit greater than under the tax increase benchmark, although a majority could fare worse. Benefits for those choosing individual accounts will be sensitive to the actual rates of return earned by those accounts. Adding individual accounts would require new administrative structures, adding complexity and cost. Public education will be key to helping beneficiaries make sound decisions about account participation, investment diversification, and risk. Finally, any Social Security reform proposal must also be looked at in the context of both the program and the long-term budget outlook. A funding gap exists between promised and funded Social Security benefits; although the shortfall will not occur for a number of years, it is significant and will grow over time. In addition, GAO's long-term budget simulations show that difficult choices will be required to reconcile a large and growing gap between projected revenues and spending, resulting primarily from known demographic trends and rising health care costs.
The idea that communication services should be available "so far as possible, to all the people of the United States" has been a goal of telecommunications regulation since Congress enacted the Communications Act of 1934. Although Lifeline was created in the mid-1980s to promote wireline telephone subscribership among low-income households, Congress codified the nation's commitment to universal service and made significant changes to universal service policy through the Telecommunications Act of 1996 (1996 Act). The 1996 Act provided explicit statutory support for federal universal service policy and directed FCC to establish a Federal-State Joint Board on Universal Service to make recommendations to FCC on implementing the universal service provisions of the 1996 Act. The 1996 Act also described universal service as an evolving level of telecommunications services that FCC should periodically review, taking into account advances in telecommunications and information technologies and services. To participate in Lifeline, households must either have an income that is at or below 135 percent of the Federal Poverty Guidelines or participate in one of several qualifying assistance programs. The qualifying programs include Medicaid; SNAP; Supplemental Security Income (SSI); Federal Public Housing Assistance (Section 8); Veterans Pension and Survivors Benefit; or tribal programs for those living on federally recognized tribal lands. Residents of tribal lands may also be eligible through additional tribal programs. Since the passage of the 1996 Act, FCC has taken actions aimed at increasing participation in Lifeline. For example, initially, to be a Lifeline provider, a telecommunications carrier had to use its own facilities or a combination of its own facilities and resale of another carrier's service. However, in 2005, FCC granted one carrier forbearance from that requirement. Then, in 2008, FCC approved that carrier, a non-facilities-based wireless provider, for the limited purpose of providing Lifeline service, which paved the way for other non-facilities-based wireless carriers to offer Lifeline service. After this approval, participation in Lifeline began to increase significantly: from mid-2008 to mid-2012, Lifeline enrollment increased from 6.8 million households to 18.1 million households, a 166 percent increase, and annual disbursements increased from $820 million in 2008 to $2.2 billion in 2012, a 167 percent increase. In November 1998, FCC changed the universal service structure in response to legal concerns raised by GAO about FCC's authority to create two independent corporations and Congress's directive that a single entity administer universal service support. FCC appointed an existing body, USAC, as the permanent administrator of the program and directed the Schools and Libraries Corporation and the Rural Health Care Corporation to merge with USAC by January 1, 1999. Prior to appointing USAC as the administrator of all universal service programs, FCC prepared and submitted a report to Congress, in response to congressional conference committee directions, proposing that USAC serve in this capacity. While Lifeline participation and disbursements increased rapidly from fiscal year 2008 through mid-2012, both declined after FCC began implementing the 2012 Reform Order in mid-2012.
As mentioned earlier, FCC adopted the 2012 Reform Order to strengthen internal controls, improve accountability, and explore the inclusion of broadband in the program through a pilot program. To reduce the number of ineligible subscribers in the program, the 2012 Reform Order adopted measures to check subscribers' initial and ongoing eligibility for Lifeline. The order required the creation of NLAD and required Lifeline providers to query this enrollment database to prevent duplicative enrollment. Lifeline peaked in 2012 at approximately 18.1 million participants and $2.2 billion in disbursements; FCC reported that disbursements fell by nearly $40 million in a single month after the eligibility verification requirements went into effect in June 2012. In the 4th quarter of calendar year 2016, Lifeline participation declined to approximately 12.3 million households, while disbursements declined to approximately $1.5 billion for the year. Figure 1 below shows Lifeline disbursements and participation from 2008 to 2016. The 1996 Act requires every telecommunications carrier providing interstate and international telecommunications services to contribute to federal universal service, unless exempted by FCC. According to the act, these contributions, or fees, are to be equitable and nondiscriminatory and are to be deposited into the USF. For calendar year 2014, approximately 3,100 of the 6,820 telecommunications providers that filed their revenues paid USF fees. The amount of contributions required from telecommunications carriers is determined each quarter, when FCC calculates the contribution factor based on the projected demands of the universal service programs and the projected contribution base. USAC then bills contributors based on this factor. As shown in figure 2, the USF contribution factor has increased 217 percent (approximately 12 percentage points) since 2000. In the 1st quarter of calendar year 2016, the USF contribution factor was 18.2 percent, but as of the 4th quarter it had dropped slightly to 17.4 percent. According to FCC's 2012 Further Notice of Proposed Rulemaking regarding the assessment and recovery of USF contributions, an impetus for the increased USF contribution factor is the decrease in assessable revenues. For example, competition in the interstate long-distance market, growth of wireless service, and bundling of service packages have led to decreases in assessable revenues. As the pool of contributors and assessable revenues has declined over the years, the USF contribution requirements for the remaining contributors have increased to cover the costs of administering the universal service programs. Carriers file projected revenue information on a quarterly basis, which is used to calculate the contribution factor for the forthcoming quarter, and carriers are then billed for contributions by USAC based on the quarterly contribution factor. Carriers generally pass their USF fee obligation on to their customers, typically in the form of a line item on their monthly telephone bill; carriers are thus able to recover the cost of their contributions to USAC on a monthly or quarterly basis using the money collected from customers. USAC uses USF contributions to pay for the universal service programs, including Lifeline. Lifeline providers currently receive a subsidy of $9.25 for every nontribal Lifeline customer that the provider claims is enrolled in Lifeline, based on the monthly or quarterly forms they submit to USAC.
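To make the quarterly arithmetic described above concrete, the following sketch (in Python) computes a contribution factor from a projected program demand and a projected contribution base, and then the maximum pass-through charge on a sample customer bill. The dollar figures are hypothetical placeholders chosen only for illustration, not actual USAC projections.

```python
# Illustrative sketch of the quarterly USF contribution-factor arithmetic.
# All dollar figures are hypothetical; actual values come from USAC's
# quarterly projections of program demand and carriers' revenue filings.

def contribution_factor(projected_program_demand: float,
                        projected_contribution_base: float) -> float:
    """Quarterly factor: projected universal service program demand
    divided by the projected base of assessable end-user revenues."""
    return projected_program_demand / projected_contribution_base

# Hypothetical quarter: $2.0 billion in projected demand against an
# $11.0 billion assessable revenue base.
factor = contribution_factor(2.0e9, 11.0e9)
print(f"Contribution factor: {factor:.1%}")   # ~18.2%

# A carrier that passes the fee through may charge a customer at most
# the factor times the assessable (interstate and international)
# portion of that customer's bill.
assessable_portion = 40.00                    # hypothetical monthly charges
max_pass_through = factor * assessable_portion
print(f"Maximum USF line item: ${max_pass_through:.2f}")
```

As the sketch suggests, a shrinking contribution base raises the factor even when program demand is flat, which is the dynamic FCC's 2012 notice describes.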
While the federal nontribal Lifeline subsidy amount per beneficiary is consistent across all Lifeline providers, the services provided to the Lifeline subscriber may vary depending on the state where the beneficiary lives and the service offerings of the Lifeline provider, as some states supplement the federal Lifeline subsidy with state funds. According to FCC officials, approximately 23 states currently offer additional funding for subscribers. For example, Lifeline providers in California receive $13.75 per month in addition to the $9.25 federal subsidy. As a result, some California Lifeline providers are able to provide subscribers with unlimited voice minutes and unlimited text messages, while subscribers receiving service from the same Lifeline provider in another state are eligible for up to 350 free minutes and unlimited text messages. In its 2016 Modernization Order, the commission addressed this variation to some extent by adopting minimum service standards for both voice and broadband services, to be implemented in a phased-in approach, which became effective in December 2016. See figure 3 for how USF money typically flows to support universal service programs, including Lifeline. Pursuant to advice provided by the Office of Management and Budget (OMB) in April 2000, FCC maintains USF funds outside of the U.S. Treasury. The private bank that holds the USF provides banking services for USAC, including investment management services with fees of approximately $1.5 million per year as of December 2015. Funds collected in excess of USAC's immediate requirements for cash on hand for all universal service programs are invested in U.S. Treasury securities. According to the most recent financial reports, as of September 2016 the USF account had approximately $9 billion in assets, and, as of December 2015, Lifeline had approximately $80 million in assets. As we described in previous work, the USF is a permanent indefinite appropriation. Although the Antideficiency Act applies to appropriated funds, Congress has exempted the USF from the act since 2004; the current exemption extends until December 31, 2017. FCC, USAC, and states, as well as Lifeline providers and their agents, all have roles and responsibilities in Lifeline. At the federal level, FCC is responsible for setting policy, making and interpreting rules, providing oversight, and, in certain states, designating carriers as Eligible Telecommunications Carriers (ETC), which are companies eligible to receive universal service support funding, including Lifeline, and are generally referred to in this report as Lifeline providers. USAC manages the daily operations of Lifeline, including collecting USF fees, disbursing payments, auditing USF recipients and contributors, and reporting to FCC. At the state level, public-utility commissions can increase the scope of Lifeline in their states by contributing additional financial support for Lifeline recipients. States can also play a role in Lifeline enrollment, either by accepting applicants directly or by giving Lifeline providers access, for the purposes of verifying eligibility, to information on enrollment in the programs through which households qualify for Lifeline, since this information is generally housed at the state level. To receive Lifeline disbursements, carriers must be designated as ETCs by state public-utility commissions or FCC.
State public-utility commissions have the primary responsibility for designating carriers as ETCs; however, where a telecommunications carrier is not subject to the jurisdiction of a state commission, FCC may designate the carrier as an ETC. ETCs participating as Lifeline providers are generally responsible for verifying applicants' eligibility for Lifeline, advertising the availability of the program, submitting forms for reimbursement, and making annual eligibility recertifications. As of the fourth quarter of 2016, there were 2,079 ETCs. Figure 4 is a graphical representation of the organizational structure and corresponding responsibilities of the different parties involved in the Lifeline program. FCC has called for program evaluations in the past to review the administration of universal service generally, including Lifeline, but has not completed such evaluations. For example, FCC specified that it would review USAC 1 year after USAC was appointed as the permanent administrator to determine whether the universal service programs were being administered effectively; this review was never done. In 2005, FCC awarded a contract to the National Academy of Public Administration to study the administration of the USF programs generally, examine the trade-offs of continuing with the current structure, and identify ways to improve the oversight and operation of universal service programs. However, FCC officials told us that FCC subsequently terminated the contract, and the study was not conducted. In March 2015, we found that FCC had not evaluated Lifeline's effectiveness in achieving its performance goals of ensuring the availability of voice service for low-income Americans while minimizing the burden on those who contribute to the USF. Specifically, we reported that, according to FCC officials, FCC had not evaluated the extent to which Lifeline has contributed to narrowing the gap in penetration rates (the percentage of households with telephone service) between low-income and non-low-income households, and at what cost. We therefore recommended, and FCC agreed, that FCC conduct a program evaluation to determine the extent to which Lifeline is efficiently and effectively reaching its performance goals. Our 2015 report also described the results of two studies that FCC provided to us and that had evaluated the impact of Lifeline. These studies suggested the program may be an inefficient and costly mechanism for increasing telephone subscribership; the conclusions of both studies suggested that many low-income households would likely subscribe to telephone service in the absence of the Lifeline subsidy. As we reported in 2015, FCC officials stated that the structure of the program made it difficult for the commission to determine causal connections between the program and the number of individuals with telephone access. In particular, FCC officials noted that because Lifeline has existed since the 1980s, it is difficult to compare results from the program to results in the absence of the program. We also noted in our 2015 report that several factors may alter how many people sign up for Lifeline benefits. For example, changes to income levels and prices have increased the affordability of telephone service, and technological improvements, such as mobility of service, have increased the value of telephone service to households. Our current work raises additional questions about Lifeline's effectiveness in meeting its program goals.
Specifically: Lifeline participation rates are low compared to the percentage of low-income households that pay for phone service. According to FCC, the participation rate shows that millions of Lifeline-eligible households are obtaining voice service without Lifeline. FCC's most recent monitoring report estimated that in 2015 approximately 96 percent of low-income households that would be eligible for Lifeline based on income had phone service. However, it appears that the majority of those low-income households are receiving phone service outside of Lifeline. Specifically, USAC reports that there were at least 38.9 million households in the states and the District of Columbia that were eligible for Lifeline as of October 2015, and only 12.5 million, or 32 percent, were enrolled in the program. Additionally, FCC does not know how many of the 12.3 million households receiving Lifeline as of December 2016 also have non-Lifeline phone service (for which they pay out of pocket) along with their Lifeline benefit. Without knowing whether participants are using Lifeline as a primary or secondary phone service, it is difficult for FCC to determine whether it is achieving the goal of increasing telephone subscribership among low-income consumers while minimizing the USF contribution burden. FCC revamped Lifeline in March 2016 to focus on broadband adoption and generally phase out phone service, in part because FCC recognized that most eligible consumers have phones without Lifeline and in part to close the "digital divide" in broadband adoption between low-income households and the rest of the country. However, broadband adoption rates have steadily increased for the low-income population absent a Lifeline subsidy for broadband. The 2016 Modernization Order cites a June 2015 report from the Pew Research Center to show that there is a "digital divide," as low-income consumers adopt broadband at rates well below the rest of the country. However, that report also notes that the class-related gaps have shrunk dramatically in 15 years, as the most pronounced growth has come among those in lower-income households and those with lower levels of educational attainment. More recent analysis from the Pew Research Center shows that, after accounting for mobile data services, the share of individuals without Internet service has dropped from an estimated 48 percent in 2000 to 13 percent as of May 2016. Telecommunications providers began to address the "digital divide" in some capacity prior to the 2016 Modernization Order's effective date by offering their own low-cost Internet service to low-income households. We found that at least two companies, operating in a total of at least 21 states, have begun offering in-home, non-Lifeline wireline broadband service for less than $10 per month to individuals who participate in public-assistance programs such as SNAP, TANF, or public housing. These providers' roughly $10-per-month rate is less expensive than FCC's broadband reasonable-comparability cost benchmark of approximately $55 per month, which Lifeline subscribers would be paying for a similar level of service. FCC has recently taken some steps toward evaluating Lifeline. In June 2015, FCC solicited comments from the general public, citing our 2015 recommendation for a program evaluation.
Specifically, FCC asked whether it should change or modify the program goals and whether it was necessary to perform a program evaluation, and, if so, how best to conduct such an evaluation for Lifeline. In the 2016 Lifeline Modernization Order, which, among other things, revamped Lifeline to include broadband service in addition to voice service, FCC revised the program goals to explicitly include affordability for both services. Also as part of the 2016 order, FCC instructed USAC to hire an outside, independent, third-party evaluator to complete a program evaluation of Lifeline's design, function, and administration. The order stipulated that the outside evaluator must complete the evaluation and USAC must submit the findings to FCC by December 2020. According to GAO's Cost Estimating and Assessment Guide, to use public funds effectively, the government must meet the demands of today's changing world by employing effective management practices and processes, including the measurement of government program performance. Similarly, according to OMB guidance, it is incumbent upon agencies to use resources on programs that have been rigorously evaluated and determined to be effective, and to fix or eliminate those programs that have not demonstrated results. FCC expects Lifeline enrollment to increase as the program is expanded to include broadband service, and this expansion could carry with it increased risks of fraud, waste, and abuse. FCC acknowledged this potential risk in its discussion of a previous expansion of Lifeline, in which it expanded the program without sufficiently adjusting program rules to keep pace with new technologies, financial incentives, or the subsequent growth in the program. Similarly, our 2015 report found that when FCC expanded Lifeline to include wireless service without quantifying or estimating the potential cost increases, it contributed to significant increases in disbursements from 2008 to 2012. Therefore, completing the program evaluation as planned, and as we recommended, would help FCC determine whether Lifeline is meeting its stated goals of increasing telephone and broadband subscribership among low-income consumers while minimizing the burden on those who contribute to the USF. FCC and USAC have established financial controls for Lifeline, including obtaining and reviewing information about billing, collecting, and disbursing funds. They have also developed plans to establish other controls, such as moving USF funds currently held in a private bank account to the U.S. Treasury and establishing a national eligibility verifier (National Verifier) that Lifeline providers could use to determine the eligibility of applicants seeking Lifeline service. Weaknesses remain, however, including the lack of requirements to effectively control program expenditures above approved levels, concerns about the transparency of fees on customers' telephone bills, and a lack of FCC guidance that could result in Lifeline and other providers paying inconsistent USF contributions. USAC has established financial and management controls to obtain and review information to carry out its responsibilities with regard to billing, collection, and disbursement of funds for universal service programs, including Lifeline.
To that end, FCC and USAC developed a Service Provider and Billed Identification Number and General Contact Information Form (FCC Form 498) to collect required information, such as service-provider name, study area code (SAC), tax identification number, and contact information, from all ETCs, including Lifeline providers. This information serves as a key internal control for billing, collection, and disbursement operations. For example, all carriers participating in Lifeline are required to have a SAC, a unique company-specific six-digit number that identifies a carrier in a specific geographic area (e.g., a state or territory), and to have a unique FCC Form 498 ID. USAC takes steps when assigning a SAC to ensure that only valid Lifeline providers (new providers or existing providers beginning operations in a new geographic area) receive disbursements. According to USAC policy, before a SAC is issued, USAC officials review the ETC designation order and confirm with the state public service commission that the order is final and valid. USAC policy states this review is generally accomplished by locating the ETC designation order on the state public service commission's website, but USAC may also contact the public service commission directly with any questions about the order. As part of our undercover work, we tested this internal control over authorized payments by submitting fictitious documentation to USAC while posing as a Lifeline provider seeking a SAC designation to begin enrolling customers and collecting USF subsidies. The results of this test are illustrative rather than generalizable. USAC appropriately rejected our application, explaining that it was unable to confirm our ETC designation with the state that we claimed had approved us on our fabricated application. Moreover, USAC noted that there was no record that FCC had approved our fictitious company to provide Lifeline service. Once the SAC and FCC Form 498 ID are established and validated by USAC, Lifeline providers can begin providing services to qualified subscribers and seek reimbursement from USAC. Typically, Lifeline providers file their claims to USAC on a monthly or quarterly basis but have as long as 1 year from the respective filing period to file a revised claim. Currently, USAC calculates the amount owed to the Lifeline provider based on the provider's monthly or quarterly claims. USAC enhanced some of its internal controls to help prevent improper or potentially fraudulent payments as a result of potential risks we identified during the course of our work. Specifically, on the basis of our observations of how USAC enters and approves a Lifeline service provider and processes payments, we identified internal control weaknesses whereby a USAC employee could improperly use the system to create fraudulent payments. On the basis of our descriptions, USAC officials agreed that these risks existed and indicated they would take steps to mitigate them, as described below. Employee creates a fraudulent SAC and generates a disbursement: A policy exists to separate the roles of data entry and review among USAC employees charged with administering Lifeline. However, during our review we found a lack of controls that would separate these two functions and provide oversight of data-entry actions. For example, an employee could create a new SAC and then enter contact information and banking information for the SAC, and this action would not create an automatic notification to a reviewer or supervisor.
As a result, a lone employee could create a SAC and request a disbursement for it. To enhance controls, USAC officials said that, beginning in August 2015, reimbursement approvers began pulling an independent report from their system for new SACs receiving disbursements for the first time and comparing it to the supporting ETC-designation documentation obtained by an individual who does not have access to enter new SACs into the system. Employee uses an existing SAC that is not currently receiving disbursements to generate a disbursement: During our review we found that a lone USAC employee could change the banking and contact information associated with a SAC and then act as a reviewer to approve the changes without a separate reviewer being automatically notified. The employee could then request a disbursement for the FCC Form 498 ID and have it deposited into a different bank account. To enhance controls, USAC officials said that, beginning in August 2015, reimbursement approvers began generating an independent report from the system for SACs that are being paid after a prior FCC Form 497 entry of zero dollars, which occurs when a company has not filed for 6 months and confirms it has no subscribers, and reviewing the FCC Form 497 record to determine whether there was any suspicious activity requiring further validation. In addition, USAC officials told us they would update the user workflows and permissions for employees as part of a development effort that includes revisions to ETC filing procedures. According to USAC officials, the updated workflow requires that new FCC Form 498 ID numbers generated internally be reviewed and approved by a member of the Finance Management Team. According to USAC officials, these internal user workflow changes were implemented in May 2016. FCC maintains USF funds, whose net assets exceed $9 billion according to the most recent financial reports (as of September 2016), outside of the U.S. Treasury pursuant to OMB advice provided in April 2000. OMB had concluded that the USF does not constitute public money subject to the Miscellaneous Receipts Statute, 31 U.S.C. § 3302, which requires that money received for the use of the United States be deposited in the Treasury unless otherwise authorized by law. As such, USF balances are held in a private bank account. However, subsequent to this OMB advice, in February 2005 we reported that FCC should reconsider this determination in light of the status of universal service monies as federal funds. According to an internal memo from FCC's Managing Director in December 2014, OMB presented FCC with a Fiscal Year 2016 Budget Passback, a memo outlining various goals and objectives relating to USF reform, modernization, and oversight. The memo states that OMB observed that USF funds are federal resources and should enjoy the same rigorous management practices and regulatory safeguards as other federal programs. According to correspondence received from the FCC Chairman's Senior Legal Counsel, as of March 2017 FCC had decided to move the funds to the Treasury to address this situation. In addition to addressing any risks associated with having the funds outside the Treasury, FCC identified potential benefits of moving the funds.
For example, FCC explained that having the funds in the Treasury could allow USF payments to be used to offset other federal debts and would provide USAC with better tools for fiscal management of the funds, including access to real-time data and more accurate and transparent data. To accomplish this move, the correspondence notes, FCC has been coordinating with the Treasury and OMB to obtain a better understanding of the obstacles involved with moving the money to the Treasury. FCC's Office of the Managing Director prepared a preliminary project plan for moving the USF to the Treasury, with the goal of completing the transfer in approximately 1 year. If the USF were held in the Treasury, the Secretary of the Treasury would have more cash on hand, which could reduce the Treasury's need to borrow cash and its associated borrowing costs.

USAC Banking Arrangements: The Universal Service Administrative Company (USAC) contract with the bank that holds the Universal Service Fund (USF) includes terms for the compensation owed for services provided by the bank, bank-data retention requirements, and confidentiality agreements. For 2015, USAC paid the bank annual investment fees of approximately $1.5 million. A different bank provides banking services for USAC's administrative disbursements, such as payroll services, but there is no contractual arrangement between that bank and USAC. Federal Communications Commission (FCC) officials were unaware that USAC did not have a contract in place until we raised the matter with them in April 2015. Since 1999, this bank has managed USAC's administrative disbursements, totaling approximately $141 million in 2015, for an annual cost of approximately $22,000. According to FCC, fees paid to this bank are funded by credits from the USF, which are 0.2 percent of average collected balances, and there is no minimum balance requirement; therefore, there are no separate annual fees paid to the bank. Regardless, there is no contract in place stipulating the service agreement, terms and conditions, or associated costs. FCC officials told us they were aware of the banking service but that not having a contract in place was an oversight on the part of USAC and needs to be remedied. After we raised this issue, USAC solicited competitive proposals in October 2016 for these banking services and plans to put in place a contract to stipulate the agreement.

According to FCC, until the USF is moved into the Treasury, there are some oversight risks associated with holding the fund in a private account. Although USF funds are held by a bank in the name "Universal Service Administrative Company as Agent of the FCC for Administration of the FCC's Universal Service Fund," the contract governing the account does not provide FCC with authority to direct bank activities with respect to the funds in the event USAC ceases to be the administrator of the USF. FCC officials told us that although FCC is not party to the bank contract for the USF, they reviewed the statement of work for the contract and were involved in USAC's procurement process. After we raised this matter with FCC officials, beginning in November 2016, FCC sought to amend the contract between USAC and the bank to enable the bank to act on FCC instructions independent of USAC in the event USAC ceases to be the administrator. However, as of May 2017, the amended contract had not been signed.
There is a preliminary plan to move the USF funds to the Treasury, as well as a plan to amend the existing bank contract as an interim measure. However, several years have passed since this issue was brought to FCC's attention without corrective actions being implemented, and under FCC's preliminary plan it would not be until next year, at the earliest, that the funds would be moved to the Treasury. Further, in May 2017, while reviewing a draft of this report, a senior FCC official informed us that FCC had experienced some challenges associated with moving the funds to the Treasury, such as coordinating across the various entities involved, which raised some questions as to when, and perhaps whether, the funds would be moved. Until FCC finalizes and implements its plan and actually moves the USF funds, the risks that FCC identified will persist and the benefits of having the funds in the Treasury will not be realized. Currently, there are no uniform front-end eligibility checks available to USAC to ensure that Lifeline providers have accurately tallied the number of subscribers for whom they seek reimbursement. As a result, USAC primarily relies on a "pay-and-chase" model of oversight: making disbursements on the front end and relying on audits or reviews after the funds have been disbursed to check for noncompliance or improper payments. According to USAC officials, claims submitted by Lifeline providers are reviewed to help ensure accuracy, and the risks of overpayments are minimized prior to disbursement. However, these reviews are fairly limited. For example, USAC officials told us they compare provider disbursements, perform a trend analysis of disbursement amounts to search for suspicious claims, and initiate additional reviews when a claim appears irregular or exceeds a rate of increase that USAC officials have determined to be potentially risky. Additionally, USAC primarily relies on Payment Quality Assurance Program (PQA) and Beneficiary and Contributor Audit Program (BCAP) assessments, discussed later in this report, that occur after disbursements have been made to detect fraud. While USAC's payment-review processes may help minimize improper payments to some extent, USAC does not confirm subscriber eligibility and therefore is limited in its ability to know up front whether the forms Lifeline providers submit for payment are accurate and based on qualifying households receiving Lifeline service. GAO's Framework for Managing Fraud Risks in Federal Programs states that, to the extent possible, agencies should conduct data matching to verify key information, including self-reported data and information necessary to determine eligibility, prior to enrollment to avoid the "pay-and-chase" approach to risk management, which is typically a less cost-effective use of resources. To help determine eligibility prior to enrollment, FCC plans to create a third-party national eligibility verifier (National Verifier) to be launched nationwide by the end of 2019. The National Verifier is expected to interface with both state and federal eligibility databases to confirm eligibility. Currently, USAC and FCC are working to sign data-sharing agreements with state entities and federal agencies that hold relevant eligibility-data sources. If effectively implemented, the National Verifier, discussed in more detail later in this report, could help ensure eligibility verification and reduce the reliance on a pay-and-chase model of oversight.
However, on the basis of past experience, the feasibility of creating data-sharing agreements that would enable an automated means of confirming eligibility prior to disbursements is uncertain. Specifically, the 2012 Reform Order set a goal of developing an automated means of verifying Lifeline eligibility by the end of 2013 for, at a minimum, SNAP, Medicaid, and SSI, because these are the three most common programs through which subscribers qualify for Lifeline. FCC has not yet been able to create such an automated means. According to FCC officials, there are challenges in creating a national eligibility database because some states have privacy laws that prohibit sharing eligibility data with the federal government. Moreover, data quality may vary from state to state, and states may not maintain data for each Lifeline qualifying program. Until progress is made with the National Verifier and data-sharing agreements are put in place with state eligibility databases, USAC will continue to rely primarily on a pay-and-chase approach to detecting fraud. USAC performs USF contribution audits of telecommunications providers as a financial management control. The number of audits issued from January 2010 through December 2015 was limited; however, USAC plans to increase its audit coverage in future years. USAC performs contribution audits to ensure that telecommunications providers pay USF fees as required to support the universal service programs. As previously discussed, all telecommunications providers, with limited exceptions, must pay a percentage of their interstate and international end-user telecommunications revenues to support the USF. Among other things, USAC contribution audits review documentation to verify that the revenue reported by telecommunications providers matches actual revenues. The contribution audits are also meant to confirm, among other things, that telecommunications providers that opt to pass the cost of USF fees on to customers do not charge in excess of the relevant contribution factor times the assessable portion of the customer's bill. From January 2010 through December 2015, USAC issued contribution audits on 74 telecommunications providers, covering audit periods from calendar years 2007 through 2013. During this 6-year period, the total number of telecommunications providers that filed revenues with USAC each year ranged from about 6,000 to almost 6,700 (see table 1). The limited audit coverage of reported USF contribution-based revenue during this time frame is primarily the result of USAC not auditing the larger USF contributors. For example, during the period we reviewed, USAC audited 1 of the top 10 USF contributors and 2 of the top 30 USF contributors for calendar year 2014. Of the 74 audits performed, 8 were performed on telecommunications providers that reported $0 in assessable revenues. According to USAC officials, the very large companies are not routinely audited because of the complexity of the audits and limited audit resources. According to USAC's most recent 2016-2017 fiscal year audit plan, USF contribution audits beginning in March 2016 used a targeted, risk-based approach, which considers the amount of assessable revenues and whether the carrier has ever been audited, as opposed to the random selection of carriers used previously.
Also, the officials said that the percentage of audit coverage is expected to increase under the current audit plan, as cosourced staff from external audit firms were retained in March 2016 to help perform audits of higher-risk and larger contributors. The current audit plan also estimates that approximately 9 percent of the reported gross revenues from telecommunications carriers will be covered in future audit years. If effectively implemented, these changes should result in a significant increase in risk-based audit coverage and should help USAC better assess compliance with USF contribution requirements for universal service funding. The findings for the 74 USF contribution audits we reviewed indicate that most carriers were not reporting their assessable telecommunications revenue appropriately. These audit findings raise questions not only about the USF fees collected, but also about the rate that was set by USAC. Because the assessable telecommunications revenues reported by audited carriers have been incorrect, the audits raise the possibility that the USF rate-setting process was based on inaccurate information. In other words, the accuracy of the USF contribution factor is limited, as the calculation is based partly on reported telecommunications revenues, which the limited number of audits demonstrates may be reported incorrectly. Of the 74 contribution audits, USAC found that in 10 the carrier reported revenues correctly; in 48, that the carrier underreported assessable telecommunications revenue; and in 16, that the carrier overreported assessable telecommunications revenues and thus may have overcollected USF fees from customers. As part of the contribution audit, USAC also reviews a small sample of customer phone bills to ensure that USF fees charged to customers are not in excess of the relevant contribution factor times the assessable portion of the customer's bill, as required by regulation. For 15 of the 16 USAC contribution audits that found the carrier had overreported assessable telecommunications revenue to USAC, the audit noted that the carrier could not be reimbursed because the 12-month time limit imposed by FCC rule to refile had expired. If the carrier passed through USF fees, as most do, it is likely that the customers were also not reimbursed. In some instances, when USAC audits find that a company overreported assessable revenue but the limited sample testing of individual customer bills does not indicate an overcharge occurred, the audits do not recommend or require that the company refund customers any USF fees that were overcollected as a result of the incorrect revenue assessment. The limited audit coverage, combined with audit findings demonstrating that some carriers have paid into the USF incorrectly, suggests that USF fees collected may not be in the correct amount. In our review of the 74 contribution audits, we also found that 60 included tests to determine whether the carrier was in compliance with the rules on USF recovery charges on end-user customer invoices. We found that 27 of the 60 tests identified that the carrier overcollected USF fees from some customers, and 1 other could not determine whether the carrier overcharged USF fees because the carrier did not maintain documentation. The total amount of overcollection among these audits is unknown because the findings were based on a small sample of invoices reviewed, not the total population of potentially overcharged customers.
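The invoice test described above lends itself to a simple check. The sketch below is a hypothetical illustration of that test, not USAC's actual audit procedure: it flags sampled invoices on which the USF line item exceeds the contribution factor times the assessable portion of the bill. All invoice values are invented for the example.

```python
# Hypothetical illustration of the invoice test USAC auditors apply:
# a USF line item may not exceed the relevant quarterly contribution
# factor times the assessable portion of the customer's bill.

from typing import NamedTuple

class Invoice(NamedTuple):
    customer_id: str
    assessable_charges: float  # interstate/international portion of the bill
    usf_line_item: float       # fee the carrier actually billed

def find_overcharges(invoices: list[Invoice], factor: float,
                     tolerance: float = 0.01) -> list[Invoice]:
    """Return invoices whose USF charge exceeds the allowed maximum,
    with a one-cent rounding tolerance."""
    return [inv for inv in invoices
            if inv.usf_line_item > factor * inv.assessable_charges + tolerance]

sample = [
    Invoice("A-100", 40.00, 7.27),   # at the cap for an 18.2% factor
    Invoice("A-101", 40.00, 8.50),   # exceeds the cap: flagged
    Invoice("A-102", 25.00, 3.00),   # under the cap
]
for inv in find_overcharges(sample, factor=0.182):
    print(f"{inv.customer_id}: billed ${inv.usf_line_item:.2f}, "
          f"allowed ${0.182 * inv.assessable_charges:.2f}")
```

Because USAC applies this test only to a small sample of bills, a flagged invoice establishes that an overcharge occurred but not its full extent across the carrier's customer base, which is why the total overcollection in the audits remains unknown.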
According to USAC officials, when a telecommunications provider overcharges USF fees, the overcharge is typically not limited to a few customers but affects the entire customer base. For example, if a telecommunications provider charges all of its customers an incorrect USF quarterly contribution factor, the error would likely affect all customers. When USAC finds during a contribution audit that USF fees were misapplied to a customer's phone bill, USAC instructs the telecommunications provider to comprehensively review all of its phone bills to identify the universe of improper USF fee charges and to reimburse those customers. However, USAC told us that it is not responsible for determining how those reimbursements should take place, as that is an FCC policy issue. As a result, USAC does not follow up with telecommunications providers to ensure they comprehensively review their phone bills or reimburse overcharged customers, but instead refers the audits that find USF overcharges to the FCC Enforcement Bureau. Until recently, USAC and FCC's Enforcement Bureau had differing views about what constituted a formal referral for enforcement action with respect to USF overcharges. According to USAC, since January 2013 it has submitted lists to FCC identifying telecommunications carriers with potential USF fee overcharges based on completed contribution audits; these lists included 16 identified carriers. FCC Enforcement Bureau officials told us that the lists alone were not considered referrals or recommendations for enforcement action, but rather general information that may support investigations. According to the Enforcement Bureau officials, the primary referral process used for USF enforcement actions is through letters submitted by USAC that specifically identify matters to be considered for enforcement action. In contrast, USAC officials told us it was their understanding that the listings of contribution audits that found customers were overcharged USF fees would be considered referrals for follow-up and potential enforcement actions. According to the Enforcement Bureau, 1 of the 16 contributors that was listed is under investigation and 2 others were considered for enforcement action, but, on the basis of available enforcement resources, the age of the alleged overcharges, and the potential severity of the violations, the Enforcement Bureau determined no further action was warranted for those 2 cases. In our review of the 74 contribution audits, we identified an additional 11 companies that overcharged USF fees to customers but were not included in the list of 16 audits that USAC provided to FCC's Enforcement Bureau, bringing the total to 27 audits that found USF overcharges to customers. USAC officials told us that 8 of the 11 instances of overcharging were not forwarded to FCC because they occurred prior to 2013, when FCC and USAC established a policy to forward such audit findings to FCC. Two of the audits were not on the list because they were approved by the USAC Board of Directors after our request for the list of audits that found USF overcharges; USAC officials confirmed those two audits were later provided to FCC. One audit was not provided, but USAC officials told us they will include that audit in their next report to FCC's Enforcement Bureau.
Thus, no audit follow-up or enforcement actions were taken for 24 of the 27 audits in which USAC found the carrier was overcharging USF fees to customers during the 2007-2013 audit period. It is not known whether those carriers comprehensively reviewed phone bills across their customer bases to identify all overcharges, whether overcharged customers were ever reimbursed, or whether the overcharges stopped. The lack of agreement as to what constitutes a referral to follow up on USF overcharges created some risk that FCC's Enforcement Bureau would not take action to review the overcharges, ensure customers are reimbursed, and ensure any overcharges stop. However, as a result of our inquiries regarding the status of these referrals, FCC officials told us they initiated a new referral process. According to FCC officials, since December 2015 all FCC referrals have been routed to a central point of contact, as opposed to individuals, within FCC's Enforcement Bureau using a standardized e-mail address. According to FCC officials, this revised process will better ensure that all referrals are reviewed by a central point of contact and routed to the appropriate point of contact for follow-up if necessary. With the March 2016 Modernization Order, FCC established a budget mechanism for Lifeline for the first time, setting the budget at $2.25 billion. According to FCC, it was mindful of concerns that establishing a budget for Lifeline could lead to eligible consumers being denied service. Yet, partly because it decided to expand Lifeline to include broadband, FCC stated that it had concluded that its budget mechanism would ensure the financial stability of the program and help guarantee access to all eligible consumers. It also stated that establishing the budget mechanism would balance the need to ensure that Lifeline continued to reduce the contribution burden on the nation's ratepayers and continued to support service to eligible consumers. According to the March 2016 order, FCC set the budget at $2.25 billion by considering current participation rates, possible growth of the program as a result of the expansion to broadband, and the safeguards already in place to reduce waste, fraud, and abuse. According to GAO's Cost Estimating and Assessment Guide, a reasonable and supportable budget is essential to a program's efficient and timely execution. However, the 2016 Modernization Order does not require the FCC Commissioners to take any immediate action to control expenditures if the budget is exceeded. Instead, the order requires a bureau within FCC to issue a report to the Commissioners by July of the following year if total Lifeline disbursements exceed 90 percent of the budget in the previous calendar year, and it states that the Commissioners are expected to take action in response to the report within a 6-month time frame. No requirements stipulate that the budget must be reapproved by the Commissioners if additional funds are needed to meet program demands. Thus, if costs were to exceed 90 percent of the budget, it could be a year or longer before the commission could take any action under the time frame outlined in the order, raising questions about the timing, efficacy, and ability of the budget to control expenditures. Without a requirement that the Commissioners review and approve additional spending in a timely manner, substantial increases in demand like those the program has experienced in the past could lead to expenditures beyond those FCC budgeted.
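To illustrate why the order's time frame can delay action by a year or more, the following sketch walks through the trigger arithmetic as described above. The disbursement figure and the exact report and action dates are hypothetical; the order specifies only "by July of the following year" and an expected 6-month response window.

```python
# Hypothetical walk-through of the 2016 Modernization Order's budget
# trigger timeline. Dates approximate the order's "by July of the
# following year" report deadline and 6-month action expectation.
from datetime import date

BUDGET = 2.25e9        # Lifeline budget set by the 2016 order
TRIGGER_SHARE = 0.90   # report required if disbursements exceed 90%

def trigger_timeline(calendar_year: int, disbursements: float):
    """Return (report_due, action_expected) if the trigger fires."""
    if disbursements <= TRIGGER_SHARE * BUDGET:
        return None
    report_due = date(calendar_year + 1, 7, 31)       # by July, next year
    action_expected = date(calendar_year + 2, 1, 31)  # ~6 months later
    return report_due, action_expected

# Suppose disbursements reach $2.1 billion (about 93% of budget) in 2017.
result = trigger_timeline(2017, 2.1e9)
if result:
    report_due, action_expected = result
    print(f"Report to Commissioners due by {report_due}")
    print(f"Commission action expected by {action_expected}")
    # Even if the threshold is crossed early in 2017, action may not
    # come until early 2019, roughly a year or more later.
```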
In such a case, the budget would have limited effect in controlling program costs. When telecommunications carriers opt to bill customers with a USF line-item charge, a customer may not be able to identify which line item accounts for the USF charge. FCC's Truth-in-Billing rules apply when providers pass USF line-item charges through to customers. These rules are intended to improve consumers' understanding of their telephone bills and to help consumers detect and prevent unauthorized charges. While FCC has not adopted any particular language specifying how USF charges are to be labeled on a bill, the rules require that a telephone company's bill provide a brief, clear, nonmisleading, plain-language description of the service or services rendered to accompany each charge and contain full and nonmisleading descriptions of charges, among other things. Even so, according to USAC officials, a customer may not be able to identify which line items account for the USF charge. For example, several USAC officials we spoke with were unable to determine which line items accounted for USF pass-through charges when reviewing their own phone bills. Similarly, FCC's own phone bill did not clearly specify the line item reflecting the USF pass-through charge, instead referring to "regulatory pass-through charges." FCC officials were not able to determine whether this line item represented USF charges during our meeting, but they told us they confirmed with their telecommunications provider after the meeting that it did. According to USAC officials, their contribution audits do not determine whether companies comply with the Truth-in-Billing rules with regard to the labeling of USF fees, as this is considered outside the scope of their audits. Instead, according to USAC's review officials, audits of customers' bills as part of contribution audits focus on ensuring that carriers do not overcharge USF fees to customers beyond the assessable contribution rate, which is made possible through detailed meetings with the telecommunications provider during the audit. However, even though FCC has not adopted any particular language specifying how USF charges are to be labeled, USAC could assess whether a brief, clear, nonmisleading, plain-language description accompanies each charge. Unless USAC's audit reports include instances in which auditors cannot identify the USF charge for carriers that opt to pass through USF charges as a separate line item, carriers may lack the impetus to enhance the transparency of their bills, and their customers will remain unable to detect and prevent potentially unauthorized charges. USAC has requested guidance from FCC pertaining to USF contribution requirements, but the guidance is still pending. Specifically, in August 2009, USAC sought guidance on whether various revenues derived from new technologies are subject to USF fees, including whether Virtual Private Network (VPN) and dedicated Internet Protocol revenue should be classified as telecommunications service revenue, and thus subject to USF fees. Similarly, in April 2011, USAC submitted a request to FCC for guidance on whether text-messaging revenue is subject to USF fees. Both of these items remain pending. In April 2012, FCC adopted a Further Notice of Proposed Rulemaking regarding reform of the contributions system.
The notice sought public comment on various measures to reform and modernize the USF contribution system, including who should contribute, how contributions should be assessed, improvements to the administration of the contributions system, and recovery of universal service contributions from consumers. This rulemaking remains pending. Additionally, in August 2014 FCC sought a recommendation from the Federal-State Joint Board on Universal Service regarding modifications of the universal service contribution methodology and referred the rulemaking record from the April 2012 notice to the joint board for its consideration. The joint board's decision also remains pending but, according to FCC officials, may address some of the issues on which USAC has requested guidance. FCC is required to ensure that telecommunications carriers that provide interstate telecommunications services pay USF fees, on an equitable and nondiscriminatory basis, to the specific, predictable, and sufficient mechanisms established by FCC to preserve and advance universal service. In addition, according to Standards for Internal Control in the Federal Government, management should internally communicate the necessary quality information to achieve the entity's objectives. Per FCC regulations, its Wireline Competition Bureau is required to take action in response to requests for review of decisions of the USF Administrator within 90 days, with the option to extend the response time an additional 90 days, but there is no requirement regarding the timing of action on requests for guidance from the USF Administrator. FCC officials told us the reasons for the significant delays are varied; for example, some guidance requests, such as these from USAC, are very complicated and require the full commission's input, which can take a long time because FCC has other competing priorities. Without guidance on contribution requirements, some carriers collect more from customers and pay more into the fund than other carriers do for the same service. For example, our review of the 74 contribution audits found 14 instances in which a carrier classified texting or VPN revenues, or both, as assessable USF revenues. One audit, issued in March 2011, found that the carrier reported $117 million in VPN revenues as telecommunications revenues assessable for USF contributions. According to USAC, because of the carrier's decision to classify VPN revenue as a telecommunications service, the carrier may have passed through approximately $3.9 million in USF fees to customers. In comparison, another audit found a company that classified $86 million in text revenue as nontelecommunications revenue and therefore not assessable for USF contributions. According to USAC, the carrier reported approximately 88 percent of its mobile services as nonassessable; therefore, approximately $1.4 million in USF fees were forgone and not collected from customers to fund universal service programs. By responding to USAC's requests for guidance, FCC could help ensure that the contribution factor is based on complete information and that USF pass-through charges are equitable. Although FCC and USAC have implemented controls to improve subscriber eligibility verification, such as implementing the NLAD database in 2014, our analysis of data from 2014, including undercover attempts to obtain Lifeline service, revealed significant weaknesses in subscriber eligibility verification. Subsequently, USAC took steps to enhance the accuracy of the NLAD database.
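The inequity described above follows directly from the classification decision. The sketch below illustrates the point: the same revenue generates fees, typically passed through to customers, or no fees at all, depending solely on whether it is classified as assessable. The 3.3 percent effective pass-through rate is an assumption chosen only because it roughly reproduces the $3.9 million the March 2011 audit cited; actual rates vary by quarter.

```python
# Hypothetical illustration of how a classification decision drives USF
# exposure. Whether VPN or text-messaging revenue counts as assessable
# "telecommunications" revenue is the guidance question pending at FCC.
# The 3.3% effective rate is assumed for arithmetic only.

def usf_fees(revenue: float, assessable: bool,
             effective_rate: float = 0.033) -> float:
    """Fees generated (and typically passed through to customers)
    if, and only if, the revenue is classified as assessable."""
    return revenue * effective_rate if assessable else 0.0

vpn_revenue = 117e6  # VPN revenue reported by the audited carrier

# The same revenue yields ~$3.9 million in fees or $0, depending solely
# on how the carrier classifies it, hence the inequity absent guidance.
print(f"Classified assessable:     ${usf_fees(vpn_revenue, True)/1e6:.1f}M")
print(f"Classified non-assessable: ${usf_fees(vpn_revenue, False)/1e6:.1f}M")
```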
Lifeline providers are generally responsible for verifying the eligibility of potential subscribers, but their ability to do so is hindered by a lack of access to, or awareness of, state eligibility databases. These challenges might be overcome if USAC provided additional information to providers about those databases and if FCC established a National Verifier, as it plans to do by 2020, to remove responsibility for verifying eligibility from the providers.

USAC has implemented some controls to screen for subscribers attempting to receive duplicate Lifeline benefits and for applicants attempting to enroll in the program using fictitious identities and addresses, and to verify whether subscribers are still eligible for Lifeline. These controls have reduced the number of subscribers and households receiving duplicate benefits, both within the same Lifeline provider and across Lifeline providers. Specifically, in 2012, FCC directed USAC to develop NLAD to keep track of all subscribers within Lifeline and to verify that subscribers are not already receiving Lifeline service from a different Lifeline provider. Also in 2012, FCC began requiring the annual recertification of all subscribers' eligibility. Lifeline providers or, if applicable, state Lifeline administrators are required to recertify that their subscribers are still eligible for Lifeline beginning the calendar year after each subscriber is enrolled.

The NLAD database was fully implemented by March 2014 and contains a real-time list of Lifeline beneficiaries to assist carriers in identifying and preventing duplicate subscribers. Prior to NLAD, because Lifeline providers were unable to view each other's subscriber lists, they could not detect subscribers receiving duplicate benefits across providers. Currently, when Lifeline providers enroll individuals in the program, the NLAD database automatically checks for potentially duplicative benefits within and among Lifeline providers. In addition, since NLAD went online, the database has used a Third Party Identity Verification (TPIV) process and an address validation control to verify applicants' identities and addresses when their information is entered into NLAD. The TPIV process verifies the identity of an applicant by matching the applicant's first name, last name, date of birth, and the last four digits of his or her Social Security number (SSN) against official records. The address validation control checks applicants' addresses against U.S. Postal Service data. Applicants who fail the TPIV or address validation controls are subject to a dispute resolution process in which they can provide additional documentation to confirm their identity or to confirm that their address is deliverable. If NLAD identifies the applicant as a potential duplicate subscriber, or the identity and address cannot be confirmed, the provider will not be able to register the applicant in NLAD. A simplified illustration of this kind of identity matching appears below.

To identify Lifeline subscribers who were potentially ineligible to participate in the program, we tested the eligibility of subscribers who claimed participation in Medicaid, SNAP, and SSI using NLAD data as of November 2014. We focused our analysis on these three programs because FCC reported in 2012 that these are the three qualifying programs through which most subscribers qualify for Lifeline.
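To make the matching logic described above concrete, the sketch below shows how a duplicate check keyed on the four TPIV fields might work. It is a minimal illustration only: the record layout, normalization rules, and function names are our assumptions, not NLAD's actual implementation.

```python
# Minimal illustrative sketch of a TPIV-style duplicate check keyed on
# first name, last name, date of birth, and SSN last four digits.
# Field names and normalization rules are hypothetical, not NLAD's.
from dataclasses import dataclass

@dataclass(frozen=True)
class Applicant:
    first_name: str
    last_name: str
    date_of_birth: str  # ISO format, e.g., "1970-01-31"
    ssn_last4: str

def identity_key(a: Applicant) -> tuple:
    """Normalize the four identity fields into a comparable key."""
    return (
        a.first_name.strip().lower(),
        a.last_name.strip().lower(),
        a.date_of_birth,
        a.ssn_last4,
    )

def is_potential_duplicate(applicant: Applicant, enrolled_keys: set) -> bool:
    """Flag applicants whose identity fields exactly match an existing record."""
    return identity_key(applicant) in enrolled_keys

# Example: an applicant matching an enrolled subscriber is flagged.
enrolled_keys = {identity_key(Applicant("Jane", "Doe", "1970-01-31", "1234"))}
print(is_potential_duplicate(Applicant(" jane", "DOE", "1970-01-31", "1234"),
                             enrolled_keys))  # True
```

An exact-match key like this is simple and fast, but, as our findings in this section suggest, it depends entirely on the quality of the underlying data entry.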
Because SNAP and Medicaid data are maintained at the state level, we selected five states to test Lifeline beneficiaries' participation in SNAP and six states to test their participation in Medicaid. We tested SSI eligibility across the 46 states and the District of Columbia whose Lifeline providers utilize NLAD. We compared the approximately 3.4 million subscribers that, according to information entered in NLAD, were eligible for Lifeline due to enrollment in one of these three programs to eligibility data for these programs. Prior to our analysis of NLAD data, we conducted reliability testing, including examining the data for anomalies such as last-four SSN digits that were all zeroes and out-of-scope dates of birth, based on a comparison to the Lifeline enrollment date. We also tested NLAD for complete duplicate records containing the same subscriber name, last four SSN digits, and date of birth. On the basis of our discussions, documentation review, and our own testing of the data, we concluded that the data fields used for this report were sufficiently reliable for the purpose of our review, but that the potential for significant data-entry errors in NLAD remains. Further, it is not possible to determine from data matching alone whether these matches definitively identify recipients who were not eligible for Lifeline benefits without reviewing the facts and circumstances of each case. For example, we could not determine from the data alone whether data-entry errors at the time of enrollment incorrectly recorded the qualifying program presented by the subscriber.

On the basis of our analysis of NLAD and public-assistance data, we could not confirm that a substantial portion of selected Lifeline beneficiaries were enrolled in the Medicaid, SNAP, and SSI programs, even though, according to the data, they qualified for Lifeline by stating on their applications that they participated in one of these programs. According to NLAD, the number of subscribers participating in these programs in the states selected for our analysis was 3,474,672, or 33 percent, of the 10,589,244 unique subscribers we identified. In total, we were unable to confirm whether 1,234,929 individuals out of the 3,474,672 that we reviewed, or 36 percent, participated in the qualifying benefit programs they stated on their Lifeline enrollment applications or were recorded as such by Lifeline providers. If providers claimed and received reimbursement for each of these subscribers, then the subsidy amount associated with these individuals equals $11.4 million per month, or $137 million annually, at the current subsidy rate of $9.25 per subscriber. Because Lifeline disbursements are based on providers' reimbursement claims, not the number of subscribers a provider has in NLAD, our analysis of NLAD data could not confirm actual disbursements associated with these individuals. Given that our review was limited to those enrolled in SNAP or Medicaid in selected case-study states, and SSI in states that participated in NLAD at the time of our analysis, our results likely understate the problem relative to the entire population of Lifeline subscribers. These results indicate that potential improper payments have occurred and have gone undetected. We plan to refer potentially ineligible subscribers identified through our analysis to FCC and USAC for appropriate action as warranted.
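The sketch below illustrates, under assumed field names and record layouts, the kind of matching and subsidy arithmetic just described: NLAD records claiming a qualifying program are compared against that program's eligibility rolls, and the subsidy associated with unconfirmed records is estimated at the $9.25-per-subscriber monthly rate cited above.

```python
# Illustrative sketch of the eligibility matching and subsidy arithmetic
# described above. Record layouts are hypothetical; $9.25 is the monthly
# Lifeline subsidy rate cited in this report.
MONTHLY_SUBSIDY = 9.25

def split_by_match(nlad_records, program_rolls):
    """Partition NLAD records into those found on a qualifying program's
    eligibility rolls and those that cannot be confirmed."""
    eligible = {(r["first"], r["last"], r["dob"], r["ssn4"]) for r in program_rolls}
    confirmed, unconfirmed = [], []
    for rec in nlad_records:
        key = (rec["first"], rec["last"], rec["dob"], rec["ssn4"])
        (confirmed if key in eligible else unconfirmed).append(rec)
    return confirmed, unconfirmed

def potential_subsidy(unconfirmed_count: int) -> tuple:
    """Monthly and annual subsidy associated with unconfirmed subscribers,
    assuming providers claimed reimbursement for each of them."""
    monthly = unconfirmed_count * MONTHLY_SUBSIDY
    return monthly, monthly * 12

# The report's figures: 1,234,929 unconfirmed subscribers imply roughly
# $11.4 million per month, or about $137 million annually.
monthly, annual = potential_subsidy(1_234_929)
print(f"${monthly:,.0f} per month, ${annual:,.0f} per year")
```

As the report cautions, a failed match in an analysis like this flags a record for review; it does not by itself establish that the subscriber was ineligible.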
Figure 5 below shows the percentage of Lifeline subscribers (who claimed either Medicaid or SNAP to qualify for Lifeline) whom we were unable to confirm as eligible using state Medicaid and state SNAP eligibility data for selected case-study states. The results of our analysis for Georgia also include the percentage of Lifeline beneficiaries we were unable to confirm as eligible who had been validated by the state eligibility database, as Georgia's state database confirmed eligibility only for Medicaid and SNAP at the time of our analysis. Figure 6 is an interactive graphic that shows the percentage of Lifeline beneficiaries (who claimed eligibility via SSI to qualify for Lifeline) whom we were able to confirm as likely eligible and whom we were unable to confirm as likely eligible using nationwide SSI eligibility data for states that participate in NLAD. See appendix II for more information.

We also analyzed NLAD to identify instances of subscribers receiving duplicate Lifeline benefits and deceased individuals appearing as active beneficiaries. The results of our analysis are as follows:

- We found a total of 5,510 potential internal duplicates in which the last name, first name, date of birth, and last four digits of the SSN of one record matched another record exactly. The subsidy amount associated with these duplicates equaled approximately $51,000 per month, or $612,000 annually.

- We matched NLAD enrollment data with SSA's Death Master File and identified 6,378 individuals reported as deceased who were receiving Lifeline benefits. These individuals were enrolled, recertified, or both after they had been reported dead. The date of death for each of these individuals preceded the Lifeline enrollment or recertification date by at least 1 year. The subsidy amount associated with these individuals equaled $58,997 monthly and $707,958 annually. According to USAC, the NLAD recertification date field is not completely populated; therefore, these numbers likely understate the number of people reported dead who were reenrolled in Lifeline.

The results of our analysis show that a potential annual subsidy amount of $1.2 million could have resulted from potentially ineligible or fictitious individuals receiving Lifeline benefits if these individuals were not deenrolled by USAC or Lifeline providers and the providers claimed reimbursement for these subscribers.

At the time USAC provided the NLAD data to us in November 2014, USAC officials stated that they were performing a number of procedures on the initial data loaded into NLAD by providers. According to USAC officials, from September through December 2014, Lifeline providers were required to collect Independent Economic Household worksheets from all subscribers who were found to share an address with another Lifeline subscriber. USAC officials informed us that if no such completed worksheet was obtained, or if the subscriber did not certify that he or she was part of a different household from another subscriber sharing the same address, the subscriber was deenrolled. USAC reported that this process deenrolled approximately 1.3 million subscribers, some of whom could still have been in the data we reviewed. We did not remove these subscribers when we conducted our data analysis, because duplicate addresses are allowed if individuals are part of a separate economic household, and USAC performed additional work to collect Independent Economic Household worksheets before determining whether subscribers should be deenrolled.
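A minimal sketch of the Death Master File comparison described above follows. The field names, matching key, and one-year rule expressed in code are assumptions for illustration; they mirror the criteria stated in our analysis rather than any agency system.

```python
# Illustrative sketch of matching NLAD activity against SSA's Death Master
# File: flag records enrolled or recertified at least one year after the
# reported date of death. Field names are hypothetical.
from datetime import date, timedelta

ONE_YEAR = timedelta(days=365)

def flagged_as_deceased(record: dict, death_dates: dict) -> bool:
    """death_dates maps an identity key to a reported date of death."""
    key = (record["first"], record["last"], record["dob"], record["ssn4"])
    died = death_dates.get(key)
    if died is None:
        return False
    activity = [d for d in (record.get("enrolled"), record.get("recertified")) if d]
    return any(d - died >= ONE_YEAR for d in activity)

# Example: enrollment more than a year after the reported death is flagged.
deaths = {("john", "roe", "1950-05-02", "9876"): date(2012, 3, 1)}
record = {"first": "john", "last": "roe", "dob": "1950-05-02",
          "ssn4": "9876", "enrolled": date(2014, 6, 15), "recertified": None}
print(flagged_as_deceased(record, deaths))  # True
```

The one-year buffer reflects the conservative criterion used in our analysis and reduces false positives from records updated around the time of death.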
USAC officials also informed us that additional rigor was added to NLAD's duplicate-checking algorithm in March 2015. Specifically, USAC officials explained that a process to scrub NLAD records to identify additional duplicates was completed in May 2015 and resulted in the deenrollment of approximately 374,000 subscribers. We estimate that USAC's work to identify and scrub duplicates was performed on over 10 million subscribers, while our analysis was limited to our case-study states for Medicaid and SNAP and the national population of SSI recipients. Because USAC had not completed its process of identifying and deenrolling duplicate subscribers when we obtained NLAD data, there may be some overlap between the subscribers deenrolled by USAC and the 3.4 million subscribers included in our analysis. However, before performing any data matching, we removed internal duplicates in NLAD in which the last name, first name, date of birth, and last four digits of the SSN of one record matched another record exactly, so the likelihood of any overlap in duplicate subscribers has been reduced. Our analysis also involved matching NLAD data to qualifying Lifeline program data, which neither FCC nor USAC has done.

Our undercover testing found that Lifeline may be vulnerable to ineligible subscribers obtaining service, and it found examples of Lifeline providers being nonresponsive or providing inaccurate information. To conduct our 21 tests, we contacted 19 separate providers to apply for Lifeline service. We applied using documentation fictitiously stating that we were enrolled in an eligible public-assistance program or met the Lifeline income requirements. We were approved to receive Lifeline services by 12 of the 19 Lifeline providers using fictitious eligibility documentation. The seven Lifeline providers from which we did not receive service declined or failed to provide it for different reasons. For example:

- Two of the seven Lifeline providers informed us that we were denied because they could not verify the identity of the fictitious applicants used for our tests.

- One Lifeline provider told us that the application was appearing as a duplicate and was not being accepted by NLAD, even though the fictitious identity was not enrolled.

- One other provider told us that our identity could not be verified and that the address provided on our application, a UPS Store mailbox, was in use by another Lifeline customer. We were told that only a certain individual within the company could offer resolution; however, we made multiple calls and left six messages on this individual's voice mail over a 5-week period and did not receive a call back.

- The remaining three providers told us that they do not ship to post office boxes. While FCC regulations do not preclude a Lifeline provider from accepting a post office box address for a billing address if different from the subscriber's residential address, there is no requirement for them to do so.

We completed two separate tests using different identities for 2 of the 19 providers due to the outcome of the first test for each provider. Specifically:

- One of these providers initially deemed us ineligible for Lifeline, but it did so because the representative for that provider erroneously calculated our pay stub income, which, if calculated correctly, would have met eligibility requirements. We reapplied using a different identity, claiming enrollment in a public-assistance program as support and providing fictitious documentation, and were approved for Lifeline.
- The other provider approved us for the program but never provided us with service. We were given a customer identification number and phone number, but the provider did not ship us a free phone as advertised as part of its Lifeline service. We called the Lifeline provider 11 times over a period of 2½ months to inquire about the status of our service. A company representative told us on multiple occasions that our phone had been or would be shipped, only to say later that our phone could not be shipped because the company had run out of phones. We were told on multiple occasions that the phone would ship within 4 days, but we did not receive it from the time we applied in July 2015 through December 2015, and therefore we were unable to begin our Lifeline service. This provider did not offer an alternative way to participate in Lifeline, such as using our own mobile device to receive service. We reapplied using a different identity to determine whether this was a recurring issue with this Lifeline provider. When reapplying using a different identity, we were told on separate occasions that our identity could not be validated and not to apply using low income as the eligibility qualifier. We were also told that the applicant's participation in the public-assistance program stated on the application could not be verified. However, an official from the state where we applied stated that the public-assistance program in question was not included in a database of public-assistance programs and beneficiaries made available to Lifeline providers.

Further, we experienced instances during our undercover tests where our calls to providers were disconnected, and where Lifeline provider representatives provided erroneous information or were unable to assist with questions about the status of our application. For example, one Lifeline provider told us that our application was not accepted by the company because our signature had eraser marks; however, our application had been submitted via an electronic form on the provider's website and was not physically signed. While our tests are illustrative and not representative of all Lifeline providers or applications submitted, these results suggest that Lifeline providers do not always properly verify eligibility and that applicants may encounter similar difficulties when applying for Lifeline benefits.

USAC officials told us that they had improved both NLAD and the TPIV process since they were established. For example, USAC officials had identified that either Lifeline subscribers or Lifeline providers had exploited a TPIV override process in NLAD, so they established a control to remedy the problem. Specifically, USAC officials stated that in 2015 they modified the duplicate-checking algorithm to add additional rigor and eliminated the identity override process. Furthermore, as discussed above, USAC officials stated that they scrubbed all NLAD records to identify any additional duplicates that may have occurred prior to these enhancements. This process was completed in May 2015 and resulted in the deenrollment of approximately 374,000 subscribers.

Additionally, for the data that we examined from NLAD's launch in March 2014 through November 2014, NLAD subscriber data contained addresses that were associated with multiple subscribers. For example, through our analysis we found that a single address was associated with 10,000 separate subscribers, all receiving Lifeline benefits through the same Lifeline provider.
The address in question could not be verified by the U.S. Postal Service address verification system we consulted. One Lifeline provider listed multiple addresses in NLAD with over 500 Lifeline subscribers each, which may be reasonable given that some of the addresses appear to be associated with homeless shelters. In total, we identified 48 unique addresses that were each associated with more than 500 subscribers. In December 2016, the provider we found with over 10,000 subscribers associated with the same address was fined $30 million and relinquished FCC and state authorizations to participate in Lifeline; a fraud investigation by FCC and the United States Attorney's Office found employees fraudulently enrolling duplicate and ineligible subscribers into Lifeline.

Officials from USAC also stated that they are examining ways to use data analytics to check the quality of data in NLAD. For example, according to USAC officials, they became aware that certain prefixes and area codes are not used for residential phone numbers, and they have reviewed NLAD for such information to mitigate fraud. Other analytics include looking for SSN last four digits of "0000," a last-four-digit code never assigned in actual SSNs, and examining subscribers who are over the age of 100. Measures such as these, along with the transition to a National Verifier, as discussed below, should help address data quality concerns in the future and mitigate potential fraud. A simple sketch of these kinds of checks appears later in this section.

Lifeline has relied primarily on Lifeline providers to verify subscriber eligibility for the majority of subscribers. Providers are to verify subscriber eligibility by reviewing supporting documentation or by checking state eligibility databases that contain information on beneficiaries of Lifeline-qualifying assistance programs, such as SNAP and Medicaid. If the data entered into the eligibility databases are accurate, and Lifeline providers use them as intended, the eligibility databases available to Lifeline providers can be an important tool for limiting fraud, waste, and abuse in Lifeline by verifying eligibility. However, not all states have databases that Lifeline providers can use to confirm eligibility. According to FCC, as of June 2016, databases that could be utilized for initial eligibility determinations existed in 29 states. We also found that state databases do not always contain beneficiary information for every Lifeline-qualifying program. Table 2 below shows what qualifying programs were available for eligibility checks in our case-study states as of June 2016.

Some providers with whom we spoke were unaware of databases that were potentially available to them. Officials from two Lifeline providers we spoke with were not aware of all the eligibility databases available for use in areas where they provide Lifeline service. For example, one Lifeline provider we spoke to provided us with information stating that 18 states maintained an eligibility database, while another Lifeline provider that operated in 41 states at the time told us it knew of only 8 states with databases. The provider operating in 41 states was unaware of 10 state eligibility databases in states where it operated that were identified by the other provider. Officials from one of these companies told us they were not aware of a comprehensive list of state eligibility databases. USAC officials confirmed that they do not provide Lifeline providers with a list of state databases that are available to confirm program eligibility.
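The kinds of data-analytics checks described above (never-issued SSN endings, implausible ages, and addresses shared by unusually many subscribers) can be sketched as follows. The thresholds and field names are our assumptions for illustration, not USAC's actual rules.

```python
# Illustrative versions of the data-quality analytics described above.
# Thresholds and field names are assumptions for this sketch only.
from collections import Counter
from datetime import date

def never_issued_ssn(ssn_last4: str) -> bool:
    """'0000' is a last-four-digit code never assigned in actual SSNs."""
    return ssn_last4 == "0000"

def implausible_age(dob: date, as_of: date) -> bool:
    """Flag subscribers whose recorded date of birth implies an age over 100."""
    return (as_of - dob).days / 365.25 > 100

def crowded_addresses(records, threshold: int = 500) -> dict:
    """Return addresses associated with more than `threshold` subscribers,
    as in the report's finding of 48 such addresses."""
    counts = Counter(r["address"] for r in records)
    return {addr: n for addr, n in counts.items() if n > threshold}

# Example usage on a toy record set.
records = [{"address": "100 Main St"} for _ in range(501)]
print(never_issued_ssn("0000"))                              # True
print(implausible_age(date(1900, 1, 1), date(2014, 11, 1)))  # True
print(crowded_addresses(records))                            # {'100 Main St': 501}
```

Checks like these flag records for human review; as the shelter example above shows, a crowded address is suspicious but not conclusive on its own.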
As a result of this lack of awareness, these Lifeline providers and potentially others are not utilizing required applicant verification tools that are available to them. Further, USAC does not independently verify that subscribers have been vetted through the eligibility databases or otherwise verify subscribers' eligibility. Lifeline providers are required by program rules to access state eligibility databases, where available, to determine an applicant's program-based eligibility. In the absence of such a database, a Lifeline provider must review proof of enrollment in a qualifying program or proof of income eligibility. USAC audits of Lifeline providers do check whether an administrator or eligibility database was relied upon. USAC does not, however, confirm that beneficiaries whom Lifeline providers report in NLAD as having been vetted through a state database actually were vetted. Theoretically, a Lifeline provider could enter into NLAD that a state database or state administrator was used when it was not. This possibility could partially explain why we could not confirm eligibility for approximately 70 percent of the individuals we reviewed in Georgia who, according to NLAD, were deemed eligible by a state administrator. Officials from Georgia's SNAP office told us that although the database is available to ETCs, it is possible they are not using the database, and Georgia does not have any way to verify that the database is being used.

As part of their annual recertification requirements, service providers are required to certify that they have procedures in place to review income and program-based eligibility documentation, and that they confirm eligibility by relying upon access to a state database or an eligibility notice from a state Lifeline administrator, prior to enrolling a customer in Lifeline. The recertification form states that "[p]ersons willfully making false statements on this form can be punished by fine or imprisonment under Title 18 of the United States Code, 18 U.S.C. § 1001."

Lifeline providers are required to review supporting documentation, such as a driver's license or Social Security card, when an applicant's identity cannot be verified. However, Lifeline providers are not required to provide supporting documentation to USAC as part of the TPIV process; instead, Lifeline providers submit required information stating what documentation was reviewed, and USAC confirms that the type of documentation appropriately verifies the subscriber's identity but does not review the documentation itself. As of February 2016, providers are required to retain all documents used to verify a subscriber's identity. The planned National Verifier will retain documentation collected as a result of the eligibility-determination process, and Lifeline providers will not be required to retain eligibility documentation for subscribers determined to be eligible by the National Verifier once it is implemented.

Although state eligibility databases do not exist for all states, and not all eligible programs are included within those state eligibility databases that do exist, knowing which states have program-based eligibility databases is an important first step to allow Lifeline providers to better determine applicant eligibility prior to enrollment. According to Standards for Internal Control in the Federal Government, management should use high-quality information to achieve the entity's objectives, such as using relevant data from reliable sources.
Maintaining and disseminating an up-to-date list of available state eligibility databases that includes the qualifying programs those databases access would help enhance Lifeline providers' awareness of, and potentially their use of, these tools. Such a list could also help USAC, working with the states whenever possible, to determine which Lifeline providers had obtained access to state eligibility databases, and to gain greater assurance that providers are fulfilling their responsibility of ensuring that only eligible subscribers are enrolled.

In March 2016, FCC adopted an order to create a National Verifier that would determine eligibility rather than having the Lifeline providers do so. According to FCC, to foster a long-term technological solution to Lifeline eligibility and to leverage the program-integrity and enrollment procedures of assistance programs that capture 80 percent of the Lifeline-eligible population, the number of benefit programs through which applicants may qualify for Lifeline would be reduced. According to the order, the five qualifying assistance programs that remain permit easy technological solutions that lay the groundwork for a successful National Verifier because they have existing and accessible databases that the National Verifier will be able to use. FCC officials told us that they intend the National Verifier to interface with both state and federal eligibility databases. According to FCC, with the exception of SNAP (which is administered at the state level), all of the eligibility programs have national databases (i.e., SSI, Veterans Pension, and Medicaid).

FCC officials told us that they are working with USAC to create the National Verifier. FCC expects it to be deployed in phases, with at least five states launching at the end of 2017, an additional 20 states launching in 2018, and the remaining states and territories by the end of 2019. FCC officials told us that USAC was required to submit a comprehensive draft plan for the National Verifier to FCC for review and approval by the end of November 2016. USAC submitted its National Verifier Draft Plan to FCC on November 30, 2016, outlining its proposed approach to designing and building the National Verifier. USAC submitted its first updated version of the plan in January 2017. According to FCC officials, USAC will provide a status update to FCC twice per year throughout the development and implementation of the National Verifier. FCC officials informed us that in January 2017, USAC executed a contract with the vendor for the design of the National Verifier.

FCC and USAC identified challenges to establishing the National Verifier. As of January 2017, USAC had identified six initial challenges that could affect the successful launch, build, and operation of the National Verifier: (1) unavailability of data sources that can be used for automated eligibility determinations; (2) inadequate operational capacity to effectively manage new processes and high volumes of eligibility verifications; (3) data-breach preparedness; (4) establishment of connections with state or federal data sources; (5) emergency preparedness; and (6) designing a system that meets standards. FCC officials further explained that creating a national eligibility database requires coordination with each state, which can be time-consuming and challenging. For example, some states have privacy laws that prohibit sharing eligibility data with the federal government, and data quality may vary from state to state.
Additional potential concerns include challenges supporting subscribers in tribal areas. USAC has developed mitigation strategies to address several of these concerns, including working with states, vendors, and other stakeholders. According to USAC, progress updates to FCC and the public will continue to be provided every 6 months in updated National Verifier plans.

FCC and USAC have established mechanisms to enhance their oversight of Lifeline providers. For example:

- As implemented in the 2012 Reform Order, Lifeline-only ETCs that do not utilize their own facilities must file a compliance plan with FCC detailing measures they will take to comply with Lifeline regulations as well as additional safeguards against fraud, waste, and abuse. The compliance plans should include information about the carrier and the Lifeline plans it intends to offer, including the names and identifiers used by the carrier, its holding company, operating company, and all affiliates, and how it will comply with FCC's rules and requirements.

- The 2012 Reform Order also required biennial audits of ETCs providing Lifeline service and receiving $5 million or more annually, determined on a holding-company basis, from the low-income program. FCC regulations require that licensed certified public accounting firms independent of the carrier conduct these audits in a manner consistent with Generally Accepted Government Auditing Standards. In April 2014, FCC released uniform audit procedures that the accounting firms must use. As outlined in FCC's audit procedures, these reviews would be conducted as agreed-upon procedures attestations. The first reports included reviews of calendar year 2013 and were submitted in 2015. Due to the nature of these agreed-upon procedures engagements, each biennial audit report must state that an examination of the subject matter was not performed. Therefore, an opinion on the Lifeline provider's compliance with Lifeline rules cannot be expressed through these procedures.

- In July 2014, FCC took additional measures to combat fraud, waste, and abuse by creating a strike force to investigate violations of USF program rules and laws. According to FCC, the creation of the strike force is part of the agency's commitment to stopping fraud, waste, and abuse and policing the integrity of USF programs and funds.

- In June 2015, FCC adopted a rule requiring Lifeline providers to retain eligibility documentation used to qualify consumers for Lifeline support, to improve the auditability and enforcement of FCC rules.

- Starting in fiscal year 2016, USAC implemented a risk-based selection method when conducting Beneficiary and Contributor Audit Program (BCAP) audits to identify the entities with the greatest risk. BCAP audits are conducted on each USAC program in accordance with Generally Accepted Government Auditing Standards; their primary purpose is to ensure compliance with FCC rules and program requirements and to assist in program compliance. USAC officials told us that, before fiscal year 2016, many of the audited entities were randomly selected, and the selection process was designed to provide a wide variety of entities with regard to size and geographic location. See appendix III for more information.

Our analysis of FCC and USAC oversight of Lifeline providers found weaknesses in how they oversee providers' entry into the program, providers' implementation of the program, and enforcement of penalties for violations of program rules. FCC has plans or has taken some steps to address some of these weaknesses.
In its 2012 Reform Order, FCC described how its review of compliance plans was critical to helping evaluate Lifeline providers' stated plans to adhere to program rules before providers receive any Lifeline funds. The compliance plan review process requires telecommunications providers to provide specific information regarding their service offerings and the measures they will take to implement the Lifeline provider obligations, as well as further safeguards against fraud, waste, and abuse that FCC may deem necessary. However, FCC officials told us that no agency document exists that instructs reviewers how to evaluate compliance plans. Without written instructions with criteria for how to review compliance plans, there is some risk that the compliance plan review process is not applied consistently or effectively, or is not conducted in such a way as to help facilitate Lifeline program goals. As a result, the compliance plan review process provides only limited oversight prior to the disbursement of funds.

Furthermore, FCC has a backlog of pending compliance plans. In 2012, FCC approved its first 20 compliance plans and did not approve any additional plans until August 2016. In August 2016, FCC approved two plans from Lifeline providers specifically dedicated to wireline service. According to FCC, the approval of these two compliance plans was necessary to prevent disruption of Lifeline service for affected wireline customers. As of March 2017, 22 compliance plans had been approved, 22 had been denied, and 34 were pending. FCC officials told us that the delay in approving compliance plans was caused by other agency priorities, but they were unable to detail what those priorities were. They added that the number of staff assigned to reviewing compliance plans was limited to four and that these staff also had other assignments and responsibilities, factors that were among those that led to the number of plans pending without an FCC decision. According to FCC officials, absent statutory time frames specific to the review of compliance plans and ETC petitions, FCC has not established any time frames for approving or denying these documents. The resulting situation limits the expansion of Lifeline service by companies providing, or seeking to provide, Lifeline service.

As with the compliance plans, FCC had a backlog of 35 pending ETC petitions and had approved 7 providers and denied 15 providers as of March 2017. According to federal statute, telecommunications providers must submit a petition and be designated as ETCs before they can receive reimbursement for providing Lifeline service. ETC designations are made by state regulatory commissions or by FCC if state law does not grant a state the authority to do so. By not making determinations on pending compliance plans and ETC petitions, FCC has not implemented a key aspect of the program's 2012 reforms. This has created a group of carriers that can begin or expand their Lifeline service offerings and a group of carriers that are prevented from entering the marketplace altogether or from expanding to new geographic markets.

FCC also faces a backlog of petitions to provide broadband services. As previously discussed, providers seeking to provide Lifeline broadband service must obtain the newly created Lifeline Broadband Provider (LBP) designation from FCC. The 2016 Lifeline Broadband Order states that FCC will take action on LBP designation petitions within 6 months of the submission of a completed filing.
By January 2017, FCC had conditionally designated nine ETCs as LBPs, but it revoked their LBP designations in February 2017 and returned their LBP petitions to pending status. According to FCC, revoking the designations provides the agency with additional time to consider measures that might be necessary to prevent further fraud, waste, and abuse in Lifeline. In March 2017, the FCC Chairman expressed interest in initiating a proceeding to eliminate the new federal LBP designation process.

FCC and USAC have limited oversight of Lifeline provider operations and the internal controls used to manage those operations. The current structure of the program relied throughout 2015 and 2016 on over 2,000 ETCs to provide Lifeline service to eligible beneficiaries. These companies are relied on not only to provide telephone service, but also to create Lifeline applications, train employees and subcontractors, and make eligibility determinations for millions of applicants. Federal internal control standards state that management retains responsibility for the performance of processes assigned to service organizations performing operational functions. Consistent with internal control standards, FCC and USAC would need to understand the extent to which a sample of these internal controls is designed and implemented effectively, to ensure the controls are sufficient to address program risks and achieve the program's objectives. However, we identified key Lifeline functions over which FCC and USAC had limited visibility.

While FCC approves providers' participation in Lifeline and USAC conducts audits to ensure providers comply with program rules, we found that they do not have full insight into providers' operations. For example, we found instances of Lifeline providers utilizing domestic or foreign-operated call centers for Lifeline enrollment. We spoke with officials from two Lifeline carriers and inquired about their operations. One Lifeline provider explained to us that it contracts with a company that then contracts with a back office and a call center in a different country to handle Lifeline operations. Lifeline provider officials told us that individuals at this overseas back office are responsible for reviewing government assistance program documentation and making eligibility determinations for Lifeline applicants. Officials from the other carrier we spoke with told us that they use a third-party contractor located in the United States to verify eligibility. Through our undercover tests, we also found that this company uses an overseas call center to enroll subscribers. When we asked FCC officials about Lifeline providers that outsource program functions to call centers, including those overseas, they told us that such information is not tracked by FCC or USAC. With no visibility over these call centers, FCC and USAC do not have a way to verify whether such call centers comply with Lifeline rules.

Additionally, FCC and USAC have limited knowledge about potentially adverse incentives that providers might offer employees to enroll subscribers. For example, some Lifeline providers pay commissions to third-party agents to enroll subscribers, creating a financial incentive to enroll as many subscribers as possible. When companies responsible for distributing Lifeline phones and service give employees monetary incentives to enroll subscribers, they increase the possibility that fictitious or ineligible individuals will be enrolled in Lifeline.
Highlighting the extent of the potential risk for companies, in April 2016 FCC announced approximately $51 million in proposed fines against one Lifeline provider due to, among other things, its sales agents purposely enrolling tens of thousands of ineligible and duplicate subscribers in Lifeline using shared or improper eligibility documentation.

To test internal controls over employees associated with the Lifeline program, we sought employment with a company that enrolls individuals in Lifeline. We were hired by a company and were allowed to enroll individuals in Lifeline without ever meeting any company representatives, undergoing an employment interview, or completing a background check. After we were hired, we completed two fictitious Lifeline applications as employees of the company, successfully enrolled both of these fictitious subscribers in Lifeline using fabricated eligibility documentation, and received compensation for these enrollments. The results of these tests are illustrative and cannot be generalized to any other Lifeline provider. We plan to refer this company to FCC and USAC for appropriate action as warranted.

FCC and USAC also have limited insight into when Lifeline providers do not abide by program rules. As a result, there may be increased risks that Lifeline providers are not adhering to rules. On the basis of our audit and undercover work, we identified instances in which Lifeline providers were applying different policies regarding Lifeline eligibility and enrollment, contrary to program rules. Examples we encountered of Lifeline providers applying the rules incorrectly are noted below.

- Officials from one provider told us they do not enroll subscribers who reside in a zip code that includes tribal lands, because it is too difficult to confirm the subscribers' addresses as nontribal. According to Lifeline rules, low-income residents living on tribal lands may be eligible for Lifeline benefits based on either income or participation in federal or tribal assistance programs.

- Officials from one provider told us that when a subscriber fails the NLAD identity-validation process, they do not use the dispute-resolution system designed by USAC and FCC to verify the subscriber's identity as required by program rules, because it is too costly. The company opts not to enroll the customer or attempt to verify the customer's identity using the dispute-resolution system.

- Customer-service representatives for one provider checked the authenticity of the SSI documentation we provided as evidence of qualifying for Lifeline against a state eligibility database that does not contain SSI information and denied our application. In this case, the representative was seemingly unaware of the contents of the state eligibility database and could potentially disqualify legitimate, qualified applicants who use SSI documentation to apply for Lifeline.

Variations in Lifeline provider policies and practices could also affect the ability of FCC and USAC to oversee how providers maintain subscriber documentation, which may contain personally identifiable information, in a secure fashion. The risk to consumer information security in Lifeline was highlighted by a security breach and an associated FCC enforcement action. In 2013, an investigative reporter alerted two Lifeline providers that documents submitted by Lifeline applicants were being stored on an unprotected Internet site. The providers notified FCC, prompting an investigation.
The investigation found that, from September 2012 through April 2013, two Lifeline providers stored sensitive information collected from subscribers to determine Lifeline eligibility in a format readily accessible via the Internet, exposing up to 300,000 subscribers' information to public view and to identity theft and fraud. This information included full SSNs, names, addresses, and other sensitive information. In October 2014, FCC proposed a penalty of $10 million.

FCC's planned National Verifier may address many of the issues we identified with FCC's and USAC's oversight of Lifeline provider operations if it is fully implemented by the current planned date of 2019. FCC officials told us that, as the National Verifier is rolled out, responsibility for eligibility determinations, storage of supporting documentation, and creation of all application forms will transfer to USAC.

Additionally, USAC has a process that allows Lifeline subscribers to submit complaints about their service, which could provide USAC with insights into provider operations, but we identified weaknesses in this process. USAC has information on its website informing subscribers to contact their provider if they are experiencing service issues, broken handsets, or billing disputes. If the provider does not resolve the issue, subscribers are directed to contact their state regulatory commission. The website then presents the option of contacting USAC about the issue and, as a final option, calling FCC for assistance. On the basis of our review of complaints recorded by USAC in 2014, some were closed after USAC referred them back to Lifeline providers without evidence that the subscriber's issue had been addressed. Some subscribers stated that they were having difficulty using Lifeline service, even though their carriers were potentially billing for, and receiving funds on behalf of, these individuals. Other complaints USAC received included service not working and phones that were never received. As previously discussed, we experienced a similar issue while conducting our undercover testing, with a Lifeline provider approving us for the program but not providing us with a phone or another method of using Lifeline. USAC told us that it plans to review and revise these processes to improve how it handles customer complaints.

USAC also conducts a separate review of Lifeline, but the review provides incomplete visibility over the providers. Specifically, USAC performs Program Quality Assurance (PQA) assessments to determine the improper-payment rate for Lifeline pursuant to federal statute and OMB guidance. The Improper Payments Information Act of 2002, as amended (IPIA), requires federal agencies to review the programs and activities they administer and identify those that may be susceptible to significant improper payments. For programs and activities identified as susceptible, agencies must annually estimate the amount of improper payments, implement actions to reduce improper payments, and report those estimates and actions. IPIA focuses on payments made by a federal agency, contractor, or an organization administering a federal program or activity. We have previously reported that improper payments have consistently been a government-wide issue despite efforts to identify their root causes and reduce them. FCC has determined that IPIA applies to the USF programs and that Lifeline is susceptible to significant improper payments.
When conducting PQA reviews for Lifeline, USAC reviews enrollment and recertification forms; FCC Form 497s for accuracy; subscriber listings for completeness; and duplicate subscribers with matching primary address, date of birth, and SSN. Using the results of these assessments, USAC calculates estimates of improper-payment rates and provides this information to FCC. According to FCC's Fiscal Year 2015 Agency Financial Report, the estimated 2015 improper-payment rate reported for Lifeline is 0.45 percent, or $7.3 million. USAC's reliance on Lifeline providers to determine eligibility and subsequently submit accurate and factual invoices creates a significant risk that improper payments will occur, and under current reporting guidelines these occurrences would likely go undetected and unreported. For example, the improper-payment rate resulting from the PQA assessments accounts for duplicate subscribers, missing or incomplete subscriber data, and other factors that identify various types of improper payments, but it does not account for payments made to Lifeline providers that claimed beneficiaries who were not actually enrolled in the qualifying programs or were otherwise ineligible. FCC officials told us, however, that FCC and USAC will be better able to include eligibility testing in future-year PQA testing given the new Lifeline rules pertaining to the retention of eligibility documentation. FCC officials told us that they have discussed these changes in Lifeline rules with OMB and that both parties agree that adding testing procedures in a methodical manner is reasonable and appropriate.

FCC Has Inconsistently Penalized Providers with Duplicate Lifeline Subscribers and Has Not Developed an Enforcement Strategy

FCC directed USAC in May 2011 to perform in-depth validations (IDVs) to uncover duplicative claims for Lifeline support. USAC was to do this by identifying and resolving instances of subscribers who received simultaneous Lifeline benefits from multiple Lifeline providers, as well as providers that had duplicate subscribers within their own subscriber lists. After identifying providers with duplicate subscribers, FCC was not consistent in the actions it took, as it penalized some but not all of those providers.

IDVs were conducted at the state level from 2011 to 2013 on 57 Lifeline providers, prior to the implementation of the NLAD database. During this process, USAC contacted subscribers it identified as having duplicate service and advised them that they had to choose a single Lifeline provider. According to information provided by USAC, the IDVs resulted in the identification of approximately 87,000 intracompany duplicate subscribers. Following the IDVs, FCC issued Notices of Apparent Liability that proposed penalties of approximately $94 million to 12 Lifeline providers believed to have willfully and repeatedly violated Lifeline rules by enrolling duplicate subscribers. As of October 2016, FCC had not yet determined the final penalties for these 12 Lifeline providers. We found, however, that FCC proposed penalties inconsistently against Lifeline providers that had duplicate subscribers. For example, USAC's IDVs determined that 41 Lifeline providers had intracompany duplicates; of these, FCC proposed penalties against 12. In some cases, Lifeline providers that FCC penalized had fewer duplicates than others that were not penalized.
For example:

- One Lifeline provider had received $8,300 in overpayments due to intracompany duplicate subscribers from February through April 2013, and FCC proposed a fine of $3.7 million. Another Lifeline provider received approximately $81,000 in overpayments from intracompany duplicates during the same period, plus approximately $250,000 in intracompany duplicate overpayments found during the course of the IDV review, and FCC did not propose a fine for having duplicate subscribers.

- FCC proposed a fine of $1.2 million against another Lifeline provider for approximately $8,000 in duplicate-subscriber overpayments, yet did not propose a fine against a Lifeline provider with approximately $16,000 in duplicate-subscriber overpayments identified through the IDV process.

As a result of FCC's actions, Lifeline providers that were issued a Notice of Apparent Liability for duplicate subscribers may have been prevented from expanding Lifeline service, while others with duplicates were unaffected. Officials from one Lifeline provider told us that California did not approve the company's petition to offer Lifeline service in that state because of the penalties levied against it.

According to FCC officials, FCC had been unable to issue a Notice of Apparent Liability against some providers because of the statute of limitations and delays in receiving IDV results from USAC. FCC is constrained by a statutory 1-year limitation, which begins when the violation occurs, on assessing forfeitures against carriers for Lifeline rule violations. FCC officials explained that the 1-year limitation has prevented FCC from attempting to assess fines against Lifeline providers when duplicate subscribers or other Lifeline rule violations were discovered near the end of this time frame. FCC told us that when the IDVs were initiated, there was no formalized process or strategy for how FCC would address Lifeline providers with duplicate subscribers. FCC proceeded with issuing Notices of Apparent Liability after reviewing the IDV results provided by USAC, though FCC officials were unable to provide us with information on when USAC provided the results of the IDVs to them.

According to Standards for Internal Control in the Federal Government, management should implement and document control activities through policies. FCC officials told us that the penalties FCC proposed to levy against Lifeline providers with identified duplicate subscribers were part of a particular enforcement strategy during that time, but they did not provide further details on that strategy. Further, according to FCC officials, in June 2015 the agency did not have a documented enforcement strategy for proposing penalties against Lifeline providers that retain duplicate subscribers. As of March 2017, FCC still does not have a documented enforcement strategy. FCC officials told us that because its Enforcement Bureau lacks resources to take action in all instances, targets for enforcement action are generally prioritized where a problem appears to be pervasive, represents a trend, affects many consumers, or reflects particularly egregious abuse. Grounding that approach in an articulated strategy, with a rationale and method for resource prioritization, could benefit FCC and the Lifeline providers against which it may choose to take action in the future.
For example, an enforcement strategy could help FCC and USAC allocate resources more effectively so that future IDVs are coordinated and any potential problems identified can be used for enforcement within the 1-year statutory time frame for enforcement actions. In addition, a strategy could help enhance the transparency of the reasoning behind any enforcement actions that FCC might take and maximize the effectiveness of enforcement activities.

Lifeline's large and diffuse administrative structure creates a complex internal control environment susceptible to significant risk of fraud, waste, and abuse. FCC's and USAC's limited oversight of important aspects of program operations further complicates the control environment, heightening program risk. For example, FCC and USAC have limited knowledge about whether individuals receiving Lifeline benefits are truly eligible and are receiving services from providers before Lifeline providers are paid, or about whether Lifeline providers use the state eligibility databases available to them. Nevertheless, while some academic studies have raised questions about whether Lifeline is a costly and inefficient means of achieving universal service, FCC has not evaluated the program to determine whether it is efficiently and effectively meeting its goals, as we recommended in our March 2015 report. In March 2016, FCC expanded the program's performance goals by including subsidies for broadband service. However, FCC lacks information about the potential impact of the expansion and about the extent to which it is meeting its goals of telephone subscribership, as FCC reported that 96 percent of low-income qualifying households already have phone service. The expansion to broadband may face many of the challenges that arose in 2008 when Lifeline expanded to include non-facilities-based wireless service. In light of our findings, we believe that our March 2015 recommendation remains valid and relevant.

While FCC established a budget mechanism for the first time in 2016, FCC did not establish requirements for approving any additional Lifeline spending beyond budget levels in a timely manner. If the budget is exceeded in the future, absent a requirement for the Commissioners to review and approve additional spending in a timely manner, a year or more could pass before the Commission takes any action, all of which limits the budget's ability to control costs.

FCC and USAC have taken steps to address issues we have raised about the eligibility of subscribers by improving controls to prevent and detect duplicate enrollment through NLAD. In addition, FCC's 2016 order establishing a National Verifier, if implemented as planned, could further help to address weaknesses in the eligibility-determination process. In the interim, as evidenced by our data analysis and undercover testing results, relying on thousands of private companies to verify eligibility creates significant risks. Further, providers may lack access to, or be unaware of, tools available to them to help facilitate such verification. Maintaining and disseminating an updated list of state eligibility databases would better position providers to have and use such information. New challenges may also arise given that the 2016 reform order now allows broadband providers to bypass the state ETC designation process and instead receive designation from FCC, potentially limiting the states' ability to guard against waste and abuse.
This change is concerning, as our review of FCC's current ETC designation and compliance plan review process found that FCC has a significant backlog, in part because it has not established time frames for completing such reviews. FCC also does not have documented instructions with criteria for how to evaluate Lifeline compliance plans.

Although classified as federal funds, the USF, with net assets of $9 billion, is maintained outside the Treasury in an account with a private bank. As a result, OMB observed, USF funds do not enjoy the same rigorous management practices and regulatory safeguards as funds for other federal programs. In an effort to improve management and oversight of the funds, FCC has developed a preliminary plan to move them to the Treasury. While acknowledging that plan, we note that several years have passed since this issue was brought to FCC's attention. Further, the preliminary plan would not result in the funds actually being moved to the Treasury until next year at the earliest, which means the risks that FCC identified will persist and the benefits of having the funds in the Treasury will remain unrealized in the near term.

Moreover, USAC's ability to provide oversight for the collection and disbursement of billions of dollars of USF funds is complicated by many factors, including the challenge of ensuring that over 6,000 telecommunications carriers pay USF contributions correctly and do not overcharge USF fees to millions of customers when those fees are passed through to end users. USAC's contribution audits were conducted on less than 1 percent of carriers for the period we reviewed and typically found that carriers collected and contributed incorrect amounts of USF fees. When overpayment of USF fees was identified, FCC did not consistently follow up on audit findings to ensure that customers were reimbursed and that the overcharges stopped. FCC recently initiated a new referral process to help address this issue.

When FCC takes action to address program violations, it does so inconsistently, likely because it has not established an enforcement strategy. FCC has also not yet responded to USAC requests for guidance on whether technologies, such as text services, require USF fees. As a result, some carriers collect more from customers and pay more into the fund than others for the same service, though USF fees are required by law to be paid on an equitable and nondiscriminatory basis. Further, when carriers pass through USF charges via line items on customer bills, USAC's contribution audits do not determine whether the labeling meets FCC Truth-in-Billing rules, which are intended to help ensure that customer bills are transparent and that charges are appropriately labeled and described so that consumers can detect and prevent unauthorized charges. Taking action to address these weaknesses would help FCC address the risks we identified.
To address control weaknesses and related program-integrity risks we identified in Lifeline, we recommend that the Chairman of FCC

- require Commissioners to review and approve, as appropriate, spending above the budget in a timely manner;
- maintain and disseminate an updated list of state eligibility databases available to Lifeline providers, including the qualifying programs those databases access to confirm eligibility; this step would help ensure that Lifeline providers are aware of state eligibility databases and that USAC audits of Lifeline providers can verify that available state databases are being used to verify subscriber eligibility;
- establish time frames for evaluating compliance plans and develop instructions with criteria for how FCC reviewers should evaluate these plans to meet Lifeline's program goals; and
- develop an enforcement strategy that details which violations lead to penalties, and apply it as consistently as possible to all Lifeline providers to ensure consistent enforcement of program violations; the strategy should include a rationale and method for resource prioritization to help maximize the effectiveness of enforcement activities.

To address our findings regarding the USF, we recommend that the Chairman of FCC

- take action to ensure that the preliminary plans to transfer the USF funds from the private bank to the U.S. Treasury are finalized and implemented as expeditiously as possible;
- require a review of customer bills as part of the contribution audit, including an assessment of whether the charges, including USF fees, meet FCC Truth-in-Billing rules with regard to labeling, so that customer bills are transparent and appropriately labeled and described, to help consumers detect and prevent unauthorized charges; and
- respond to USAC requests for guidance and address pending requests concerning USF contribution requirements to ensure that the contribution factor is based on complete information and that USF pass-through charges are equitable.

We provided a draft of this report to FCC and USAC for review and comment. In written comments, reproduced in appendix IV, FCC generally agreed with our recommendations. FCC and USAC both provided technical comments, which we incorporated as appropriate. USAC did not provide written comments on the draft report.

In commenting on our recommendations, FCC stated that it agreed with each recommendation or outlined actions it was already taking to address it. Regarding our recommendation that the commission respond to USAC requests for guidance and address pending requests concerning USF contribution requirements, FCC noted that it has resolved a number of long-standing requests from contributors and expects to address additional questions in the future, which is consistent with what we recommend. However, FCC went on to comment that the commission has referred the question of USF contribution reform to the Federal-State Joint Board on Universal Service; thus, these requests for guidance, as well as many of the remaining pending requests from contributors, may be resolved in that proceeding. Moreover, FCC commented that it recognizes the need for administrative efficiency but must respect the processes of the institutions in place, which are designed to ensure the long-term sufficiency and predictability of the USF.
In our report, we noted the steps FCC has taken in attempting to reform and modernize the USF contribution system, including FCC's 2012 Further Notice of Proposed Rulemaking and the recommendation on contribution reform that FCC sought in 2014 from the Federal-State Joint Board on Universal Service. However, these items have been pending for years, and the USAC guidance requests pertaining to carrier USF fee requirements have been pending since as far back as 2009. Therefore, we urge FCC to come to a resolution and respond to USAC requests for guidance in a timely manner, and to address pending requests concerning USF contribution requirements to ensure that the contribution factor is based on complete information and that USF pass-through charges are equitable.

Finally, FCC commented on our findings regarding its banking practices surrounding the USF. Specifically, in its letter FCC noted that USF funds currently are maintained in an account with a private bank but that it plans to move them to the Treasury. However, in May 2017, while reviewing a draft of this report, a senior FCC official informed us that FCC had experienced some challenges, such as coordinating across the various entities involved, that raised questions as to when, and perhaps whether, the funds would be moved as planned. Accordingly, we have revised the report and added a recommendation that FCC ensure that the preliminary plans to transfer the USF funds from the private bank to the Treasury are finalized and implemented as expeditiously as possible. We believe such a recommendation is warranted given the amount of time that has passed since FCC became aware of this issue and given the USF's $9 billion in net assets, as well as the potential risks and benefits cited by FCC when it initially made the decision to move the funds to the Treasury. We provided FCC and USAC with the revised portions of the report, including the new recommendation, for review and comment. FCC agreed with the additional recommendation, and USAC provided no comment.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairman of the FCC, the Chief Executive Officer of USAC, and interested congressional committees. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

This appendix discusses in detail our methodology for addressing four research questions: (1) the extent to which the Lifeline program (Lifeline) demonstrates effective performance towards program goals; (2) steps the Federal Communications Commission (FCC) and Universal Service Administrative Company (USAC) have taken to improve financial controls in place for Lifeline and the Universal Service Fund (USF), and any remaining weaknesses that might exist; (3) steps FCC and USAC have taken to improve subscriber-eligibility verification, and any remaining weaknesses that might exist; and (4) steps FCC and USAC have taken to improve oversight of Lifeline providers, and any remaining weaknesses that might exist.
To explore the extent to which Lifeline demonstrates effective performance towards program goals, we reviewed numerous documents, including FCC's 2012 Reform Order, FCC's 2016 Modernization Order, and Pew Research Center studies cited by FCC in support of expanding Lifeline to include broadband. We followed up on our 2015 work by reviewing two academic studies, referred to us by FCC, that evaluated the effect of Lifeline. Our prior work determined that these academic studies met our criteria for methodological quality. We also gained the perspectives of a range of stakeholders through interviews with program agency officials (FCC's Enforcement Bureau, Wireline Competition Bureau, Office of Managing Director, and Office of General Counsel); officials from Lifeline's program administrator (USAC); state officials (National Association of Regulatory Utility Commissioners); representatives from an advocacy group with members representing more than 200 national organizations (Leadership Conference on Civil and Human Rights); two of the largest Lifeline providers, according to annual disbursements received from Lifeline; and two telecommunications law firms representing numerous Lifeline providers.

To determine the steps taken by FCC and USAC to improve financial controls in place for Lifeline and the USF, and any remaining weaknesses that might exist, we examined USAC financial data, including USF bank-account statements, payment data, and financial reports. We performed a walk-through of USAC's processes to enter and approve Lifeline providers and administer USF disbursements. We analyzed 74 USF contribution audits with audit periods in calendar years 2007 through 2013 (approximately the past 5 years of contribution audits issued as of the time we requested them in December 2015). We reviewed USAC guidance requests; FCC Office of Inspector General reports; FCC orders, policies, and other key guidance; and Treasury guidance on fiscal policy. We interviewed officials from USAC's Internal Audit Division; a USF account manager and an attorney with the private bank that holds the USF; and officials from the U.S. Treasury, Bureau of the Fiscal Service, and FCC's Office of Inspector General. We also attended USAC board meetings.

To evaluate the steps FCC and USAC have taken to improve subscriber-eligibility verification, and any remaining weaknesses that might exist, we performed data analysis to identify potential improper payments using Lifeline's National Lifeline Accountability Database (NLAD) and other beneficiary databases, conducted covert testing of Lifeline providers while posing as Lifeline applicants, reviewed documentation discussing subscriber-validation and eligibility controls, and interviewed officials from FCC and USAC.

To identify potential improper payments, our Lifeline subscriber data analysis determined whether Lifeline subscribers who reported qualifying for the program through participation in another federal program were enrolled in the specific programs recorded in NLAD. We obtained NLAD data in November 2014; the data contained a snapshot of enrolled Lifeline subscribers as of that date. We selected the three largest qualifying programs identified by FCC to test the eligibility of subscribers in NLAD: the U.S. Department of Agriculture's Supplemental Nutrition Assistance Program (SNAP), the Department of Health and Human Services' Medicaid program, and the Social Security Administration's (SSA) Supplemental Security Income (SSI) program.
We obtained nationwide SSI eligibility data from SSA and obtained SNAP and Medicaid data from selected states, as these two programs' data are maintained at the state level. Specifically, we obtained SNAP eligibility data from five states and Medicaid eligibility data from six states; as a result, we obtained data from a nongeneralizable selection of states. We selected states based on the highest dollar amount of 2013 nontribal Lifeline disbursements, and we selected them to include states that do and do not have a third-party administrator that can make eligibility determinations and states that do and do not have an eligibility database that Lifeline providers can use to validate eligibility in a qualifying Lifeline program.

We obtained SNAP data from Florida, Georgia, Michigan, New York, and Ohio. Lifeline providers in these states received the largest Lifeline disbursements of NLAD-participating states. We identified Florida as a state with a third-party administrator that verifies eligibility. We identified Georgia, Michigan, and New York as states with an eligibility database that can be used to validate enrollment in a Lifeline qualifying program. Ohio did not have an eligibility database or third-party administrator at the time of our state selection.

We used Centers for Medicare & Medicaid Services (CMS) Medicaid Statistical Information System (MSIS) data to obtain Medicaid eligibility information from Florida, Georgia, Michigan, Nebraska, New York, and Ohio. Nebraska was another state identified as using a third-party administrator to verify eligibility and was selected as an alternative to Florida because, at the time of our state selection, the Florida Medicaid data were validated only through 2011. However, during the course of our audit, Florida validated Medicaid data that met our review time frame. Consequently, both states were included in our analysis of Medicaid eligibility data.

To assess the reliability of the different datasets, we interviewed officials from the agencies responsible for the respective databases to discuss data-related considerations and performed electronic testing to determine the validity of specific data elements in the federal and selected state databases that we used to perform our work. We also reviewed related documentation, including data layouts and information on database controls. On the basis of our discussions, documentation review, and our own electronic testing of the data, we concluded that the data fields used for this report were sufficiently reliable for the purpose of this engagement. However, we did identify issues in the NLAD data that suggested the potential for data-entry errors (such as a February 30 birthdate). We excluded cases that were clearly in error from our analysis.

We used the most up-to-date SNAP and MSIS data available at the time of our analysis. The six states selected for our Medicaid analysis had eligibility dates from the third quarter of 2012 through the most recent eligibility fiscal quarter available for each state at the time of our data request, which ranged from the third quarter of 2012 to the fourth quarter of 2014. Specifically, Medicaid eligibility data for Florida and Michigan were available through September 2013; for Nebraska and Ohio, through December 2013; and for Georgia and New York, through September 2014.
For our analysis of NLAD and Medicaid data, we matched only against Lifeline subscribers who enrolled prior to the end of the latest Medicaid eligibility data available for each state. States can take up to 3 years to adjust their Medicaid data, and as a result beneficiaries can be included or excluded retroactively. Because Medicaid data are collected and maintained by the states, the consistency, quality, and completeness of the data can vary from state to state. Our nationwide SSI eligibility data ranged from October 2012 to December 2014, and each of the five selected states' SNAP data ranged from October 2013 to December 2014; therefore, it was not necessary to exclude any Lifeline subscribers prior to matching. In the event that any Lifeline subscribers were shown as eligible only for the month of December 2014, they were nevertheless counted as a match and deemed likely eligible for Lifeline, even though NLAD data were only as of November 2014.

To ensure that our tabulations of unconfirmed eligibility were not overstated, we excluded any Lifeline subscribers who were enrolled in NLAD after the date range available for our review for each qualifying program. For example, if NLAD showed a subscriber enrolled in Lifeline in July 2014 and the corresponding qualifying program's enrollment data ran only through December 2013, then this subscriber was excluded from our matching results. To further prevent the possibility of overstating unconfirmed eligibility, we counted subscribers as likely eligible for Lifeline if the Lifeline subscriber was enrolled in the qualifying program at any time within the range of dates provided to us for each qualifying program we reviewed. For example, if NLAD showed a subscriber enrolled in April 2014 who did not enroll in the qualifying program until June 2014, we nevertheless counted the subscriber as a match and deemed the subscriber likely eligible for Lifeline. As a result, we are likely understating the unconfirmed match rate, as some individuals may have enrolled in the qualifying program after the Lifeline enrollment date. However, given the potential for data-entry errors in NLAD, there is also potential for overstatement of unconfirmed eligibility.

We then conducted work to determine whether each subscriber was enrolled in a Lifeline qualifying program. To do this, we matched NLAD data to the SNAP, Medicaid, and SSI data to identify potential improper payments. We compared the enrolled Lifeline subscriber identity information recorded in NLAD as of November 2014 to the SNAP, Medicaid, and SSI eligibility data. For the purpose of our analysis, we considered a subscriber in NLAD to be a likely match, and enrolled in SNAP, if at least four of the following five fields matched between NLAD and each state's SNAP data: subscriber first name; subscriber last name; subscriber date of birth; last four digits of the subscriber's Social Security number (SSN); and an exact address, zip code, and state match. We considered a subscriber listed in NLAD to be a likely match, and enrolled in SSI, if the subscriber first name, last name, date of birth, and last four digits of the SSN matched with the SSI program data. To ensure that our tabulations of unconfirmed eligibility did not overstate potential problems with the data, for SNAP and SSI we counted first and last name matches with inexact, but similar, spelling as a likely match and enrolled in the qualifying programs.
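To illustrate these matching rules, the following minimal sketch (in Python) applies them to illustrative records. The record layout, field names, date format, and the 0.85 name-similarity threshold are assumptions for illustration, not the code actually used in our analysis; the Medicaid and Death Master File variants, described next, differ only in which fields must agree.

    from datetime import date
    from difflib import SequenceMatcher

    def names_similar(a, b, threshold=0.85):
        # Inexact but similar name spellings count as a match (SNAP and SSI
        # rule); the 0.85 threshold is an assumed stand-in for that criterion.
        return SequenceMatcher(None, a.lower().strip(),
                               b.lower().strip()).ratio() >= threshold

    def plausible_record(rec):
        # Exclude records with clearly erroneous entries, such as an impossible
        # birthdate (e.g., February 30), before matching; assumes ISO dates.
        try:
            date.fromisoformat(rec["dob"])
            return True
        except ValueError:
            return False

    def snap_match(nlad, snap):
        # Likely SNAP enrollee if at least four of the five criteria agree.
        criteria = [
            names_similar(nlad["first_name"], snap["first_name"]),
            names_similar(nlad["last_name"], snap["last_name"]),
            nlad["dob"] == snap["dob"],
            nlad["ssn4"] == snap["ssn4"],
            (nlad["address"], nlad["zip"], nlad["state"]) ==
                (snap["address"], snap["zip"], snap["state"]),
        ]
        return sum(criteria) >= 4

    def ssi_match(nlad, ssi):
        # Likely SSI enrollee only if all four identity fields agree
        # (names may differ by similar spelling, per the rule above).
        return (names_similar(nlad["first_name"], ssi["first_name"])
                and names_similar(nlad["last_name"], ssi["last_name"])
                and nlad["dob"] == ssi["dob"]
                and nlad["ssn4"] == ssi["ssn4"])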
For Medicaid, by contrast, we considered a subscriber listed in NLAD to be a likely match enrolled in the qualifying program if the date of birth, last four digits of the SSN, and zip code matched exactly with each state's Medicaid data, because the Medicaid data we used did not contain beneficiary first- or last-name information. As a result of not using first or last names, our Medicaid matching may understate unconfirmed eligibility for Medicaid.

We also matched NLAD data against SSA's Death Master File (DMF) to identify subscribers who were listed as deceased at least 1 year prior to their initial Lifeline enrollment or their required annual Lifeline recertification. To ensure that our tabulations of Lifeline subscribers shown as deceased in the DMF were not overstated, we required an exact match between NLAD and the DMF on the following four fields: first name, last name, date of birth, and last four digits of the SSN.

The results of our data matching are not generalizable to any other state or qualifying Lifeline program. It is not possible to determine from data matching alone whether these matches definitively identify recipients who were not eligible for Lifeline benefits without reviewing the facts and circumstances of each case. For example, we could not identify from the data alone whether there were data-entry errors at the time of enrollment that incorrectly recorded the qualifying Lifeline program presented by the subscriber. Conversely, our matches may also understate the number of deceased individuals receiving assistance, because matching would not detect Lifeline subscribers whose identifying information in the Lifeline qualifying program data differed slightly from their identifying information in NLAD.

To test subscriber controls and the vulnerability to improper payments, we also conducted undercover testing of 19 Lifeline providers to determine whether we could obtain Lifeline service using fictitious eligibility documentation. We selected the providers with the largest 2014 Lifeline disbursements that allowed us to apply electronically or by telephone, fax, or mail. We submitted 21 Lifeline benefit applications or otherwise attempted to obtain service using false information and fabricated supporting documents. These undercover tests were for illustrative purposes and are not generalizable. We also reviewed FCC's Lifeline Reform and Modernization Orders, FCC and USAC documentation discussing subscriber controls, FCC guidance, and Lifeline enforcement actions and proposed penalties for violations of Lifeline rules.

To determine the steps FCC and USAC have taken to improve oversight of Lifeline providers, and any remaining weaknesses that might exist, we met with officials from FCC, FCC's Office of Inspector General, USAC, and two Lifeline providers. We reviewed FCC documentation, including Lifeline Reform Orders, Lifeline provider enforcement actions, and required Eligible Telecommunications Carrier (ETC) petitions and Lifeline compliance plans. We reviewed USAC documentation, including audits conducted by USAC and certified public-accounting firms, Lifeline subscriber complaints, and work performed to identify duplicate subscribers. We reviewed information on 93 USAC Lifeline Beneficiary and Contributor Audit Program (BCAP) audits. We also analyzed reports released by the FCC Office of Inspector General.
We conducted this performance audit from June 2014 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with investigative standards prescribed by the Council of the Inspectors General on Integrity and Efficiency.

Beneficiary and Contributor Audit Program (BCAP) audits are conducted on each Universal Service Administrative Company (USAC) program in accordance with generally accepted government auditing standards; their primary purpose is to ensure compliance with Federal Communications Commission (FCC) rules and program requirements and to assist in program compliance. As part of these audits, USAC determines whether the number of Lifeline subscribers that providers claim for reimbursement can be supported by the providers' internal records. The scope of these audits does not include work to determine whether Lifeline service was working for subscribers, or to determine the extent of any service issues and how many potential subscribers could be affected.

USAC officials told us that, before fiscal year 2016, many of the audited entities were randomly selected, and the selection process was designed to provide a wide variety of entities with regard to size and geographic location. Starting in fiscal year 2016, USAC implemented a risk-based selection method to audit the entities with the greatest risk. A small percentage of Lifeline providers and Lifeline disbursements undergo BCAP audits. Of the 93 BCAP Lifeline audits with audit periods covering Lifeline disbursements from 2010 to 2014, 13 were of providers that received less than $1,000 in support during the period reviewed by USAC.

In its 2012 Reform Order, FCC directed USAC to audit new carriers within the first year they begin receiving federal low-income Universal Service Fund (USF) support. FCC concluded that an initial audit would aid efficient administration of the program by confirming early on that new Eligible Telecommunications Carriers (ETC) are providing Lifeline service in accordance with program requirements. According to USAC, many of these required audits were of carriers with nominal numbers of subscribers and thus in receipt of nominal disbursements. Table 4 below illustrates the coverage of BCAP audits from 2010 to 2014, displaying the percentage of carriers that were audited and the percentage of total USAC Lifeline provider disbursements during these periods.

The audits we reviewed found that some carriers were not complying with Lifeline rules in some capacity, with findings such as inaccurate Lifeline subscriber claim reporting, inaccurate recertification reporting, and lack of required subscriber certification documentation. As part of a BCAP audit, USAC officials stated, they generally review a Lifeline provider's operations in one or two states during a 1-month period, regardless of how many states the provider operates in. USAC officials told us that when USAC notes a material issue that could affect the program at the holding-company level, the audit work is expanded.
For example, during an audit of one provider, USAC found the company was failing to deenroll subscribers, which led to a $10.9 million forfeiture assessed by FCC.

In addition to the contact named above, Dave Bruno (Assistant Director), Scott Clayton (Analyst-in-Charge), and Daniel Silva made key contributions to this report. Other contributors include Maurice Belding, Gary Bianchi, Clayton Clark, Julia DiPonio, Michelle Duren, Colin Fallon, Robert Graves, Scott Hiromoto, Mary Catherine Hult, Mitch Karpman, Lauren Kirkpatrick, Barbara Lewis, George Ogilvie, Joshua Parr, Ramon Rodriguez, and Julie Spetz.
In response to concerns about the nation's dependence on imported oil, Congress enacted the RFS program as part of the Energy Policy Act of 2005. This initial RFS required that a minimum of 4 billion gallons of biofuels be used in 2006, rising to 7.5 billion gallons by 2012. Two years later, the Energy Independence and Security Act of 2007 (EISA) expanded the biofuel target volumes and extended the ramp-up through 2022, establishing overall target volumes for biofuels that increase from 9 billion gallons in 2008 to 36 billion gallons in 2022. The EISA volumes can be thought of in terms of two broad categories, conventional and advanced biofuels:

- Conventional biofuel: Biofuels from new facilities must achieve at least a 20-percent reduction in greenhouse gas emissions relative to 2005 baseline petroleum-based fuels. The dominant biofuel produced to date is conventional corn-starch ethanol, although recently some conventional biodiesel has entered the fuel supply.

- Advanced biofuel: Biofuels other than corn-starch ethanol must achieve at least a 50-percent reduction in life-cycle greenhouse gas emissions compared with 2005 baseline petroleum-based fuels. This is a catch-all category that may include fuels made from any qualified renewable feedstock achieving at least a 50-percent reduction in life-cycle greenhouse gas emissions, such as ethanol derived from cellulose, sugar, or waste material. This category also includes the following:

  - Biomass-based diesel: Advanced biomass-based diesel must have life-cycle greenhouse gas emissions at least 50 percent lower than traditional petroleum-based diesel fuels.

  - Cellulosic biofuel: Advanced biofuel derived from any cellulose, hemicellulose, or lignin that is derived from renewable biomass must have life-cycle greenhouse gas emissions at least 60 percent lower than traditional petroleum-based fuels. This category may include cellulosic ethanol, renewable gasoline, cellulosic diesel, and renewable natural gas from landfills that can be used to generate electricity for electric vehicles or used in vehicles designed to run on liquefied or compressed natural gas.

The RFS required the annual use of 4 billion gallons of biofuels overall in 2006, rising to 36 billion gallons in 2022, with at least 21 billion gallons from advanced biofuels, effectively capping at 15 billion gallons the volume of conventional biofuels (primarily corn-starch ethanol) that may be counted toward the overall 2022 target. EPA administers the RFS in consultation with DOE and USDA. EPA's responsibilities for implementing the RFS include setting annual volume requirements and, in doing so, using its waiver authority to reduce statutory volume targets, if warranted.

As figure 1 shows, the structure of the volume targets allowed for blending of conventional corn-starch ethanol in the early years covered by the statute while providing lead time for the development and commercialization of advanced, and especially cellulosic, biofuels. However, these fuels have not been produced in sufficient quantities to meet statutory targets through 2016. As a result, since 2010, EPA has used its waiver authority to deviate from the statutory target volumes and has reduced the volume requirement for cellulosic biofuel every year, citing inadequate domestic supply, among other things (see fig. 2).
In December 2015—when EPA finalized the volume requirements for 2014, 2015, and 2016—the agency reduced the total renewable fuel requirement for those years. Effectively, this meant that EPA reduced the amount of conventional biofuels required under the program relative to the statutory targets for those years. Similarly, in the volume requirement proposed in May 2016, EPA also proposed reducing the total renewable fuel requirement for 2017 compared with the target volumes in the statute: from 24 billion to 18.8 billion gallons (see fig. 3). In both cases, EPA cited constraints in the fuel market's ability to accommodate increasing volumes of ethanol. EPA's use of this waiver authority has been controversial among some RFS stakeholders, and EPA's 2015 requirement currently faces legal challenges from multiple parties.

EPA's responsibilities for the RFS also include determining companies' compliance with the RFS. EPA regulates compliance using a credit system. Companies in the United States that refine or import transportation fuel must submit credits—called renewable identification numbers (RIN)—to EPA. Companies with such an obligation are known as "obligated parties." The number of RINs that an obligated party must submit to EPA is proportional to the volume of gasoline and diesel fuel that the obligated party produces or imports and depends on the total volume requirement EPA sets for the year in question (a simplified illustration of this calculation follows below). In accordance with EPA guidelines, a biofuel producer or importer assigns a unique RIN to a gallon of biofuel at the point of production or importation. When biofuels change ownership (e.g., are sold by a producer to a blender), the RINs generally transfer with the fuels. When a gallon of biofuel is blended or supplied for retail sale, the RIN is separated from the fuel and may be used for compliance or traded, sold, or held for use in the following year. Since biofuel supply and demand can vary over time and across regions, a market has developed for trading RINs. If a supplier has already met its required share and has supplied surplus biofuels for a particular biofuel category, it can sell the extra RINs to another entity or hold onto them for future use. An obligated party that faces a RIN deficit can purchase RINs to meet its obligation.

Since the establishment of the RFS, conventional corn-starch ethanol is the biofuel that has most often been blended with gasoline. After production, ethanol is blended into the gasoline either by the wholesale distributor or at the retail pump, with both requiring specialized tanks and pumping equipment. Retailers sell specific blends of gasoline and ethanol: E10 (up to 10 percent ethanol); E85 (51 to 85 percent ethanol); and, less typically, E15 (15 percent ethanol). E10 is the most widely used blend, representing the overwhelming majority of gasoline sales in the United States. The E85 blend is specifically used by flex fuel vehicles; currently, there are relatively few of these automobiles in the United States, and E85 stations are located primarily in the Midwest. The sale of the E15 blend is even less common than that of E85. For both E85 and E15, developing retail pump infrastructure has been a focus of USDA's Biofuel Infrastructure Partnership, which, beginning in 2015, has made $100 million available in matching grants in 21 states to install nearly 5,000 new retail pumps.
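As a simplified illustration of the RIN compliance mechanics described above, the sketch below (in Python) computes a hypothetical obligated party's annual obligation for one fuel category as a percentage standard applied to its non-renewable fuel volumes. The volumes and the 10 percent standard are assumptions for illustration, not actual EPA-published standards for any year.

    def renewable_volume_obligation(gasoline_gal, diesel_gal, pct_standard):
        # RINs an obligated party must retire for one fuel category:
        # the annual percentage standard applied to the gasoline and
        # diesel volumes it produces or imports.
        return pct_standard * (gasoline_gal + diesel_gal)

    # A refiner producing 500 million gallons of gasoline and 200 million
    # gallons of diesel, under an assumed 10 percent total renewable fuel
    # standard, would need to retire 70 million RINs.
    rins_owed = renewable_volume_obligation(500e6, 200e6, 0.10)
    print(f"RINs to retire: {rins_owed:,.0f}")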
In the years since the RFS was established, U.S. oil imports have decreased. Several factors contributed to the decrease in reliance on imported oil, including the use of E10 brought about by the RFS; however, other factors contributed more significantly. According to an April 2015 DOE report, at the same time that U.S. oil production was growing, U.S. oil consumption, and particularly consumption of gasoline, was falling. A number of factors led to the decrease in consumption, including historic fuel economy standards for light and heavy vehicles in recent years.

It is unlikely that the goals of the RFS will be met as envisioned, because there is limited production of advanced biofuels to be blended into domestic transportation fuels and limited potential for expanded production by 2022. In the absence of advanced biofuels, most of the biofuel blended under the RFS to date has been conventional corn-starch ethanol, which achieves smaller greenhouse gas emission reductions than advanced biofuels. In addition, further reliance on ethanol to meet expanding RFS requirements is limited by the incompatibility of ethanol blends above E10 with the existing vehicle fleet and fueling infrastructure.

It is unlikely that the goals of the RFS—to reduce greenhouse gas emissions and expand the nation's renewable fuels sector—will be met as envisioned because of this limited production and limited potential for expansion. As we report in GAO-17-108, advanced biofuels are technologically well understood, but current production is far below the volume needed to meet the statutory targets for these fuels. For example, the cellulosic biofuel blended into transportation fuel in 2015 was less than 5 percent of the statutory target of 3 billion gallons. Given current production levels, most experts we interviewed told us that advanced biofuel production cannot achieve the statutory target of 21 billion gallons by 2022. The shortfall of advanced biofuels is the result of high production costs, despite years of federal and private research and development efforts.

The RFS was designed to bring about reductions in greenhouse gas emissions by blending targeted volumes of advanced and, in particular, cellulosic biofuels, because those fuels achieve greater greenhouse gas reductions than conventional corn-starch ethanol and petroleum-based fuel. However, because advanced biofuel production is not meeting the RFS's targets, the RFS is limited in its ability to meet its greenhouse gas reduction goals as envisioned. According to several experts we interviewed, the investments and development required to make these fuels more cost-effective, even in the longer run, are unlikely in the current investment climate, in part because of the magnitude of investment and the expected long time frames required to make advanced biofuels cost-competitive with petroleum-based fuels.

In the absence of advanced biofuels, most of the biofuel blended under the RFS to date has been conventional corn-starch ethanol, which achieves smaller greenhouse gas emission reductions than advanced biofuels. As stated above, the use of corn-starch ethanol has been effectively capped at 15 billion gallons. As a result, further expansion of biofuel use will require increasing cellulosic biofuels and, according to this report's companion report (GAO-17-108), the most likely cellulosic biofuel to be commercially produced in the near- to midterm will be cellulosic ethanol.
However, reliance on adding more ethanol to the transportation fuel market to meet expanding RFS requirements is limited by the incompatibility of ethanol blends above E10 with the existing vehicle fleet and fueling infrastructure. Many experts and stakeholders refer to this infrastructure limitation as the "blend wall" (a rough arithmetic sketch of the blend wall follows this discussion). If ethanol continues to be the primary biofuel produced to meet the RFS, these infrastructure limitations will have to be addressed.

With regard to the existing vehicle fleet specifically, some experts told us that for most vehicles sold in the United States before 2015, the owner's manuals and warranties indicate that the vehicles should not use ethanol blends above 10 percent because of concerns about engine performance. Since 2011, EPA has issued waivers under the Clean Air Act allowing automobiles and light-duty trucks from model year 2001 and after to run on E15. However, many auto manufacturers contest this waiver, stating that automobile owners should follow their owner's manuals. The possibility that using ethanol blends higher than E10 will void vehicle warranties may be reducing demand for these higher blends. Flex fuel vehicles, which can run on ethanol blends up to E85, have entered the vehicle fleet but, as of 2016, made up less than 10 percent of the total fleet, which may also limit the potential demand for higher blends of ethanol. Further, several experts told us there is little demand from the public for E85 because the fuel offers lower gas mileage than E10 or E15 and prices of E85 do not reflect the need to refuel more frequently. Some experts told us that the demand for E85 has not been truly tested because the public (including owners of flex fuel vehicles) is largely undereducated about E85.

With regard to the fueling infrastructure, some experts stated that ethanol blends higher than E10 are largely incompatible with existing distribution and retail fueling tanks and pumps in the United States and that there are few incentives for fuel distributors and retailers to make the changes that would be needed to accommodate higher blends. Retail sale of these higher blends faces three key challenges:

- Compatibility. Ethanol blends higher than E10 may degrade or damage some materials used in existing underground storage tank systems and dispensing equipment such as pumps, potentially causing leaks.

- Cost. Because of concerns over compatibility, new storage and dispensing equipment may be needed to sell intermediate blends at retail outlets. The cost of installing a single-tank underground storage system compatible with intermediate blends is more than $100,000, and the cost of installing a single compatible fuel dispenser is over $20,000.

- Liability. Because EPA has authorized E15 for use in model year 2001 and newer automobiles—but not for pre-2001 vehicles or nonroad engines—many fuel retailers are concerned about potential liability if consumers mistakenly use E15 in their older automobiles or nonroad engines.
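The blend wall can be seen with rough arithmetic; in the sketch below (in Python), the gasoline demand figure is an assumption for illustration, not an official projection.

    # Rough illustration of the E10 "blend wall" described above.
    gasoline_demand_gal = 140e9   # assumed annual U.S. gasoline demand
    ethanol_absorbed_at_e10 = 0.10 * gasoline_demand_gal
    conventional_cap_gal = 15e9   # statutory cap on corn-starch ethanol

    print(f"Ethanol absorbed at E10: {ethanol_absorbed_at_e10 / 1e9:.0f} billion gallons")
    # At E10, the market absorbs about 14 billion gallons of ethanol, just
    # short of the 15-billion-gallon conventional cap, so growth in total RFS
    # volumes must come through E15, E85, or non-ethanol biofuels.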
Several experts raised concerns about the extent to which the RFS is achieving its targeted greenhouse gas emissions reductions, given that most biofuel blended under the RFS is corn-starch ethanol. More specifically, some experts were critical of the life-cycle analysis EPA used to determine the greenhouse gas emissions reductions for corn-starch ethanol. This criticism focuses on whether the model accurately accounts for all greenhouse gas emissions in the corn-starch ethanol production process. Some experts said that EPA's life-cycle analysis is flawed because it does not sufficiently account for indirect land use change. Further, as previously stated in this report, corn-starch ethanol plants that were in operation or under construction before December 19, 2007, were not subject to the requirement to reduce greenhouse gas emissions by at least 20 percent. According to an August 2016 EPA Inspector General report, grandfathered production that is not subject to any greenhouse gas reduction requirements was estimated to be at least 15 billion gallons, or over 80 percent of today's RFS blending volume. Moreover, some experts noted that, because these facilities are grandfathered under the RFS, they have no incentive to lower their greenhouse gas emissions.

Some experts told us that the RFS creates a perverse incentive to import Brazilian sugarcane ethanol. Specifically, because sugarcane ethanol qualifies as an advanced biofuel, it is more profitable to import this fuel than to produce advanced biofuels domestically. According to these experts, the import of sugarcane ethanol, which occurs to meet RFS requirements, causes significant greenhouse gas emissions as a result of fuel burned during shipping.

While advanced biofuels are not likely to be produced in sufficient quantities to meet the statutory targets, experts identified actions that they suggested could incrementally improve investment in advanced biofuels and may lead to greater volumes of these fuels being produced and used in the longer term. Experts also identified actions to increase the compatibility of infrastructure with higher ethanol blends. The following actions, experts suggested, could incrementally improve the investment climate for advanced biofuels and possibly encourage the large investments and rapid development required to make these fuels more cost-effective.

Addressing uncertainty about the future of the RFS: Many experts told us that uncertainty about the future of the RFS is limiting investment in advanced biofuels. In particular, some experts stated that the possibility of a repeal of the RFS has caused potential investors to question whether the RFS will continue to exist until 2022 and beyond. According to these experts, however, in the current political climate little can be done to address the threat of a repeal. EPA may be able to improve the investment climate for advanced biofuels by clarifying its plans for managing the program in upcoming years. Specifically, statutory volume targets have been set through 2022; after that, EPA will be responsible for setting these volumes. One expert said that if EPA provided more insight into its plans for setting post-2022 volume targets, it could reduce some of this investment uncertainty. Further, the annual requirement that EPA finalized in 2015 triggered what is commonly referred to as the "reset provision" of the RFS for the advanced biofuel and cellulosic biofuel categories. The reset provision requires EPA to modify the statutory volume targets for future years if certain conditions are met (see sidebar). Although the statute provides factors for EPA to consider when modifying these volumes, EPA has not specified how it will approach setting volumes under this reset provision. As a result, several experts thought that uncertainty about the volumes of advanced and cellulosic biofuels affected by the reset may be limiting investments in these fuels.
Some experts thought that EPA should clarify how it will implement the reset to reduce negative impacts on investments in advanced biofuels. EPA officials told us that recent annual volume requirements make EPA's intent clear in the near term.

Providing more consistent subsidies to advanced biofuel producers: Some experts stated that the Second Generation Biofuel Producer Tax Credit—an incentive to accelerate commercialization of fuels in the advanced and cellulosic biofuel categories—has expired and been reinstated (sometimes retroactively) about every 2 years, contributing to uncertainty among cellulosic fuel producers and investors. These experts told us that investment in cellulosic biofuels could be encouraged, in part, by maintaining the Second Generation Biofuel Producer Tax Credit consistently, rather than allowing it to periodically lapse and be reinstated. Specifically, one expert suggested three major changes to the advanced biofuel tax credits:

- Extending the tax credit long term (e.g., 10 years), until a cumulative level of second-generation biofuel has been produced and costs have fallen, to provide investors sufficient certainty of investment return on the large investment of building a biofuel plant.

- Making the producer tax credit refundable to guarantee that biofuel producers receive the subsidy in the early years, when they are carrying losses.

- Coupling the producer tax credit with an investment tax credit to decrease capital costs and improve the financial incentives for building cellulosic biofuel plants.

Expanding the types of fuel that qualify for the RFS: The current RFS framework specifies that qualifying biofuels must be derived from biomass-based feedstocks. According to some experts, this excludes some types of low-carbon fuels from qualifying under the RFS. One example provided by experts is a process that uses microbes to capture carbon from industrial sources—such as the waste gas emitted from steel production—to produce a fuel with lower greenhouse gas emissions than petroleum-based fuels. However, because this fuel is not derived from renewable biomass, it does not qualify for any RFS category. According to these experts, expanding the RFS to include fuel types such as this would better incentivize investment in innovative technologies.

Reducing RIN fraud and price volatility: Some experts said that a lack of transparency in the RIN trading market has led to an increased risk of fraud and increased volatility of RIN prices, causing uncertainty among potential investors.

RIN fraud: From the beginning of the RFS program, there have been concerns surrounding RIN generation and the RIN market. Because RINs are essentially numbers in a computerized account, there have been errors and opportunities for fraud, such as double counting RINs or generating RINs for biofuels that do not exist. To address concerns over these issues, EPA established an in-house trading system called the EPA Moderated Transaction System (EMTS). However, EPA has maintained that verifying the authenticity of RINs is the duty of obligated parties. Under this "buyer beware" system, those purchasing or receiving RINs must verify the RINs' validity on their own, and they are responsible for any fraudulent RINs they sell or submit to EPA for compliance. However, fraud cases in the last few years have raised questions about whether this "buyer beware" system is sufficient to deter fraud.
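As a minimal sketch of the kind of screen a RIN buyer might run under this "buyer beware" approach, the following (in Python) flags RINs offered for sale more than once; the RIN strings are hypothetical, and nothing in this report describes actual EMTS tooling of this kind.

    def find_duplicate_rins(purchased_batches):
        # Flag any RIN that appears in more than one purchased batch, a simple
        # screen for the double-counting problem described above.
        seen, duplicates = set(), set()
        for batch in purchased_batches:
            for rin in batch:
                if rin in seen:
                    duplicates.add(rin)
                seen.add(rin)
        return duplicates

    purchases = [["RIN-0001", "RIN-0002"], ["RIN-0002", "RIN-0003"]]
    print(find_duplicate_rins(purchases))  # {'RIN-0002'} was sold twice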
Furthermore, obligated parties that inadvertently purchase fraudulent RINs lose the money spent to purchase them, must purchase additional RINs to meet their obligations, and face additional costs. This has a disproportionate effect on small refiners: whereas large obligated parties—in particular, vertically integrated refiners that typically own blending operations—can generate RINs by blending fuel, small refiners do not blend fuel and must purchase their RINs on the market to meet their obligations, and they are therefore more likely to be adversely affected by fraudulent RINs.

RIN price volatility: Further, according to some experts, price volatility in RIN markets adversely affects small refiners in particular and leads to uncertainty among investors. While most RINs are bought and sold through private contracts registered with the EMTS, there are also spot markets for RINs. Some experts told us that price volatility may be due, in part, to nonobligated parties speculating in these spot markets. According to one expert, the current system leaves small refiners disproportionately exposed to RIN price fluctuations because, as previously discussed, they must purchase their RINs on the market. Such price fluctuations introduce uncertainty for small refiners about the costs of compliance with the RFS.

These concerns about RIN fraud and price volatility have led to uncertainty among potential investors. Some experts told us that EPA should make RIN market trading more open and transparent, like other commodity markets, which could reduce the potential for fraudulent RIN activities and reduce RIN price volatility. EPA officials told us that EPA has recently begun to publish aggregated data on RIN transactions and biofuel volume production on its website in an effort to make the RIN market more transparent. However, it is too early to know how effective this will be in addressing fraud and price volatility.

Several experts suggested that expanding grants to encourage infrastructure improvements, such as USDA's Biofuel Infrastructure Partnership, could increase both the availability and the competitiveness of higher blends at retail stations nationwide. Currently, through this partnership, USDA is investing $100 million to install nearly 5,000 pumps offering high ethanol blends in 21 states. Some experts also said that blender pumps are not being installed with the density required to test demand. One expert suggested that, instead of installing blender pumps at all the stations of a certain brand in a region, blender pumps should be installed at all the stations at a specific road intersection. That way, these stations would be forced to compete with each other, which this expert told us would result in more competitive prices at the pump and increased incentives to make improvements to fueling infrastructure. Further, one expert suggested that dealers educate consumers about flex fuel vehicle features when the vehicles are delivered, as dealers previously did when on-board diagnostic, check-engine-light, and Bluetooth synchronizing features were introduced. Under these conditions, demand for higher ethanol blends—and E85, in particular—could be better tested. In response to these concerns and suggestions, a USDA official told us that, while it is not mandatory that installations meet a required geographic density, many of the blender pumps could be installed on highway corridors, which could encourage competition.
This official also told us that, in addition to expanding infrastructure for higher ethanol blends, the Biofuel Infrastructure Partnership will be able to provide data associated with testing demand for E85, including data on pricing, consumer education, and next steps for the program. In addition, in October 2016, EPA proposed an update to fuel regulations to allow expanded availability of high ethanol fuel blends for use in flex fuel vehicles. EPA is proposing revisions to its gasoline regulations to make clear that E16 through E83 fuel blends are not gasoline and hence are not fully subject to gasoline quality standards. EPA believes these revisions will increase demand for higher ethanol blends.

Some experts said that blenders, instead of importers and refiners, should be the obligated parties, because that would lead to more rapid investments in infrastructure for higher ethanol blends. According to some experts, when EPA designed the RFS, it placed the obligation for compliance on the relatively small number of refiners and importers rather than on the relatively large number of downstream blenders in order to minimize the number of obligated parties to be regulated and make the program easier to administer. However, these experts told us that obligating refiners and importers has not worked to incentivize investors to expand infrastructure for higher ethanol blends. Specifically, increasing consumer demand for biofuels—and the corresponding incentives to invest in biofuel infrastructure—requires that the value of the RIN be "passed through" to consumers. Because the RIN that accompanies a gallon of biofuel has value for demonstrating compliance with the RFS, or when sold in the market, it can be used to offset the higher cost of the biofuel and make it more competitive with petroleum-based fuels. Making biofuels more competitive gives retailers an incentive to build the infrastructure required to sell more of these fuels (i.e., higher ethanol blends). According to some experts and industry stakeholders, this pass-through has not been occurring as envisioned with refiners and importers as the obligated parties. One expert stated that, because blenders either are retailers or sell to retailers, blenders would be better situated to pass RIN savings along to consumers. This in turn might encourage demand for higher ethanol blends and incentivize infrastructure expansion. EPA officials told us they have received several petitions requesting that they consider changing the point of obligation and are evaluating those petitions.

Several experts stated that the RFS is not the most efficient way to achieve the program's goal of reducing greenhouse gas emissions, and they suggested policy alternatives—in particular, a carbon tax and a low carbon fuel standard (LCFS). Some experts stated that the design of the RFS may undermine its ability to achieve the greatest greenhouse gas emissions reductions. Specifically, some experts said that the RFS does not incentivize the production of the advanced biofuels that achieve the greatest greenhouse gas emission reductions. For example, a cellulosic fuel that reduces greenhouse gas emissions by 80 percent receives no more credit under the RFS than one that reduces emissions by 60 percent, the baseline for the cellulosic category. As a result, fuels that may be slightly more costly to produce but achieve far greater greenhouse gas reductions may not be developed and brought to market.
Further, one expert stated that the RFS design creates a market rebound effect. That is, increasing the supply of biofuels tends to lower energy prices, which encourages additional fuel consumption that may actually result in increased greenhouse gas emissions.

Several experts suggested that a carbon tax or an LCFS would be more efficient at reducing greenhouse gas emissions. Specifically, some experts said that, whereas the RFS creates disincentives for the production of cellulosic fuels that achieve the greatest greenhouse gas emission reductions, a carbon tax or LCFS would incentivize the technologies that achieve the greatest reductions in greenhouse gas emissions at the lowest cost. According to one expert, a carbon tax would eliminate the need for annual volume requirements and the accompanying program management and oversight. Under a carbon tax, each fossil fuel would be taxed in proportion to the amount of greenhouse gas (carbon dioxide) released in its combustion. In addition, one expert stated that a carbon tax is preferable to the RFS because it allows market effects to increase the price of emission-causing activities, which decreases demand for those activities. As a result, it could sustain consumers' interest in fuel-saving vehicles and would result in a wide range of fuel-saving responses from all consumers (rather than just those purchasing a new vehicle). However, some experts also noted that a carbon tax would force further electrification of light-duty transportation because the electric power sector is the cheapest sector from which to obtain greenhouse gas reductions. According to one expert, this electrification of the light-duty fleet might further limit research and development of biofuels, in effect undermining the RFS goal to expand that sector.

In light of this, several experts said that an LCFS would be more flexible and efficient than the RFS at developing biofuels that achieve the greatest greenhouse gas reductions. Specifically, an LCFS accounts for carbon on a cost per unit of carbon intensity basis, weighing each fuel's cost against its greenhouse gas intensity and thereby supporting incremental carbon reductions. An LCFS can be implemented in one of two ways. The first involves switching to direct fuel substitutes (e.g., drop-in fuels) or blending biofuels with lower greenhouse gas emissions directly into gasoline and diesel fuel. The second involves switching from petroleum-based fuels to other alternatives, such as natural gas, hydrogen, or electricity, because a low carbon fuel standard would allow a wider array of fuel pathways than the RFS. Under the first scenario, an LCFS would promote biofuel usage, rather than incentivizing electrification of the light-duty vehicle fleet. As a result, according to some experts, an LCFS is preferable to a carbon tax because it more efficiently reduces greenhouse gas emissions and promotes the expansion of the biofuel sector. However, other experts we spoke with critiqued an LCFS as being uneconomical. Specifically, one expert stated that, while an LCFS such as the one in California could force technology and create greenhouse gas reductions in the fuel market, the costs of implementing an LCFS are much higher than its benefits.

We provided a draft of this product to EPA for comment. In its written comments, reproduced in appendix III, EPA generally concurred. EPA also provided technical comments, which we incorporated as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies to the appropriate congressional committees and to the Administrator of the EPA. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

The objectives of this report were to provide information on (1) whether the Renewable Fuel Standard (RFS) is expected to meet its goals, (2) expert views on any federal actions that could improve the RFS framework, and (3) policy alternatives experts suggested to better meet the goals of the RFS in the future. To address our objectives, we contracted with the National Academy of Sciences to provide us with a list of experts on issues related to the RFS, including the current structure of the RFS; blending, distribution, and marketing infrastructure of biofuels; automobile manufacture; and petroleum consumption and prices. The National Academy of Sciences identified 25 experts, including experts from academia and policy think tanks and practitioners with relevant experience. Areas of expertise included policy analysis of the RFS, first-hand knowledge of the production and distribution of biofuels and flex fuel vehicles, and the economic and environmental ramifications of the RFS. We conducted semistructured interviews and performed a content analysis of the 24 experts' responses to our questions. For reporting purposes, we categorized expert responses as follows: "nearly all" experts represents 21 to 23 experts, "most" experts represents 16 to 20 experts, "many" experts represents 11 to 15 experts, "several" experts represents 6 to 10 experts, and "some" experts represents 2 to 5 experts. See appendix II of this report for a list of experts whose names were provided by the National Academy of Sciences.

We also reviewed public comments from stakeholders, relevant legislation, and agency documents pertaining to annual volume requirements (e.g., the Environmental Protection Agency's (EPA) response to public comments) and conducted a literature search for research related to the RFS. In addition, we interviewed officials at EPA, the Department of Energy (DOE), and the Department of Agriculture (USDA). We also interviewed Congressional Research Service officials who have conducted extensive work on the RFS. To provide expert views on actions needed to address these challenges and meet the goals of the RFS in the future, we used our content analysis of the experts' responses, which identified possible actions within the current RFS structure, changes to the RFS structure, and policy alternatives to the RFS. Finally, this report drew from a companion report, GAO-17-108, that examined federal research and development in advanced biofuels and related issues.

We conducted this performance audit from June 2015 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The experts identified by the National Academy of Sciences (see app. II) included individuals affiliated with the National Corn to Ethanol Research Center; Automotive Fuels Consulting, Inc.; the General Motors Research and Development Center (retired); the University of Michigan Energy Institute; Thorntons Inc.; and Ford Motor Company (retired).

Frank Rusco, (202) 512-3841 or [email protected]. In addition to the individual named above, Karla Springer (Assistant Director), Jessica Artis, and Jarrod West made key contributions to this report. Luqman Abdullah, Richard Burkard, Cindy Gilbert, Robert Keane, Scott McClinton, Cynthia Norris, and Dan Royer also made important contributions.

The RFS generally mandates that domestic transportation fuels be blended with increasing volumes of biofuels through 2022, with the goals of reducing greenhouse gas emissions and expanding the nation's renewable fuels sector while reducing reliance on imported oil. Annual targets for the volumes of biofuels to be blended are set by statute. EPA oversees the program and is responsible for adjusting the statutory targets through 2022 to reflect expected U.S. industry production levels, among other factors, and for setting biofuel volume targets after 2022. Biofuels included in the RFS are conventional (primarily corn-starch ethanol) as well as various advanced biofuels (including cellulosic ethanol and biomass-based diesel). Advanced biofuels emit fewer greenhouse gases than petroleum and corn-starch ethanol.

GAO was asked to review challenges to the RFS and their possible solutions. This report provides information on whether the RFS is expected to meet its goals, as well as expert views on any federal actions that could improve the RFS framework, among other things. GAO worked with the National Academy of Sciences to identify experts on issues related to the RFS. GAO interviewed these experts and analyzed their responses. This report also drew on published studies and a companion report, GAO-17-108, that examined federal research and development in advanced biofuels and related issues. EPA generally agreed with the report.

It is unlikely that the goals of the Renewable Fuel Standard (RFS)—reduce greenhouse gas emissions and expand the nation's renewable fuels sector—will be met as envisioned because there is limited production of advanced biofuels to be blended into domestic transportation fuels and limited potential for expanded production by 2022. Advanced biofuels achieve greater greenhouse gas reductions than conventional (primarily corn-starch ethanol), while the latter accounts for most of the biofuel blended under the RFS. As a result, the RFS is unlikely to achieve the targeted level of greenhouse gas emissions reductions. For example, the cellulosic biofuel blended into the transportation fuel supply in 2015 was less than 5 percent of the statutory target of 3 billion gallons. In part as a result of low production, EPA has reduced the RFS targets for advanced biofuels through waivers in each of the last 4 years (see figure). According to experts GAO interviewed, the shortfall of advanced biofuels is the result of high production costs, and the investments in further research and development required to make these fuels more cost-competitive with petroleum-based fuels even in the longer run are unlikely in the current investment climate. Experts cited multiple federal actions that they suggested could incrementally improve the investment climate for advanced biofuels.
For example, some experts told GAO that maintaining a consistent tax credit for biofuels, rather than allowing it to periodically lapse and be reinstated, could reduce uncertainty and encourage investment in advanced biofuels.
As shore-based units located along the nation's coasts and interior waterways, the Coast Guard's 188 multimission stations conduct a wide range of operations, from rescuing mariners in distress to patrolling ports against acts of terrorism. The stations are involved in all Coast Guard programs, including search and rescue, port security, recreational and commercial fishing vessel safety, marine environmental response, and law enforcement (drug and migrant interdiction). Their involvement varies geographically from one Coast Guard district to the next, depending on differing conditions among regions. Some program operations also vary depending on the season—for example, search and rescue operations are greater in the summer, when recreational boating is more active, and lower in the winter. Because stations are traditionally associated with search and rescue operations, they can be compared to fire stations, in the sense that crew members remain at the station for extended periods, on duty, ready to respond to an emergency.

Stations range in size from as few as 4 personnel at seasonal stations to as many as 60 personnel at larger stations. Individual stations are usually commanded by a command cadre consisting of an officer-in-charge—such as a senior chief petty officer—an executive petty officer, and an engineering petty officer. The command cadre is responsible for overseeing personnel, equipment, and mission-related issues. In support of operations, the stations also provide unit-level (on-the-job) training as well as equipment and minor boat maintenance. As shown in table 1, stations employ personnel in numerous occupations, but the principal staff usually consists of boatswain's mates—those who operate the boats and carry out many station duties. In addition to performing essential station responsibilities, boatswain's mates can undergo additional training for more advanced occupations, such as a coxswain (a boat driver) or a surfman (a coxswain who is qualified to operate boats in heavy weather and high surf conditions).

Like the number of personnel, the number of boats at stations varies. Small, seasonal stations may have only one boat, while larger stations can have as many as nine. Table 2 describes the types of boats stations typically operate. (See app. II for pictures of selected boats.)

All station personnel are required to wear personal protection equipment (PPE) while operating or riding in a boat. Coast Guard personnel use PPE to protect against various dangers, such as inclement weather and cold water exposure. PPE includes items such as life vests, helmets, goggles, gloves, cold weather protection suits, thermal underwear, and electronic location devices. (See app. II for more information on the nature and use of PPE.)

Following the events of September 11, the Coast Guard created a new program area for homeland security operations—the Ports, Waterways, and Coastal Security (PWCS) program. The type and frequency of PWCS activities performed by stations vary depending on whether a station is located in a port area or in a nonport area. Stations located in or near a port tend to perform more PWCS tasks, such as patrolling, escorting vessels, and other duties.
The responsibilities can vary by port, however, depending on several factors, including the availability of other Coast Guard units to share in operations, the strategic importance of the port, and the support of non-Coast Guard entities—such as state and local agencies—in both homeland and nonhomeland security activities. Although stations located in nonport areas also conduct PWCS operations, such as patrolling waterways, they tend to have fewer PWCS responsibilities. In general, stations located in nonport areas do not have the responsibility of maintaining the security of critical infrastructure, high-profile vessels, or shore operations as do stations located in port areas. As tactical units, stations do not determine the nature or frequency of their tasks; rather, they carry out the tasks assigned to them by operational units, which provide oversight as well as operational and administrative support to the stations.

In 2001, studies by the Office of Inspector General (OIG) and a Coast Guard internal review team found that readiness conditions at multimission stations had been deteriorating for over 20 years. The studies, which had largely consistent findings, identified readiness concerns in the areas of staffing, training, and boats and presented recommendations for addressing these concerns. Table 3 presents selected findings from these studies, as well as congressional concerns regarding station readiness. In December 2002, in response to a recommendation from the OIG and at the direction of the Senate Appropriations Committee, the Coast Guard developed a draft strategic plan to maintain and improve essential capability of all its boat force units, including stations. The plan recognized that stations did not have sufficient resources to be fully capable of meeting all their workload requirements and that it would take both increases in resources as well as "more judicious tasking by operational commanders" to address the imbalance. In its 2003 report on station operations, the OIG criticized the plan for being too general in nature, specifically regarding how and when the Coast Guard would increase staffing, training, equipment, and experience levels at stations.

Added security responsibilities after the September 11 attacks had a definite—but as yet unmeasured—impact on stations' readiness needs. Stations have seen a substantial increase in their security workload, along with a shifting of activity levels in other missions. The effect of these changes on readiness needs is still largely undetermined, mainly because the Coast Guard has not yet translated the security-related mission responsibilities into specific staffing standards and other requirements. Until it does so, the impact of increased responsibilities on readiness cannot be determined, nor can the Coast Guard or others measure progress made in meeting station needs. With the support of state and local entities, stations and other units appear to be meeting the majority of port security responsibilities identified in the Coast Guard's interim guidelines.

After September 11, the Coast Guard's multimission stations experienced a substantial rise in overall activity levels. Following the attacks, the Coast Guard elevated the priority of the homeland security program to a level commensurate with search and rescue, and according to field and headquarters officials, stations were assigned the brunt of the Coast Guard's port security responsibilities.
These responsibilities led to considerable increases in the stations' security workloads. One way to see this change is in the number of hours that station boats were operated before and after September 11. Station boat hours increased by 44 percent, from a level of about 217,000 hours prior to the terrorist attacks to more than 300,000 hours by the end of fiscal year 2004 (see fig. 1). (See the sketch at the end of this discussion for the simple arithmetic behind such percentage figures.) Coast Guard officials explained that increases in boat hours were due to increased homeland security responsibilities and to the 160 additional boats and additional personnel that stations received from fiscal years 2002 to 2004.

While total boat hours for stations increased following September 11, the trend among specific programs varied greatly, with some programs experiencing substantial increases and others experiencing declines (see fig. 2). Most notably, boat hours for the PWCS program increased by almost 1,900 percent between pre-September 11 levels and fiscal year 2004. Coast Guard officials attributed the increases in PWCS hours to (1) stations' expanded homeland security responsibilities, (2) several elevations in the Maritime Security Condition (MARSEC) after 2001, and (3) the acquisition of new boats and additional personnel that stations received in fiscal years 2002 through 2004. Conversely, during the same period, hours dedicated to nonhomeland security programs decreased. For example, boat hours expended for search and rescue decreased by 15 percent, while hours for living marine resources decreased by 61 percent.

Similar trends emerge in the limited data available about how Coast Guard personnel spend their time at stations. While the Coast Guard does not formally track the number of work hours station personnel spend on each program (either when operating a boat or while at the station), it does administer a survey each year to personnel at selected stations, asking them to estimate how they spent their time over an average 1-week or 2-week period in August. Survey results indicated that the number of hours spent on PWCS activities increased for those responding by about 29 percent between calendar years 2002 and 2003, while the number of hours spent on search and rescue activities decreased by about 12 percent.

Coast Guard officials told us that although the stations' workload has increased since September 11, mission performance has not suffered. This does not mean that they believe stations' readiness needs were not affected by the increase in operations, only that stations have been able to sustain expected performance despite increased workloads. Officials responsible for overseeing operations at the stations we contacted explained that stations have been able to sustain performance levels by achieving greater efficiencies in operations, specifically by (1) conducting multiple missions during port security operations and (2) coordinating their efforts with state and local organizations. They also noted that stations have received additional boats and personnel since September 11. It is likely that other factors also play a role in this issue. Our prior work on the Coast Guard's overall use of resources suggests an additional possible factor, such as decreases in search and rescue responsibilities over time. This work also showed that even in those program areas in which the number of boat hours declined following September 11, the Coast Guard was generally able to meet performance goals.
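The percentage changes cited above follow from simple before-and-after comparisons of hour totals. Here is a minimal check using the approximate boat-hour figures given in this section; the fiscal year 2004 value is an assumed figure chosen to be consistent with the reported "more than 300,000" hours and the 44 percent rise.

```python
# Percent change in station boat hours (approximate figures from this section).

def pct_change(before: float, after: float) -> float:
    """Percentage change from a before value to an after value."""
    return (after - before) / before * 100

pre_sept11_hours = 217_000  # approximate station boat hours before September 11
fy2004_hours = 312_000      # "more than 300,000" by end of FY2004 (assumed value)

print(round(pct_change(pre_sept11_hours, fy2004_hours)))  # 44
```

The same function reproduces the other program-level figures cited above when the corresponding before-and-after hour totals are supplied.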
The Coast Guard has not yet determined the extent to which changes in post-September 11 mission priorities—specifically, increases in homeland security responsibilities—have affected station readiness needs. Coast Guard officials told us there are two reasons for this. First, the Coast Guard's maritime homeland security requirements are being revised to better align with current resource levels. The Coast Guard is currently working under Operation Neptune Shield, an interim set of guidelines that establishes the Coast Guard's homeland security activity levels—taskings—under each MARSEC level. However, because the guidelines call for a level of operations that exceeds the Coast Guard's current resource levels, the Coast Guard is in the process of revising the guidelines. Officials told us they expect the new, long-term, risk-based requirements to establish more realistic activity levels that better align with existing resources and take into account support from state and local organizations at strategic ports. Officials told us they expect to have new activity level standards finalized by February 2005. Under these new standards—requirements—it is possible that station workload levels may change. Officials also told us that although the new requirements, known as the Strategic Deployment Plan, will better align security operations with existing resources, the Coast Guard will need to monitor this balance in the future given the dynamic nature of homeland security issues.

Second, because homeland security requirements have yet to be finalized, the Coast Guard has begun, but not yet completed, efforts to update station staffing standards and other requirements to reflect post-September 11 changes in mission priorities and station readiness needs. Officials told us that once homeland security requirements have been finalized under the Strategic Deployment Plan, they will revise station staffing standards and other requirements to better reflect readiness needs. Although station staffing levels have been increased in response to the new homeland security priorities and past reports of staffing readiness concerns, the staffing standards are still based upon pre-September 11 mission priorities (i.e., search and rescue operational levels). Until the Coast Guard can translate the impact of security-related activities into specific station requirements, the impact of the new homeland security responsibilities on station readiness needs cannot be determined. Furthermore, without specific requirements, neither the Coast Guard nor others can measure the progress made in meeting station readiness needs.

While the impact of new responsibilities on overall readiness needs remains unknown, there is evidence that most stations have been able to meet the port security responsibilities—i.e., activity levels—expected of them given their available resources and, in some cases, all security responsibilities with the help of other entities. Since the level of security activities established under the interim guidelines exceeds available resources, the Coast Guard has communicated to stations and other units that they are expected to carry out security operations within the constraints of existing resources. The Coast Guard does not track station-specific performance regarding port security responsibilities, but in 2003 it developed an unofficial evaluation system that indicates that current security responsibilities for major ports—for which stations bear significant responsibility—are largely being met.
This evaluation system—referred to as the Scorecard system—captures activity levels for selected PWCS standards at ports of high military and economic significance. However, it is important to note that the Scorecard results are not station-specific in that (1) they do not separate tasks handled by stations from those of other entities (either Coast Guard or other) that address port security needs and (2) stations that do not contribute to port security are not included. Nonetheless, the Scorecard results do provide some indication that at least some stations are for the most part able to meet their current port security responsibilities. Furthermore, of the 16 stations we reviewed, officials from all but 1 told us that PWCS responsibilities—as identified by the interim guidelines—were being met. In most instances, stations reported meeting PWCS responsibilities with the assistance of state and local entities, which were either directly performing PWCS tasks or performing other mission responsibilities—such as fisheries enforcement or search and rescue—that allowed station personnel to focus on PWCS responsibilities.

There are also clear signs that partnerships with other agencies and other Coast Guard units play an essential role in the stations' ability to meet assigned homeland security tasks. Most of the officials responsible for overseeing operations at the 16 stations we reviewed told us that their stations have been able to meet increased operational responsibilities only by sharing overall tasks—nonhomeland security-related as well as homeland security-related—with state and local partners as well as other Coast Guard entities. Officials explained that they have developed two main types of partnerships. First, they have established partnerships with local organizations such as police, fire, and marine patrol units to conduct port security operations as well as nonhomeland security activities. Officials at the majority of stations we contacted told us that they rely on assistance from marine patrol units to conduct patrols of key infrastructure, such as harbor docks; officials from several stations also indicated that these units assist with vessel escorts. Officials also told us that stations rely on partner organizations to conduct nonhomeland security activities, such as search and rescue, and that expanded partnership efforts have resulted in operational efficiencies. For example, officials representing one station located at a major port told us that a local partner increased its search and rescue operations following September 11, allowing the Coast Guard station to focus more of its efforts on homeland security operations.

Second, stations have relied on varying levels of support from other Coast Guard components, namely, Marine Safety Offices (MSO) and Maritime Safety and Security Teams (MSSTs), to conduct port security operations. Station officials we interviewed told us that the level of support provided by both components varied. For example, MSST support varied by geographic location. One official told us that certain MSSTs located on the Pacific Coast each set aside 5,000 hours a year to perform port security operations, while those on the East Coast do not. Headquarters officials told us that because the Coast Guard is still considering the role MSSTs will play in port security operations, the amount of support they provide to stations will vary.
One senior headquarters official told us that newly established MSSTs can generally provide only a limited amount of support because of initial training requirements. As the MSSTs mature, they are usually able to assume greater responsibilities for port security operations. While stations’ efforts to leverage external support are commendable, the extent to which they can continue to rely on that support is unclear. To better understand the potential for support levels to change, we contacted a number of state and local organizations that partner with the stations we interviewed. We asked them if they expected their level of support would change in the future. Of the 13 organizations we contacted, officials from 12 organizations told us that since the September 11 attacks, they have either directly increased port security operations or increased other operations—such as search and rescue—that enable Coast Guard stations to focus on port security and other missions. One of the organizations we contacted explained that they had plans to decrease resource levels allocated to port security in future years. In addition, another 6 organizations we contacted emphasized that while they did not have plans in place to reduce funding levels for port security, they were not confident that future funding would continue at current levels. The Coast Guard has made progress in addressing multimission station readiness concerns identified prior to September 11. The Coast Guard has increased station staffing levels by 25 percent, expanded formal training programs and increased training capacity, begun modernizing its small boat fleet, and as of fiscal year 2003 appeared to have provided station personnel with appropriate amounts of PPE. However, despite this progress, the Coast Guard has yet to meet existing readiness standards and goals in the areas of staffing and boats and does not have adequate processes in place to help ensure the future funding of station PPE, a shortcoming that could result in an insufficient supply of PPE at stations in future years. Since 2001, the Coast Guard has developed a variety of initiatives aimed at resolving long-standing staffing concerns at multimission stations. Table 4 presents a selection of the initiatives, either planned or under way, that we identified as noteworthy in addressing station staffing needs. (App. II contains a more detailed description of the initiatives.) In addition to increasing the number of personnel and positions allotted to stations by 25 and 12 percent, respectively, the Coast Guard has begun to reconfigure aspects of the station staffing program to provide more effective operations. For example, in an effort to provide a more appropriate mix of positions and skills at stations and to address concerns about insufficient numbers of senior personnel, the Coast Guard added 99 support officer positions in fiscal years 2002 through 2004, and 486 senior petty officer positions in fiscal years 2002 and 2003. According to one official, the additional support positions will allow station command cadre to spend less time on administrative work and more time on operations. Recognizing that in the past a significant number of positions had been initially filled with unqualified personnel, the Coast Guard plans to take steps to base assignments on position requirements and experience. 
Furthermore, once long-term homeland security responsibilities have been determined, the Coast Guard plans to complete steps it has already begun to reconfigure its station staffing standards, which define the number and type of positions at stations based on mission requirements. The reconfiguration is expected to better align staffing resources with mission activities.

Despite this progress, the Coast Guard has yet to meet five key standards and goals related to staffing at the stations. Each is discussed below. Most notably, despite increases in station staffing levels over the past 2 years and other actions, average station workweek hours continue to exceed, by significant levels, the 68-hour standard established by the Coast Guard in 1988 to limit fatigue and stress among station personnel. According to the Coast Guard's Boat Forces Strategic Plan, excessive workweek hours are symptomatic of "the adverse operational trends, identified lack of resources, and general reduction in …readiness" experienced by stations in recent years. Moreover, the plan also notes that the high number of stations working in excess of 68 hours shows that "staffing continues to be a significant problem at stations." According to estimates from Coast Guard surveys of station personnel, although the average workweek at stations decreased somewhat between 1998 and 2003, since 1994 it has not dropped below 81 hours per week. It should be noted that these survey data, although the best source of information available on station workweek hours, may have limitations. That is, the survey is administered every August—during both the peak search and rescue season and the Coast Guard's period for rotating personnel—and it may be that a year-round average, which would include off-peak, winter hours, would be lower. In addition, although response rates for every year were not readily available, the 2003 response rate was relatively low, with only a little over half of the personnel surveyed responding.

An explanation of how workweek hours are measured may be helpful in interpreting this workweek information. The way in which the workweek is measured at stations is similar to the way it is measured in professions such as firefighting, in that personnel are on duty for an extended amount of time—such as 24 hours—to respond to emergencies but may spend part of it in recreation, sleep, exercise, training, or other activities. Personnel can thus be on duty or off duty for consecutive periods of time during a week. Workweek hours are calculated by totaling the amount of hours spent on duty or at a station over a 1-week period, or by averaging the amount of time spent on duty over a 2-week period. The Coast Guard's 2003 survey of stations indicated that slightly less than half of all respondents reported working either an average 77- or 84-hour workweek. Approximately 6 percent of respondents reported working a 68-hour workweek. (See app. II for more information regarding these results.)

According to the Coast Guard, working excessively long hours leads to injury and illness. Officials told us that station personnel can exceed the 68-hour workweek standard in one of two ways. First, they can be assigned to a work schedule that averages to more than 68 hours a week, such as an 84-hour schedule.
The work schedule, which is determined by the officer-in-charge, defines the number of days personnel spend on duty and is therefore the primary driver of whether personnel will consistently work an average of 68 hours per week or some number above that amount. There are advantages and disadvantages associated with each of the many possible schedules stations can adopt—table 5 shows a comparison of the 68- and 84-hour work schedules, and a simple sketch of the underlying averaging arithmetic follows this discussion. For example, a potential disadvantage to having personnel work the 68-hour schedule is that it requires stations to retain more qualified personnel for duty work than the 84-hour schedule, which could be one reason why officers-in-charge who are short of qualified personnel would use the higher-hour schedule. The 84-hour schedule, in contrast, requires smaller numbers of qualified personnel, which could be of benefit to stations with high workloads and too few qualified personnel. It could also be preferred by some personnel because it provides for 3-day weekends. However, a significant disadvantage to the 84-hour schedule, as noted in the station operations and training manual, is that it puts personnel "at significant risk of exceeding fatigue standards," which is why it is normally restricted to stations with low numbers of response-driven cases. In other words, personnel at stations that have a greater number of response operations—such as rescue cases—are at a higher risk of exceeding fatigue standards because they are underway (that is, operating a boat) more often. The Coast Guard's 1991 staffing study found that long hours on duty resulted in lost time among personnel because of illness and injury, as well as increased attrition levels. According to officials, in the late 1990s the Coast Guard switched from an 84-hour standard—which it had adopted to better meet significant staffing shortages—to the present 68-hour standard because of concerns about crew fatigue and an increasing number of boat accidents.

A second way personnel can exceed the 68-hour schedule is by working overtime, which, if significant, can also lead to lost time due to illness and injury. Overtime generally occurs when required operations exceed the number of qualified, available personnel. As with the 84-hour work schedule, significant amounts of overtime can increase the likelihood that personnel will exceed Coast Guard fatigue standards and can lead to lower retention levels for trained personnel.

Field and headquarters officials told us that at most stations, the high number of hours worked is being driven by the following factors: an increase in homeland security responsibilities; an increase in the number of inexperienced personnel; the formation of MSSTs, which siphoned experienced and qualified crewmembers from stations; and a lack of sufficient support for training, building and equipment maintenance, and administrative duties. Of these factors, the primary one is the increased homeland security role. One senior official told us that although increased staffing levels may have been sufficient to meet pre-September 11 mission needs, the homeland security mission has greatly expanded the stations' workload, and it is unknown whether current staffing levels will be sufficient to meet operational requirements as well as the 68-hour workweek standard. According to senior Coast Guard officials, it may take 5 to 10 years before the 68-hour standard is attained at all stations because of the high levels of inexperienced personnel and other issues previously discussed.
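To make the averaging arithmetic concrete, here is a minimal sketch. The duty rotations below are hypothetical simplifications (actual Coast Guard schedules mix duty periods of different lengths and vary by station); only the 68- and 84-hour standards come from this report.

```python
# Average weekly duty hours over a 2-week rotation (hypothetical schedules).
# Because duty comes in multiday blocks, the workweek standard is evaluated
# as an average: total duty hours over 2 weeks, divided by 2.

def avg_weekly_hours(hours_per_duty_period: float, periods_per_2_weeks: int) -> float:
    """Average weekly hours implied by a 2-week duty rotation."""
    return hours_per_duty_period * periods_per_2_weeks / 2

# Seven 24-hour duty days spread over 2 weeks average 84 hours per week,
# well above the 68-hour fatigue standard; six such days average 72 hours.
print(avg_weekly_hours(24, 7))  # 84.0
print(avg_weekly_hours(24, 6))  # 72.0
```

The sketch shows why the schedule chosen by the officer-in-charge, rather than occasional overtime, is the primary driver of average workweek hours: each additional 24-hour duty period in the rotation adds 12 hours to the weekly average.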
In fact, one senior official questioned whether stations will ever reach the goal given competing Coast Guard priorities. Officials told us that the Coast Guard will not be able to determine optimum station staffing levels until (1) long-term homeland security requirements have been identified, (2) inexperienced staff have grown into senior positions, and (3) all new staffing initiatives (such as increased administrative support) have taken effect. Until these issues have been addressed, officials said it is likely that stations with high workloads and resource constraints will continue to work longer workweeks.

The Office of Boat Forces' targeted goal for senior personnel, as a percentage of total station personnel, is 50 percent; as of March 2004, senior staff comprised about 37 percent of total staff. Officials told us that in recent years stations have received a significant number of relatively inexperienced personnel, which has skewed staffing proportions, and that it will take at least 3 years to increase the number of senior personnel to desired levels.

The Coast Guard has yet to meet its goal of an average 48-month station assignment (tour of duty) for experienced personnel. According to Coast Guard data, the average tour of duty length for boatswain's mates increased from 33 months in 1999 to 35 months in 2000, but has remained fairly constant between 35 and 36 months through 2003. Although assignment practices have been modified for some personnel to allow for longer tour lengths at stations, officials told us that meeting the 48-month goal will be a challenge given that stations draw upon a pool of personnel they share with other units—such as ships—that generally require shorter tours of duty. Thus, significant changes to station assignment policies will affect these other units. One senior official told us that although the stations may not meet the 48-month goal in the foreseeable future, given competing personnel needs from other units, even extending the average tour length to 36 months would be a significant improvement. Increasing the tour length lessens the training burden on senior station personnel (i.e., they end up training fewer new personnel) and allows a station to reap the benefits of the training it has already invested in junior personnel.

Stations continue to experience a shortage of qualified surfmen. As of June 2004, approximately 42 percent of surfman positions were not filled with qualified personnel. According to training and program officials, in the past this shortage stemmed from difficulties in recruiting sufficient numbers of applicants because of several factors: (1) a long training process of 3 to 6 years, which deters potential applicants; (2) higher workloads, because of the shortage of qualified personnel; (3) remote assignment locations, as surfman stations are largely located in remote areas of the Northwest; (4) obstacles to promotion, in that promotion to senior enlisted ranks requires a year of sea duty, which surfmen, who are needed at stations, have difficulty obtaining; and (5) program challenges that make it difficult to retain bonus pay benefits. To address these challenges, the Coast Guard has revised relevant personnel policies to facilitate career advancement and to clarify the process for becoming a surfman. The Coast Guard has also revamped the surfman training program by developing courses that address training needs.
Regarding the former, the Coast Guard has (1) eased requirements for advancement (waived the requirement for 1 year of sea duty for promotion to senior positions) and (2) allowed surfmen to retain special pay status after they transfer to a new station and are in the process of certifying for their new area of operation. Beginning in May 2005, surfmen will also receive additional points on their service-wide exams, the scores of which are used to determine promotions. Regarding training, the Coast Guard has concentrated on a two-pronged approach. First, to increase the number of trainees who actually qualify as surfmen, in 2004 the Coast Guard implemented a new 2-year intensive training program that will require trainees to reside at the surfman training center; this will allow trainees to concentrate more fully on the training process. Officials told us they would not know the impact of this new training initiative until 2006, when the first class of resident trainees graduates. Second, to improve formal training for personnel who are qualifying through on-the-job training at their stations, in November 2004, the Coast Guard implemented a new 2-week surfman course. Because the course was designed in part to relieve the training burden on stations (which are short of qualified surfmen who can serve as instructors), trainees will complete most of their qualification requirements during training. Officials expected 18 students to take the course during the winter of 2004.

The Coast Guard has not met its goal of aligning the number of individuals assigned to stations with the number of designated positions, making it unclear whether station staffing levels (individuals assigned to stations), which are currently greater than designated positions, will remain at current levels or decrease, potentially affecting station workweek hours and other issues. At the end of fiscal year 2004, the estimated number of personnel assigned to stations exceeded the number of Coast Guard-designated positions by an estimated 1,019 (or about 17 percent of total estimated personnel assigned to stations). Because the 1,019 personnel are not assigned to permanent positions, and thus their assignment is potentially more temporary than that of other personnel, the Coast Guard could not assure us that the estimated fiscal year 2004 station staffing level of 5,925 personnel will be maintained in the future. In contrast, the number of designated positions—4,906 at the end of fiscal year 2004—is considered permanent. Officials told us that although the Coast Guard's goal is to align personnel and position levels, it has been necessary to assign a greater number of less experienced staff to the stations, above designated staffing levels, to develop required numbers of senior staff (officials estimate it takes three junior personnel to produce one senior crew member). Attrition patterns, limited space on ships, and the need to expose junior personnel to on-the-job training are driving factors in this decision. The Coast Guard does not plan to add either additional personnel or positions to stations in fiscal year 2005; rather, it will use this time to evaluate current station resource levels and give junior personnel time to gain experience and become fully trained. It is unclear how this decision may affect staffing levels—if staffing levels drop, the number of hours station personnel work might increase.
On the other hand, if lower staffing levels are accompanied by higher levels of experienced personnel, then workweek hours might be unaffected or even decline. The impact will also depend in part on other factors, such as the implementation of remaining staffing initiatives—such as the station staffing standards—and the nature of future homeland security responsibilities.

Since 2001, the Coast Guard has made progress in developing a more formalized training program and in expanding the number of training slots available for the majority of station occupations. As late as August 2003, when we began our work, the Coast Guard had yet to determine whether formal training delivered in a classroom environment was preferable to on-the-job training—administered by senior personnel at stations—or whether it should use a combination of both on a long-term basis. Subsequently, the Coast Guard identified formal training as its preferred training method because, according to officials, it provides greater accountability for consistent and uniform training across all occupations and stations. To date, the Coast Guard has taken steps to formalize or augment several aspects of station training, including boatswain's mate and boat driver training. With respect to boatswain's mate training, in fiscal year 2002 the Coast Guard instituted a formal training center with the capacity to train 120 trainees per year; during fiscal year 2003 it more than tripled the center's capacity to 450 training slots. Training officials told us they plan to further expand the capacity of the center each year through 2006, when its annual capacity is expected to reach 1,000 slots, the estimated number of new boatswain's mates needed each year. Furthermore, in conjunction with efforts to modernize its boat fleet, in fiscal year 2003 the Coast Guard increased the amount of formal training available to boat drivers who were learning to operate new response boats and nonstandard boats. Training officials told us that they employed a two-pronged approach to train operators on the new boats: on-site training teams, which conduct training at the stations, and a new 2-week national training center course. Table 6 provides additional information regarding the Coast Guard's initiatives to address training needs. (See app. II for a more detailed discussion of the Coast Guard's planned and ongoing initiatives regarding station training needs.)

According to officials, the Coast Guard's efforts to train boat drivers were designed to address the highest-priority missions as well as training needs for new boats going into service. For example, in 2003 and 2004, training teams were deployed to all strategic ports to provide training in tactical operations, an emerging requirement following the attacks of September 11. In addition, to address safety concerns associated with the operation of nonstandard boats, officials told us that training efforts were targeted at stations that would retain nonstandard boats while the Coast Guard completes its program to replace stations' nonstandard boats.

Since 2001, the Coast Guard has made progress in restructuring the stations' boat fleet to address safety and operational concerns resulting from aging and nonstandard boats. The Coast Guard has focused a major part of its efforts on replacing an assortment of nonstandard small boats with new, standardized boats.
Officials told us that although the Coast Guard had just started planning the acquisition of the new boats in 2001, following the attacks of September 11, it expedited the purchase of 100 of the new boats to meet stations' increased homeland security responsibilities. Approximately 50 of these boats were distributed to stations located at strategic ports, to provide quick response capabilities for port security operations, and to stations in critical need of new boat replacement. The remaining 50 boats were distributed to MSSTs. In 2003, the Coast Guard developed a multiyear contract to replace the remainder of stations' aging and nonstandard boats with an estimated 350 new boats. The Coast Guard has also initiated efforts to replace the aging 41-foot utility boat fleet, which will reach the end of its 25-year service life beginning in 2005, with a medium-size utility boat. Officials told us that as of August 2003 the Coast Guard was in the process of reviewing three medium boat prototypes and that they expect to select a manufacturer in 2005. In addition, in 2003, the Coast Guard completed the replacement of the 30-year-old 44-foot motor lifeboat with a fleet of new 47-foot motor lifeboats. Table 7 describes the Coast Guard's actions to address concerns regarding the stations' boat fleet. (See app. II for a more detailed discussion of ongoing and planned efforts for addressing station boat needs.)

Despite this progress, the Coast Guard has yet to meet mission readiness goals for medium-sized boats (utility and motor lifeboats), as indicated by internal inspection results. The Office of Boat Forces' goal is for 80 percent of boats inspected to meet readiness standards at the initial inspection. Results from the Coast Guard's fiscal year 2003 inspections indicate that only 16 percent of the motor lifeboats and utility boats inspected met mission readiness goals when initially inspected. After a 1-day opportunity to correct identified problems, approximately 81 percent had met readiness goals. It is important to note that the majority of the discrepancies cited were not of such severity that they would prevent the boats from being used in most mission operations. For example, the failure of navigation lights on a 41-foot utility boat could preclude the boat from being operated (i.e., disable it) until the lights were fixed or the operational commander issued a waiver outlining the conditions under which the boat could be operated. Boats that receive waivers are considered able to "perform some missions, but not all missions safely." In this scenario, the boat's failure to meet full readiness standards would not necessarily affect the station's operational readiness. Officials also told us that in order to ensure that operational readiness is maintained at stations when a boat is disabled, all multimission stations have more than one boat.

Officials attributed the inability of stations to meet full readiness standards for boats to the following:
- Junior engineers did not have the necessary experience to perform maintenance in compliance with operating manuals and configuration updates from the manufacturer.
- Engineers lacked sufficient time to perform maintenance because of increased operational hours (that is, the boats are being used more often, and the engineers assigned to perform maintenance are spending more time conducting boat operations).
- Station utility boats were reaching the end of their service life and had deteriorated to the point that they required more maintenance to meet mission readiness standards.
- New boats are more technologically advanced (e.g., satellite navigation systems), requiring specialized technical training in order to perform maintenance.

Officials told us that they are taking various steps to respond to these issues. First, to compensate for the lack of experience among junior engineers, the Coast Guard intends to intensify the training that is provided to engineers for the motor lifeboat. In November 2004, the Coast Guard implemented a 2-week course focused specifically on motor lifeboat operations and maintenance. The Coast Guard does not have plans, however, to provide specialized training for utility boat maintenance, given that these boats are at the end of their life span and will be replaced within a few years with new medium response boats. Likewise, the replacement of the utility boats will address low scores related to their deteriorating condition. Regarding the lack of time engineers have to perform maintenance, officials told us that this issue will be examined after the Coast Guard has reassessed changes in station workloads following September 11.

As of the end of fiscal year 2003, station personnel appeared to have sufficient PPE, but the Coast Guard does not have adequate processes and practices in place to help prevent funding shortfalls from recurring. The Coast Guard's continued use of these processes and practices in fiscal year 2004 resulted in a $1.9 million shortfall in estimated PPE funding needs for that year. As we discussed in our previous report, following the expenditure of an additional $5.6 million on PPE in fiscal year 2003 to address perceived shortfalls, active and reserve station personnel appeared to possess sufficient PPE. However, the Coast Guard's processes and practices for estimating station PPE needs and allocating funds have historically resulted in an underfunding of station PPE, despite congressional direction to provide adequate supplies of PPE. If these funding practices are not modified, funding shortfalls could continue to occur in the future. Moreover, such funding shortages would affect the Coast Guard's ability to meet one of its strategic objectives for stations—namely, to ensure that station personnel are properly outfitted with mission-specific equipment, such as PPE. The actual impact of the fiscal year 2004 shortfall—in terms of PPE that was needed but not purchased—is not known, since the purchase of PPE is not tracked at the headquarters level, and an additional shortfall of $1.9 million is projected for fiscal year 2005.

Following a mishap in 2001 in which the improper use of PPE was found to have contributed to the deaths of two station personnel, the adequacy of PPE took on added importance. The Coast Guard has emphasized the importance of PPE, both through a Commandant directive and in its policy manual, stating that the proper supply and use of PPE is one of the top priorities of Coast Guard management. Although these measures are important, several aspects of the Coast Guard's processes for estimating and allocating station PPE funds raise concerns.
Appendix II discusses these concerns in detail, but they are summarized as follows:
- The Coast Guard's forecasting models do not recognize PPE funding needs for personnel assigned to stations over and above the number of designated positions. This is because the forecasting models are predicated on the number of positions designated for stations, rather than the number of personnel assigned (in fiscal year 2004 the estimated number of personnel assigned to stations exceeded positions by an estimated 1,019). According to program officials, historically the amount of funds allotted by the Coast Guard each fiscal year for station PPE has not been sufficient to fund the estimated needs of all assigned station personnel. For example, in 2003, the OIG reported that the Coast Guard had not provided PPE funding for 541 (69 percent) of the 789 personnel it had added to stations during fiscal year 2002.
- Even when funding is narrowed to just designated positions, the Coast Guard's traditional practice has been to fund only about half of PPE station needs, according to program officials. For example, in fiscal year 2003, the Coast Guard initially allocated $1.8 million, or 56 percent, of the estimated $3.2 million needed to provide PPE for personnel in designated station positions. Stations also receive general operating funds that may be used to purchase PPE, although these funds are also used for other purposes, such as boat maintenance.
- Assumptions used in PPE forecasting models have not been validated, according to officials. Without validated assumptions, the Coast Guard could be either underestimating or overestimating the life span and replacement cycle of the PPE. According to one official, the assumptions were based on input from station personnel.
- The Coast Guard does not require that PPE funds allocated to stations and oversight units actually be spent on PPE, according to program officials. Officials told us that in the interests of command flexibility units are allowed to spend allocated PPE funds on various operational expenses. Although such flexibility may be needed, the former PPE program manager told us that it was possible that oversight units have not been passing PPE funds on to stations as intended (i.e., the amount expended for PPE may be less than the amount allocated). To help address this possibility, the official said that in recent years he has disclosed to stations the total amount of PPE funding available to them, including funds held by oversight units.

Officials told us the Coast Guard has no immediate plans to revise the traditional PPE funding allocation process because they believe it has been sufficiently reliable for the agency's purposes. However, the process may change once long-term homeland security requirements have been identified. Given the historic shortages in PPE funding that have resulted from the Coast Guard's allocation processes, as witnessed by the one-time increase in funding during fiscal year 2003, it seems likely that stations will experience shortfalls in the future if PPE allocation processes and practices are not adjusted.
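The positions-versus-personnel gap just described can be illustrated with a minimal budget sketch. The staffing counts come from this report; the annual per-person PPE cost is a purely hypothetical assumption standing in for the unit costs and replacement cycles the forecasting models would actually use.

```python
# Illustrative PPE budget forecast (staffing counts from this report;
# the per-person annual cost is a hypothetical placeholder).
# Forecasting on designated positions rather than assigned personnel
# understates needs whenever assignments exceed positions, as in FY2004.

positions = 4_906       # designated station positions at end of FY2004
personnel = 5_925       # estimated personnel actually assigned at end of FY2004
annual_ppe_cost = 650   # $/person/year, hypothetical (unit cost / replacement cycle)

need_by_positions = positions * annual_ppe_cost
need_by_personnel = personnel * annual_ppe_cost
print(need_by_personnel - need_by_positions)  # 662350 -> understated under this assumption
```

Under any positive per-person cost, a positions-based forecast leaves the roughly 1,019 personnel assigned above designated positions unfunded, which is the structural gap the report identifies.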
The plan also does not identify the specific actions, milestones, and funding amounts needed to assure Congress and others that the Coast Guard is committed to achieving identified readiness levels. Moreover, the Coast Guard has yet to develop measurable annual goals for stations that would (1) allow it to track its progress in achieving long-term goals and objectives, (2) allow others to effectively monitor and measure progress, and (3) provide accountability. Without these key planning elements, the strategic plan's effectiveness as a management tool, as well as the Coast Guard's ability to ensure desired progress in meeting station readiness needs, is limited. The Boat Forces Strategic Plan identifies strategic goals in four areas: (1) leadership and management; (2) personnel and staffing; (3) training and expertise; and (4) equipment, support, and technology. These goals are supported by objectives and initiatives, the latter of which are prioritized, by fiscal year, in a summary implementation plan. Table 8 presents examples of goals, objectives, and initiatives, as well as the targeted time frames, pertaining to each of the four readiness categories (staffing, training, boats, and PPE). The initiatives contained in the plan are designed to address station readiness concerns identified in 2001. For example, to address the concern that stations have not received the appropriate number of positions or qualified personnel, the plan contains an initiative to revise station staffing standards as well as an initiative regarding the need to staff stations according to these revised standards. Although the strategic plan provides an indication of what overall measures may be needed to restore station readiness, the Coast Guard has not developed key planning elements in four areas—either pertaining to the strategic plan or related to it—that are essential to setting clear expectations about what will be achieved and translating those expectations into specific funding needs. In these four areas, as discussed below, the Coast Guard does not follow practices that we and others have identified as necessary to effectively measure performance and hold agencies accountable for results. Plan not updated to reflect homeland security responsibilities: The plan has not been updated to reflect the impact of post-September 11 homeland security requirements on stations. Although it incorporates performance goals for other Coast Guard programs, the plan does not incorporate—because they have yet to be finalized—goals and requirements for the PWCS program, a major driver of station operations. Until those requirements have been developed, the capability and resources stations will need to address one of their most significant operational responsibilities, and hence overall readiness needs, cannot be fully determined. For example, the plan cites two important initiatives in the category of staffing—(1) revise station staffing standards (i.e., determine the optimal number and type of personnel needed at each station) and (2) staff stations according to those standards. According to the plan's implementation summary, these standards were to be revised in fiscal year 2003 and station staffing completed by fiscal year 2007. As of January 2005, officials had yet to revise the standards because long-term homeland security responsibilities had not been finalized. Until the staffing standards have been revised, the bigger picture—when station staffing needs will be met—cannot be determined.
Plan contains insufficient details on specific planned actions and milestones: The plan does not identify, in sufficient detail, planned actions and milestones. Effective strategic plans should show an obvious link between objectives and the specific actions that will be needed to meet those objectives. These actions, in turn, should be clearly linked to milestones. Although the plan includes a summary implementation schedule, it does not clearly identify what steps will be taken to implement the initiatives and when they will be completed. For example, the plan does not identify what actions will be needed to ensure that station personnel are placed in positions that are appropriate for their experience, or to increase the actual length of time personnel are assigned to stations. Plan's objectives not linked with budget: The plan lacks a clear link between objectives and required funding levels. Without a clear understanding of the funding needed each year, there is little assurance that initiatives will be implemented and long-term objectives realized. Clearly identifying funding needs would also help to ensure that projected goals and objectives are commensurate with available resource levels. Because the plan does not identify the funding needed to carry out key objectives and initiatives, even in the short term, it is unclear whether the Coast Guard will be able to fund initiatives according to proposed time frames. Lack of measurable annual goals: The Coast Guard has not established measurable annual goals linked to the long-term goals identified in the strategic plan. Without annual goals, Congress, the Coast Guard, and others cannot effectively and readily measure the agency's progress in meeting its long-term goals. The above planning elements would help the Coast Guard and Congress use the strategic plan as a more effective tool for monitoring station readiness needs and identifying areas of continuing concern. Coast Guard officials agree that, as with any strategic plan, it will need to be revisited and revised to keep pace with changing events; thus, the plan will be reviewed on an annual basis. The Coast Guard's plan may be particularly susceptible to changing circumstances, given that it must deal with so many unpredictable events, ranging from natural disasters and accidents to the uncertainties of terrorist threats. A senior headquarters official told us that while the Coast Guard should more clearly identify expectations regarding annual goals, continually changing priorities often make it difficult for the agency to adhere to—and fund—long-term strategic plans and, in some cases, even to maintain program consistency. We acknowledge that these difficulties exist and must be considered in developing and using the plan, but even with these difficulties, incorporating the key elements discussed above would improve the plan's effectiveness as a management tool. It has been 3 years since the September 11 terrorist attacks changed the mission priorities for multimission stations, and there is no doubt that station readiness requirements need to be updated to reflect this new reality. The Coast Guard's decision to hold off updating these requirements until they can be aligned with homeland security responsibilities Coast Guard-wide is sensible. The readiness of multimission stations is but one of the many competing demands the Coast Guard must balance as it attempts to meet increased homeland security responsibilities while continuing to support other missions.
Nonetheless, the Coast Guard still needs to have the necessary plans, processes, and safeguards in place to help ensure that it can continue the impressive progress made thus far in addressing 20 years of operational deterioration at stations. In particular, indications of high workweek hours for many personnel and inadequate processes and practices used to estimate and fund needs for personal protection equipment may limit the stations' readiness. Historic shortages in station PPE funding allocations, which continued in fiscal year 2004 and are projected for 2005, indicate that stations will continue to experience funding shortfalls in the future unless PPE allocation processes and practices are adjusted. Perhaps more significantly, it remains unclear where station readiness falls in the Coast Guard's list of priorities, as evidenced by a lack of measurable annual goals related to stations and the lack of detail—in terms of both specific actions and necessary funding—in the Boat Forces Strategic Plan, the Coast Guard's strategy for addressing station readiness issues. The lack of specificity on the Coast Guard's part thus far is perhaps understandable given the challenges it has faced in the wake of September 11. However, continuing in this way will make it difficult to know what the Coast Guard intends as a readiness baseline, how close or far away it is from achieving this level, and what it thinks will be needed to get there. To help ensure that the Coast Guard and Congress have the information necessary to effectively assess station readiness needs and track progress in meeting those needs, and that multimission station personnel receive sufficient personal protection equipment to perform essential and hazardous missions as specified by Congress, we recommend that the Secretary of Homeland Security, in consideration of any revised homeland security requirements, direct the Commandant of the Coast Guard to take the following three actions: Revise the Boat Forces Strategic Plan to (1) reflect the impact of homeland security requirements on station needs and (2) identify specific actions, milestones, and funding needs for meeting those needs. Develop measurable annual goals for stations. Revise the processes and practices for estimating and allocating station PPE funds to reliably identify annual funding needs and use this information in making future funding decisions. We provided a draft of this report to the Department of Homeland Security and the Coast Guard for their review and comment. The Department of Homeland Security and the Coast Guard generally concurred with our findings and recommendations and did not provide formal comments for inclusion in the final report. The Coast Guard, however, provided technical clarifications as well as suggested contextual adjustments, which we incorporated to ensure the accuracy of the report. We are sending copies of this report to interested congressional committees and subcommittees. We will also make copies available to others on request. If you or your staffs have any questions about this report, please contact me at (415) 904-2200 or Steven N. Calvo at (206) 287-4839. Key contributors to this report are listed in appendix III. This report will also be available at no charge on GAO's Web site at http://www.gao.gov.
To examine the extent to which multimission station readiness needs changed as a result of post-September 11 changes in mission priorities, we reviewed relevant Coast Guard documents, including Operation Neptune Shield, the agency's interim guidelines for implementing homeland security operations; the Maritime Strategy for Homeland Security; and mission planning guidance used to establish fiscal year 2004 mission priorities. We also reviewed our previous work on the Coast Guard's efforts to balance its homeland security and nonhomeland security missions. In addition, we interviewed headquarters officials regarding trends in station operations and the agency's plans for addressing the homeland security mission. To better understand how station performance was affected by changes in mission priorities, we reviewed data from the Coast Guard's Scorecard System, its unofficial process for monitoring security operations at strategic ports. We interviewed officials responsible for analyzing and compiling these data at the field and headquarters levels and determined that the data were sufficiently reliable for the purposes of this report, given the parameters of the system. We also reviewed boat hour data from the Coast Guard's Abstract of Operations database to determine trends in the number of hours station boats were operated, by program, both before and after September 11. Boat hour data, reported by station crews, represent the number of hours that boats were operated by station personnel. To develop a more representative estimate of pre-September 11 boat hours and to normalize for fluctuations in hours that might occur in a single year, we averaged the number of boat hours expended during fiscal years 1999 and 2000 to create a pre-September 11 baseline. The Coast Guard agreed with our use of this 2-year average as an appropriate baseline of pre-September 11 boat hours. To determine the reliability of the data, we used assessments from our previous report, which consisted of (1) a review of existing documentation regarding the data and the systems that produced them and (2) interviews with knowledgeable agency officials. On the basis of these assessments, we determined that the data were sufficiently reliable for the purposes of this report. We visited 8 multimission stations on the Pacific and Atlantic coasts, as well as the 4 groups and activities responsible for overseeing their operations, to better understand how stations' readiness needs changed following September 11. These stations were selected on the basis of geographic location, proximity to a strategic port, and the number of boat hours expended in fiscal year 2003 on homeland security, search and rescue, and law enforcement operations. To further explore how increased homeland security operations had affected stations, we conducted telephone interviews with field officials responsible for overseeing operations at 8 additional stations located at strategic ports. We selected these 8 additional stations based on the number of station resource hours expended on port security operations. To assess the levels of support provided to stations by state and local organizations, we contacted 13 organizations identified as key partners for 8 of the stations we reviewed.
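As a simple illustration of the baseline computation described above, the following sketch averages two pre-September 11 fiscal years of boat hours and compares later years against that baseline; the hour values are hypothetical and are used for illustration only.

```python
# Sketch of the pre-September 11 baseline: average fiscal years 1999 and
# 2000 boat hours, then compare later years to that baseline. All hour
# values below are hypothetical.

boat_hours = {1999: 41_200, 2000: 43_800, 2002: 55_100, 2003: 58_400}

# Averaging two years normalizes for fluctuations that might occur in a single year.
baseline = (boat_hours[1999] + boat_hours[2000]) / 2

for year in (2002, 2003):
    change = (boat_hours[year] - baseline) / baseline * 100
    print(f"FY{year}: {boat_hours[year]:,} hours ({change:+.1f}% vs. baseline)")
```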
To address multimission station readiness concerns identified prior to September 11, we reviewed the Department of Transportation's Office of Inspector General (OIG) reports on station readiness, the Coast Guard's internal review of station operations, and various congressional reports. We also spoke with Coast Guard headquarters officials from the Offices of Boat Forces; Budget and Programs; and Workforce Performance, Training, and Development. To identify actions the Coast Guard has taken or is planning to take regarding station readiness concerns, we reviewed available Coast Guard data regarding station staffing, training, boats, and personal protection equipment (PPE). In the areas of staffing and PPE, we used data from our May 2004 report—such as the number of staff and positions added to stations in fiscal year 2003 and the estimated amount of funds expended on station PPE in fiscal year 2003—which we had determined were sufficiently reliable for reporting purposes. To assess the reliability of training and boat data, we (1) reviewed existing documentation regarding the data and how they were developed and (2) interviewed knowledgeable agency officials. We determined that the data were sufficiently reliable for the purposes of this report. Furthermore, we interviewed officials responsible for overseeing operations at the 16 stations we reviewed, as well as at relevant oversight units. To review the Coast Guard's training programs and identify progress made in expanding formal training opportunities for station personnel, we visited the following Coast Guard training centers: the Motor Life Boat School, the Boatswain's Mate School, the Boat Forces Center, the Boat Engineering School, and the Maritime Law Enforcement Center. In addition, we interviewed officials from the Coast Guard's internal inspection teams to discuss biennial inspections of station operations and reviewed station inspection results for fiscal years 2002 and 2003. To obtain a better understanding of personnel workloads and how those workloads may be changing, we also reviewed fiscal years 2002 and 2003 survey results from the Coast Guard's annual survey of station personnel activities. With some exceptions, the Coast Guard has conducted an annual survey of station workweek hours since 1991. The Coast Guard uses the survey results to gauge changes in the average number of hours personnel work each week and thus has not validated the survey. Personnel are asked to report the number of hours spent on each of 49 predefined activities during an average workweek in August. In 2003, personnel at 77 of the 188 stations were surveyed, with personnel from 64 stations responding, for a total response rate of 54 percent of all personnel surveyed. (Response rate data were not readily available for other survey years.) One possible reason for this low response rate may be that nonrespondents did not have time to complete the survey, which could mean that workweek hours were under-reported. However, it is also possible that personnel who were working longer hours per week were more inclined to report that condition, leading to an over-reporting of workweek hours. To assess the reliability of the data, we interviewed the headquarters officials who oversee the survey and reviewed available documentation. Recognizing the limitations of the data—such as the low response rate—we determined that the data were sufficiently reliable for the purposes of this report.
That is, as an indicator of average station workweek hours, and general trends in those hours over time, the survey results are sufficiently reliable. To assess the extent to which the Coast Guard's plans address station readiness needs, we reviewed the Coast Guard's Boat Forces Strategic Plan, the agency's strategy for maintaining and improving essential operations capabilities for all boat units, including multimission stations. We also reviewed the Department of Transportation OIG's assessment of the draft plan. To identify practices for effectively measuring program performance, we reviewed the Government Performance and Results Act (GPRA) of 1993, our prior work on results-oriented management, and Office of Management and Budget circulars. We conducted our work between September 2003 and December 2004 in accordance with generally accepted government auditing standards. This appendix presents additional information regarding the four categories of multimission station operations we reviewed—staffing, training, boats, and PPE. The appendix also contains additional information regarding the initiatives the Coast Guard has either started or plans to develop to address concerns in each of these categories. The Coast Guard has initiated multiple efforts to address staffing concerns at multimission stations. This section provides additional information regarding Coast Guard (1) station survey results regarding workweek hours and (2) initiatives to address staffing concerns. Approximately 44 percent of the individuals who responded to the Coast Guard's 2003 station staffing survey indicated that they worked an average workweek that was in excess of the 68-hour standard (see table 9). Approximately 6 percent of those questioned indicated they worked a standard 68-hour workweek. Table 10 presents additional information on initiatives the Coast Guard has under way or plans to develop with regard to station staffing needs. Since 2001, the Coast Guard has taken steps to increase training capacity at national training centers. Two efforts to expand formal training slots have been the reinstituting of the boatswain's mate training center and the implementation of response boat training courses. Additional information regarding training efforts is summarized in table 11. The Coast Guard has also taken steps to improve the way it evaluates job performance for station personnel. To monitor knowledge levels and provide insight on areas of training that may need improvement, station personnel are tested every 2 years on requisite areas of job performance. In an effort to improve how the examinations measure station personnel knowledge levels, the Coast Guard employed professional test writers in 2004 to revise the examinations. Over the past few years, test scores for station personnel have shown mixed trends. For example, between fiscal years 2002 and 2003, assessment results for boat drivers and motor lifeboat engineers improved somewhat, while results for crew members (boat personnel other than the boat driver or engineer) and for utility boat engineers decreased slightly. Coast Guard officials told us that they are exploring the reasons for these results, but in general, increases in test results can be attributed to improvements made in on-the-job training. Conversely, decreases can be attributed to a continued lack of experience on the part of junior personnel, whose numbers have increased in the past few years, and to high levels of personnel turnover at stations.
Multimission station personnel use a variety of boats to support operations. Figures 3 and 4 illustrate three of the primary boats used by station personnel. The Coast Guard has made progress in replacing nonstandard and aging boats to support station operations. Table 12 presents additional information regarding the Coast Guard's initiatives to modernize stations' boat fleet. As we previously reported, anecdotal and quantitative data indicate that as of fiscal year 2003 active and reserve station personnel possessed sufficient levels of PPE. During fiscal year 2003, the Coast Guard spent $7.5 million to address PPE shortfalls for station personnel, of which $5.6 million came from specially designated funds. In fiscal year 2003, the cost of a total basic PPE outfit was $1,296. The cost of a cold weather PPE outfit, which is used by personnel working at stations where the outdoor temperature falls below 50 degrees Fahrenheit, was $1,431. (Figure 5 shows a station crew member in cold weather PPE.) According to the Coast Guard, personnel at 135 (72 percent) of the 188 multimission stations require cold weather PPE in addition to basic PPE. Despite indications that the Coast Guard had met its goal in fiscal year 2003 of providing sufficient amounts of PPE to all active duty and reserve station personnel, concerns remain as to whether the Coast Guard will provide sufficient funding for station PPE in the future. The Coast Guard's processes for estimating station PPE needs and allocating funds have historically resulted in an underfunding of station PPE. See table 13 for concerns regarding the Coast Guard's processes and practices for allocating funds for station PPE. In addition to those named above, Randy B. Williamson, Barbara A. Guffy, Joel Aldape, Marisela Perez, Stan G. Stenersen, Dorian R. Dunbar, Ben Atwater, Michele C. Fejfar, Elizabeth H. Curda, and Ann H. Finley made key contributions to this report. | For years, the Coast Guard has conducted search and rescue operations from its network of stations along the nation's coasts and waterways. In 2001, reviews of station operations found that station readiness--the ability to execute mission requirements in keeping with standards--was in decline. The Coast Guard began addressing these issues, only to see its efforts complicated by expanded post-September 11, 2001, homeland security responsibilities at many stations. GAO reviewed the impact of changing missions on station needs, the progress made in addressing station readiness needs, and the extent to which plans are in place for addressing any remaining needs. The Coast Guard does not yet know the extent to which station readiness needs have been affected by post-September 11 changes in mission priorities, although increases in homeland security operations have clearly affected activities and presumably affected readiness needs as well. Following the attacks, stations in and near ports received the bulk of port security duties, creating substantial increases in workloads. The Coast Guard is still in the process of defining long-term activity levels for homeland security and has yet to convert the homeland security mission into specific station readiness requirements. Until it does so, the impact of these new duties on readiness needs cannot be determined. The Coast Guard says it will revise readiness requirements after security activity levels have been finalized.
Increased staffing, more training, new boats, more personal protection equipment (such as life vests), and other changes have helped mitigate many long-standing station readiness concerns. However, stations have been unable to meet current Coast Guard standards and goals in the areas of staffing and boats, an indication that stations are still significantly short of desired readiness levels in these areas. Also, because Coast Guard funding practices for personal protection equipment have not changed, stations may have insufficient funding for such equipment in the future. The Coast Guard does not have an adequate plan in place for addressing remaining readiness needs. The Coast Guard's strategic plan for these stations has not been updated to reflect increased security responsibilities, and the agency lacks specific planned actions and milestones. Moreover, the Coast Guard has yet to develop measurable annual goals that would allow the agency and others to track stations' progress. |
Today we are at a key crossroads. In the next few decades, the nation will be struggling with a large and growing structural deficit. At the same time, however, weapons programs are commanding larger budgets as DOD undertakes increasingly ambitious efforts to transform its ability to address current and potential future conflicts. These costly current and planned acquisitions are running head-on into the nation's unsustainable fiscal path. In the past 5 years, DOD has doubled its planned investments in weapons systems, but this huge increase has not been accompanied by more stability, better outcomes, or more buying power for the acquisition dollar. Rather than showing appreciable improvement, programs are experiencing recurring problems with cost overruns, missed deadlines, and performance shortfalls. As I have testified previously, our nation is on an imprudent and unsustainable fiscal path. Budget simulations by GAO, the Congressional Budget Office, and others show that, over the long term, we face a large and growing structural deficit due primarily to known demographic trends, rising health care costs, and lower federal revenues as a percentage of the economy. Continuing on this path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Federal discretionary spending, along with other federal policies and programs, will face serious budget pressures in the coming years stemming from new budgetary demands and demographic trends. Defense spending falls within the discretionary spending accounts. Further, current military operations, such as those in Afghanistan and Iraq, consume a large share of DOD budgets and are causing faster wear on existing weapons. Refurbishment or replacement sooner than planned is putting further pressure on DOD's investment accounts. It is within this context that we must engage in a comprehensive and fundamental reexamination of new and ongoing investments in our nation's weapons systems. Weapons systems represent one of the largest investments the federal government makes. In the last 5 years, DOD has doubled its planned investments in new systems from about $700 billion in 2001 to nearly $1.4 trillion in 2006. DOD expects annual procurement totals to increase from about $75 billion to about $100 billion between 2006 and 2011. At the same time DOD is facing future budget constraints, programs are seeking larger budgets. To illustrate, the projected cost of DOD's top five programs in fiscal year 2001 was about $291 billion. In 2006, it was $550 billion. A primary reason why budgets are growing is that DOD is undertaking new efforts that are expected to be the most expensive and complex ever. Moreover, it is counting on these efforts to enable transformation of military operations. The Army, for example, is undertaking the Future Combat Systems (FCS) program in order to enable its combat force to become lighter, more agile, and more capable. FCS comprises a family of weapons, including 18 manned and unmanned ground vehicles, air vehicles, sensors, and munitions, which will be linked by an information network. These vehicles, weapons, and equipment will comprise the majority of the equipment needed for a brigade combat team in the future. When considering complementary programs, projected investment costs for FCS are estimated to be on the order of $200 billion. Affordability of the FCS program depends on two key assumptions.
First, the program must proceed without exceeding its currently projected costs. Second, FCS procurement costs are expected to be large annually beginning in 2012. FCS procurement will represent 60 to 70 percent of Army procurement from fiscal years 2014 to 2022. As the Army prepares the next Defense Plan, it will face the challenge of allocating sufficient funding to meet increasing needs for FCS procurement in fiscal years 2012 and 2013. If all the needed funding cannot be identified, the Army will have to consider reducing the FCS procurement rate or delaying or reducing items to be spun out to current Army forces. At the same time, the Air Force is undertaking two new satellite programs that are expected to play a major role in enabling FCS and other future systems: the Transformational Satellite Communications System, which is to serve as a linchpin in DOD's future communications network, and Space Radar, which is focused on generating volumes of radar imagery data for transmission to ground-, air-, ship-, and space-based systems. Together, these systems are expected to cost more than $40 billion. The Department has also been focused on modernizing its tactical aircraft fleet. These efforts include the Joint Strike Fighter (JSF) aircraft program, currently expected to cost more than $200 billion, and the Air Force's F-22A Raptor aircraft, expected to cost more than $65 billion. Concurrently, the Navy is focused on acquiring new ships and submarines with significantly advanced designs and technologies. These include the Virginia Class Submarine, expected to cost about $80 billion; the DDG-51 class destroyer, expected to cost some $70 billion; and the newer DD(X) destroyer program, which is focused on providing advanced land attack capability in support of forces ashore and on contributing to U.S. military dominance in the shallow coastal water environment. The Navy shipbuilding plan requires more funds than may reasonably be expected. Specifically, the plan projects a supply of shipbuilding funds that will double by 2011 and will stay at high levels for years to follow. Although DOD has doubled its investment over the past 5 years, our assessments do not show appreciable improvement in its management of the acquisition of major weapons systems. A large number of the programs included in our annual assessment of weapons systems are costing more and taking longer to develop than estimated. It is not unusual to see development cost increases of between 30 and 40 percent and schedule delays of 1 to 2 years or more. The consequence of cost and cycle-time growth is a reduction in the buying power of the defense dollar—causing programs to cut back on planned quantities or capabilities, or even to scrap multibillion-dollar programs, after years of effort, in favor of pursuing more promising alternatives. Figure 1 illustrates seven programs with a significant reduction in buying power; we have reported similar outcomes in many more programs. This is not to say that the nation does not get superior weapons in the end, but that, with investment now projected at twice previous levels, DOD has an obligation to get better results. Furthermore, the conventional acquisition process is not agile enough to meet today's demands. Congress has expressed concern that urgent warfighting requirements are not being met in the most expeditious manner and has put in place several authorities for rapid acquisition to work around the process. The U.S.
Joint Forces Command's Limited Acquisition Authority and the Secretary of Defense's Rapid Acquisition Authority seek to get warfighting capability to the field quicker. According to U.S. Joint Forces Command officials, it is only through Limited Acquisition Authority that the command has had the authority to satisfy the unanticipated, unbudgeted, urgent mission needs of other combatant commands. With a formal process that requires as many as 5, 10, or 15 years to get from program start to production, such workarounds are needed to meet the warfighters' needs. Our reviews have identified a number of causes behind the problems just described, but several stand out. First, DOD starts more weapons programs than it can afford and sustain, creating a competition for funding that encourages low cost estimates, optimistic scheduling, overpromising, and the suppression of bad news. Programs focus on advocacy at the expense of realism and sound management. Invariably, with too many programs in its portfolio, DOD and the Congress are forced to continually shift funds to and from programs—undermining well-performing programs to pay for poorly performing ones. Adding pressure to this environment are changes that have occurred within the defense supplier base. Twenty years ago, there were more than 20 fully competent prime contractors competing for multiple new programs annually; today, there are only 6 that compete for considerably fewer programs, according to a recent DOD-commissioned study. This adds pressure on DOD to keep current suppliers in business and limits DOD's ability to maximize competition. Second, DOD has exacerbated this problem by not clearly defining and stabilizing requirements before programs are started. At times, in fact, it has allowed new requirements to be added well into the acquisition cycle—significantly stretching technology, creating design challenges, and exacerbating budget overruns. For example, in the F-22A program, the Air Force added a requirement for air-to-ground attack capability. In its Global Hawk program, the Air Force added both signals intelligence and imagery intelligence requirements. While experience would caution DOD not to pile on new requirements, customers often demand them, fearing there may not be another chance to get new capabilities since programs can take a decade or longer to complete. Yet, perversely, such strategies delay delivery to the warfighter, oftentimes by years. Third, DOD commits to its programs before it obtains assurance that the capabilities it is pursuing can be achieved within available resources and time constraints. Funding processes encourage this approach, since acquisition programs attract more dollars than efforts concentrating solely on proving out technologies. Nevertheless, when DOD chooses to extend technology invention into acquisition, programs experience technical problems that have reverberating effects and require large amounts of time and money to fix. When programs have a large number of interdependencies, even minor technical "glitches" can cause disruptions. Only 10 percent of the programs in our latest annual assessment of weapons systems had demonstrated critical technologies to best practice standards at the start of development, and only 23 percent demonstrated them to DOD's standards. The cost effect of proceeding without completing technology development before starting an acquisition can be dramatic.
For example, research, development, test and evaluation costs for the programs included in our review that met best practice standards at program start increased by a modest average of 4.8 percent over the first full estimate, whereas the costs for the programs that did not meet these standards increased by a much higher average of 34.9 percent over the first full estimate. Fourth, officials are rarely held accountable when programs go astray. There are several reasons for this, but the primary ones include the fact that DOD has never clearly specified who is accountable for what, invested responsibility for execution in any single individual, or even required program leaders to stay until the job is done. Moreover, program managers are not empowered to make go or no-go decisions; they have little control over funding; they cannot veto new requirements; and they have little authority over staffing. Because there is frequent turnover in their positions, program managers also sometimes find themselves in the position of having to take on efforts that are already significantly flawed. Likewise, contractors are not always held accountable when they fail to achieve desired acquisition outcomes. In a recent study, for example, we found that DOD had paid out an estimated $8 billion in award fees on contracts in our study population regardless of outcomes. In one instance, we found that DOD paid its contractor for a satellite program—the Space-Based Infrared System High—74 percent of the award fee available, or $160 million, even though research and development costs increased by more than 99 percent and the program was delayed for many years and rebaselined three times. In another instance, DOD paid its contractor for the F-22A aircraft more than $848 million, 91 percent of the available award fee, even though research and development costs increased by more than 47 percent and the program has been rebaselined 14 times and delayed by more than 2 years. Fifth, these strategies work because they win dollars. DOD and congressional funding approval reinforces these practices and serves to undercut reform efforts. Stated differently, typically no one is held accountable for unacceptable outcomes, and there are few or no adverse consequences for the responsible parties. This is a shared responsibility of both the executive and legislative branches of government. Of course, there are many other factors that play a role in causing weapons programs to go astray. They include workforce challenges, poor contractor oversight, frequent turnover in key leadership, and a lack of systems engineering, among others. Moreover, many of the business processes that support weapons development—strategic planning and budgeting, human capital management, infrastructure, financial management, information technology, and contracting—are beset with pervasive, decades-old management problems, including outdated organizational structures, systems, and processes. In fact, these areas—along with weapons system acquisitions—are on GAO's high-risk list of major government programs and operations. DOD has long recognized such problems and initiated numerous improvement efforts. In fact, since 1949, more than 10 commissions have studied issues such as long cycle time and cost increases as well as deficiencies in the acquisition workforce. This committee just last week heard testimony regarding several of them.
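A minimal sketch of the award-fee arithmetic cited above follows. The percentages mirror the Space-Based Infrared System High example; the dollar figures for the fee pool and cost estimates are hypothetical values chosen to reproduce those percentages, not actual program data.

```python
# Sketch of the relationship between award fees paid and cost growth.
# Percentages mirror the SBIRS High example above; the dollar figures for
# the fee pool and cost estimates are hypothetical.

fee_pool = 216_000_000          # hypothetical total award fee available ($)
fee_paid = 160_000_000          # award fee actually paid ($)

first_estimate = 2_000_000_000  # hypothetical first full R&D cost estimate ($)
current_cost = 3_980_000_000    # hypothetical current R&D cost ($)

fee_share = fee_paid / fee_pool
cost_growth = (current_cost - first_estimate) / first_estimate

print(f"Award fee paid: {fee_share:.0%} of the available pool")
print(f"R&D cost growth over first full estimate: {cost_growth:.0%}")
# An outcome-based fee structure would withhold most or all of the pool
# when cost growth of this magnitude occurs.
```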
Among the recent studies and commissions noted above, there is a consensus that DOD needs to instill much stronger discipline into the requirements-setting process, prioritize its investments, seek additional experienced and capable managers, control costs, strengthen accountability, and enhance the basis for enterprise-wide decision making. In response to past studies and recommendations, including our own, DOD has undertaken a number of acquisition reforms. Specifically, DOD has restructured its acquisition policy to incorporate best practices as the suggested way of doing business. For example, policies embrace the concept of closing gaps between requirements and resources before launching new programs. DOD is also reviewing changes to requirements setting. DOD has also strengthened training for program managers, required the use of independent cost estimating, reemphasized the discipline of systems engineering, and tried extracting better performance from contractors—by alternately increasing and relaxing oversight. While all of these steps are well-intentioned, recent policy statements, such as the Quadrennial Defense Review (QDR), and decisions on individual programs have fallen far short of the needed fundamental reassessment, reprioritization, and reengineering efforts. For example, the Office of the Secretary of Defense (OSD) does not seem to be pushing for dramatic and fundamental reforms in its acquisition process. In fact, it has either disagreed with recommendations we have made over the past year or claimed that it was already addressing them. These include reports on specific systems, such as JSF, the Missile Defense program, FCS, and Global Hawk, as well as reports on cross-cutting issues, such as DOD's rebaselining practices, acquisition policy, and support for program managers. We believe DOD's recently issued QDR did not lay out a long-term, resource-constrained investment strategy. In fact, the gap between wants, needs, affordability, and sustainability seems to be greater than ever. Our work shows that acquisition problems will likely persist until DOD provides a better foundation for buying the right things, the right way. This involves making tough trade-off decisions as to which programs should be pursued and, more important, which should not; making sure programs are executable; locking in requirements before programs are ever started; and making it clear who is responsible for what and holding people accountable when these responsibilities are not fulfilled. These changes will not be easy to make. They require DOD to reexamine the entirety of its acquisition process—what we think of as the "Big A." This includes making deep-seated changes to program requirements setting, funding, and execution. It also involves changing how DOD views success and what is necessary to achieve it. The first, and most important, step is implementing a revised DOD-wide investment strategy for weapons systems. In a recent study on program management best practices, we recommended that DOD determine the priority order of needed capabilities based on assessments of the resources—that is, the dollars, technologies, time, and people—needed to achieve these capabilities. We also recommended that capabilities not designated as a priority be set out separately as desirable but not funded unless resources were both available and sustainable.
DOD's Under Secretary of Defense for Acquisition, Technology, and Logistics—DOD's corporate leader for acquisition—should develop this strategy in concert with other senior leaders, for example, combatant commanders, who would provide input on user needs; DOD's comptroller; science and technology leaders, who would provide input on available resources; and acquisition executives from the military services, who could propose solutions. Finally, once priority decisions are made, Congress will need to enforce discipline through various authorization and appropriation decisions. Once DOD has prioritized capabilities, it should work vigorously to make sure each new program is executable before the acquisition begins. This is the "little a." More specifically, this means assuring that requirements are clearly defined and achievable given available resources and that all alternatives have been considered. System requirements should be agreed to by Service Acquisition Executives as well as Combatant Commanders. Once programs begin, requirements should not change without assessing their potential disruption to the program and assuring that they can be accommodated within time and funding constraints. In addition, DOD should prove that technologies can work as intended before including them in acquisition programs. This generally requires a prototype to be tested in an operational environment. More ambitious technology development efforts should be assigned to the science and technology community until they are ready to be added to future generations of the product. DOD should also require the use of independent cost estimates as a basis for budgeting funds. Our work over the past 10 years has consistently shown that when these basic steps are taken, programs are better positioned to be executed within cost and schedule. To further ensure that programs are executable, DOD should pursue an evolutionary path toward meeting user needs rather than attempting to satisfy all needs in a single step. This approach has been consistently used by successful commercial companies we have visited over the past decade because it provides program managers with more achievable requirements, which, in turn, facilitates shorter cycle times. With shorter cycle times, the companies we have studied have also been able to ensure that program managers and senior leaders stay with a program throughout its duration. DOD has policies that encourage evolutionary development, but programs often favor pursuing more exotic solutions that will attract funds and support. Lastly, to keep programs executable, DOD should demand that all go/no-go decisions be based on quantifiable data and demonstrable knowledge. These data should cover critical program facets such as cost, schedule, technology readiness, design readiness, production readiness, and relationships with suppliers. Development should not be allowed to proceed until certain thresholds are met, for example, a high percentage of engineering drawings completed at critical design review. DOD's current policies encourage these sorts of metrics to be used as a basis for decision making, but they do not demand it. DOD should also place boundaries on the time allowed for specific phases of development and production. To strengthen accountability, DOD will need to clearly delineate responsibilities among those who have a role in deciding what to buy as well as those who have a role in executing, revising, and terminating programs.
Within this context, rewards and incentives will need to be altered so that success can be viewed as delivering needed capability at the right price and the right time, rather than attracting and retaining support for numerous new and ongoing programs. After all, given our current and projected fiscal imbalances, every dollar spent on a want today may not be available for an important need tomorrow. To enable accountability to be exercised at the program level, DOD will also need to (1) match program manager tenure with development or the delivery of a product; (2) tailor career paths and performance management systems to incentivize longer tenures; (3) strengthen training and career paths as needed to ensure program managers have the right qualifications to run the programs they are assigned to; (4) empower program managers to execute their programs, including an examination of whether and how much additional authority can be provided over funding, staffing, and approving requirements proposed after the start of a program; and (5) develop and provide automated tools to enhance management and oversight as well as to reduce the time required to prepare status information. DOD also should hold contractors accountable for results. As we have recently recommended, this means structuring contracts so that incentives actually motivate contractors to achieve desired acquisition outcomes and withholding award fees when those goals are not met. In addition, DOD should collect data that will enable it to continually assess its progress in this regard. In closing, the past year has seen several defense reviews that include new proposed approaches to improve the way DOD buys weapons. These reviews contain many constructive ideas. If they are to produce better results, however, they must heed the lessons taught—but perhaps not learned—by acquisition history. Specifically, DOD must separate needs from wants in the context of the nation's greater fiscal challenges. Policy must also be manifested in decisions on individual programs, or reform will be blunted. DOD's current acquisition policy is a case in point. The policy supports a knowledge-based, evolutionary approach to acquiring new weapons. The practice—decisions made on individual programs—sacrifices knowledge and executability in favor of revolutionary solutions. It's time to challenge such solutions. Reform will not be real unless each weapons system is shown to be both a worthwhile investment and an executable program. Otherwise, we will continue to start more programs than we can finish, produce less capability for more money, and create the next set of case studies for future defense reform reviews. Mr. Chairman and Members of the Committee, this concludes my statement. I will be happy to take any questions. In preparing for this testimony, we relied on previously issued GAO reports and analyzed recent acquisition reform studies from various organizations. We conducted our review between March 20 and April 5, 2006, in accordance with generally accepted government auditing standards. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
| In the past 5 years, DOD has doubled its planned investments in weapons systems, but this huge increase has not been accompanied by more stability, better outcomes, or more buying power for the acquisition dollar. Rather than showing appreciable improvement, programs are experiencing recurring problems with cost overruns, missed deadlines, and performance shortfalls. GAO was asked to testify on ways to obtain a better return on DOD's weapons systems investments. This testimony identifies the following steps as needed to provide a better foundation for executing weapon programs: (1) developing a DOD-wide investment strategy that prioritizes programs based on realistic and credible threat-based customer needs for today and tomorrow, (2) enforcing existing policies on individual acquisitions and adhering to practices that assure new programs are executable, and (3) making it clear who is responsible for what and holding people accountable when these responsibilities are not fulfilled. Past GAO reports have made similar recommendations. DOD has a mandate to deliver high-quality products to warfighters, when they need them and at a price the country can afford. Quality and timeliness are especially critical to maintain DOD's superiority over others, to counter quickly changing threats, and to better protect and enable the warfighter. Cost is critical given DOD's stewardship responsibility for taxpayer money, combined with long-term budget forecasts which indicate that the nation will not be able to sustain its currently planned level of investment in weapons systems, and DOD's plans to increase investments in weapons systems that enable transformation of various military operations. At this time, however, DOD is simply not positioned to deliver high quality products in a timely and cost-efficient fashion. It is not unusual to see cost increases that add up to tens or hundreds of millions of dollars, schedule delays that add up to years, and large and expensive programs frequently rebaselined or even scrapped after years of failing to achieve promised capability. Recognizing this dilemma, DOD has tried to embrace best practices in its policies, and instill more discipline in requirements setting, among numerous other actions. Yet it still has trouble distinguishing wants from needs, and many programs are still running over cost and behind schedule. Our work shows that acquisition problems will likely persist until DOD provides a better foundation for buying the right things, the right way. This involves making tough tradeoff decisions as to which programs should be pursued, and more importantly, not pursued, making sure programs are executable, locking in requirements before programs are ever started, and making it clear who is responsible for what and holding people accountable when these responsibilities are not fulfilled. These changes will not be easy to make. They require DOD to re-examine the entirety of its acquisition process--what we think of as the "Big A"--including requirements setting, funding, and execution. Moreover, DOD will need to alter perceptions of what success means, and what is necessary to achieve success. |
The IMF, an organization of 186 countries, provides surveillance, lending, and technical assistance to its member countries. IMF surveillance involves the monitoring of economic and financial developments and the provision of policy advice, with key aims including financial crisis prevention. The IMF also lends to countries with balance-of-payments difficulties, including medium- to high-income countries, to provide temporary financing and support policies to achieve macroeconomic stability in the medium term. Its loans to low-income countries are intended to support policies designed to foster economic growth and promote poverty reduction. In addition, the IMF provides countries with technical assistance and training in its areas of expertise. Upon request by a member country, an IMF loan is generally provided under a program that stipulates the specific policies and measures a country has agreed to implement. As part of an agreement to receive IMF financing, a member country may agree to implement policy measures, known as conditions, designed to resolve its balance-of-payments problems, overcome the problems that led it to seek financial aid, and help ensure that it can repay the IMF. The IMF also conducts periodic program reviews to assess whether the IMF-supported program is broadly on track and the country has met established conditions, or whether modifications are necessary to achieve the program's objectives. Based on these reviews of a country's performance, including whether the country has implemented conditions according to a specific timetable, the IMF Board determines whether the country will receive subsequent installments of IMF funding. The IMF's resources are provided by member countries through quota contributions, loan provisions to the IMF, and member contributions for lending to low-income countries. When a country joins the IMF, it pays a quota to the organization, which may increase when IMF members agree to increase the IMF's capital. Each IMF member country is assigned a quota, based broadly on its relative size in the world economy. A member's quota determines its maximum financial commitment to the IMF and its voting power and has a bearing on the amount of IMF financing a member can receive. The IMF also may enter into multilateral or bilateral borrowing arrangements to increase its resources. Moreover, resources for IMF-supported programs in low-income countries come from funding outside quota resources, including member contributions and the IMF's own resources. In April 2009, the G-20 world leaders endorsed measures to significantly increase the IMF's available resources to $750 billion and also supported new SDR allocations of about $250 billion to increase the international reserves of member countries. An additional special SDR allocation of about $30 billion came into effect in September 2009 under the Fourth Amendment. The sources of the IMF's increase in resources include both increased quota contributions and a significant amount of borrowing from some member countries. Part of these additional resources includes funding from the United States. In June 2009, the U.S. Congress passed legislation that appropriated about $7.8 billion for an increase in the U.S. quota to the IMF. It also made available up to about $117.5 billion for loans to the IMF. Congress set some provisions for the IMF as part of the legislation, including language directing U.S.
officials to oppose loans to countries that have repeatedly supported acts of international terrorism and to programs that would force developing countries to cap spending on health care and education. When signing the legislation, the President issued a statement indicating that certain provisions within the legislation would interfere with his constitutional authority to conduct foreign relations and that he would not treat these provisions as limiting his ability to engage in foreign diplomacy or negotiations. In addition to the increase in the IMF's overall resources, the IMF Board agreed in July 2009 to measures that will boost the IMF's concessional lending capacity available to low-income countries to $17 billion through 2014. Since August 2008, the IMF has dramatically increased its commitment to lend to member countries in response to the global economic crisis, as shown in table 1. The table shows the number of countries and the amount of agreed-upon IMF funding under various IMF lending arrangements, which are developed and tailored to address the specific circumstances of the IMF's member countries. As shown in table 1, the number of countries with Stand-By Arrangements, typically middle- and high-income countries facing crisis and seeking to resolve their short-term balance-of-payments problems, has significantly increased. See appendix II for a list of the countries that have been approved to receive funding under an IMF-supported program as of August 2009. The IMF-supported program is intended to help countries achieve macroeconomic stability and contains objectives, targets, macroeconomic policies, and conditions. The design of an IMF-supported program involves a complex process, which includes the use of a macroeconomic framework, economic judgment, and discussions between the IMF staff and country officials regarding trade-offs among different objectives and policies. An IMF-supported program is defined by its objectives and the link between those objectives and the macroeconomic policies used to achieve them, with the specific content of a country's program reflecting the country's characteristics and circumstances. For example, IMF-supported program objectives in low-income countries are broadly geared toward the long-term goals of increasing economic growth and reducing poverty. In middle- and high-income countries, particularly those experiencing acute crises, IMF-supported programs generally aim to stem capital outflows, restore confidence, and bring about a recovery in the short to medium term. The macroeconomic framework used to develop the IMF-supported program consists of the following elements, as illustrated in figure 1: Targets for key macroeconomic variables: IMF-supported programs include targets to help achieve a country's objectives. In low-income countries, programs may include quantitative targets to limit the growth of the money supply and the budget deficit to lower inflation and maintain debt levels consistent with macroeconomic stability. In middle- and high-income countries, an IMF-supported program may include targets to stabilize the exchange rate, raise interest rates, and tighten credit conditions to stem the outflow of capital and restore investor confidence. Macroeconomic policies: Four types of macroeconomic policies—monetary, fiscal, external, and structural reforms—may be used to help a country achieve its objectives. Monetary policy is used to affect the growth of the money supply and interest rate levels.
Fiscal policy sets the amount and composition of government expenditures and government revenue to affect the size of the budget deficit. External policies concern the size of the trade deficit and the exchange rate, which can influence the amount of exports and imports and the buildup of international reserves. IMF-supported programs also are likely to include "macro-critical" structural reforms intended to help countries establish sound financial sector regulatory systems.

Conditions: IMF-supported programs may contain quantitative performance criteria (QPC) and macroeconomic structural conditions, which the country has agreed to implement to receive IMF disbursements. Examples of QPCs that address macroeconomic policy variables include (1) limits on government borrowing to curb the public debt, (2) a ceiling on the expansion of the money supply to manage the interest rate or contain inflation, or (3) a floor on international reserves to stabilize the exchange rate and weather shocks. According to the IMF, a country's performance criteria are frequently revised during the course of a program as external conditions evolve in a way not initially foreseen. Structural conditions are measures that the IMF and country authorities consider critical for the successful implementation of the program. Structural conditions may include specific measures to strengthen banking supervision, reform the tax system, improve fiscal transparency, and build up social safety nets. Although structural conditions are no longer used as performance criteria, the structural reforms that the IMF sees as critical to a country's recovery will continue to be monitored.

Because of the extensive linkages among the various economic sectors and the mutual dependence of policy instruments and targets, designing an IMF-supported program is complex and iterative and requires economic judgment. The process of designing the program begins when country authorities contact the IMF due to impending or ongoing macroeconomic instability or crisis. The IMF staff and country authorities establish the country's need for borrowing based on the specific underlying macroeconomic problems to be addressed and discuss program objectives and policies. Using a macroeconomic framework as an analytical basis for the IMF-supported program, IMF staff and country authorities use an iterative process to set targets for key macroeconomic variables, such as real GDP growth, inflation, and the budget deficit, and identify specific economic policies intended to achieve these targets. The framework is intended to ensure consistency of the target values for the macroeconomic variables among the various sectors of the economy and to align the IMF's macroeconomic and structural policy advice with the program's objectives, while incorporating country-specific factors. For example, the macroeconomic framework might be used to inform discussions about how to achieve the target of a lower budget deficit by weighing policy options that include reducing expenditures, raising revenues, obtaining grant financing, or modifying the goal. The process for designing the program usually culminates with country authorities and IMF staff agreeing on the goals, targets, and policies for a program that is both technically feasible and politically acceptable. Then, they seek agreement on the conditions, including QPCs and structural reforms, needed to complete the program. The IMF Board is then asked to discuss and approve the program.
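To make the mechanics of conditionality concrete, the sketch below represents a program's quantitative performance criteria as simple ceiling-and-floor records and checks reported outturns against them, in the spirit of a program review. This is a minimal illustration written in Python under our own assumptions; the variable names and figures are hypothetical and do not reflect any actual IMF program or the IMF's review methodology.

# Minimal sketch: QPCs as ceilings and floors checked against outturns.
# All variable names and numbers are hypothetical illustrations.

QPCS = [
    # (variable, kind, limit): a ceiling caps a variable; a floor sets a minimum
    ("net_domestic_assets_of_central_bank", "ceiling", 120.0),
    ("government_net_borrowing", "ceiling", 45.0),
    ("net_international_reserves", "floor", 800.0),
]

outturn = {  # hypothetical end-of-quarter data reported by the authorities
    "net_domestic_assets_of_central_bank": 115.2,
    "government_net_borrowing": 47.3,
    "net_international_reserves": 812.5,
}

def review(qpcs, data):
    """Split criteria into those met and those missed."""
    met, missed = [], []
    for variable, kind, limit in qpcs:
        value = data[variable]
        ok = value <= limit if kind == "ceiling" else value >= limit
        (met if ok else missed).append((variable, value, limit))
    return met, missed

met, missed = review(QPCS, outturn)
for variable, value, limit in missed:
    print(f"QPC missed: {variable} = {value} (limit {limit})")

In this stylized example, the borrowing ceiling is breached; in an actual program, a missed criterion of this kind could prompt a waiver request, a modification of the program's targets, or a delay in the next disbursement, as described above.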
In designing an IMF-supported program, a government has to make difficult trade-off decisions among different priority objectives and policies consistent with macroeconomic stability. The trade-offs required reflect the specific country circumstances and the severity of the macroeconomic imbalances. According to an IMF/World Bank document, for a combination of levels of key macroeconomic variables (including growth, inflation, the fiscal deficit, the current account deficit, and international reserves), there is a substantial gray area between macroeconomic stability and instability, where a country could enjoy a degree of stability but where macroeconomic performance could clearly be improved (see fig. 1). This gray area allows for negotiation between the country authorities and the IMF staff.

This process can be illustrated in the following hypothetical example. A country can be adversely affected by the global recession in a number of ways, including through declining export revenue as the price and volume of exports fall. The decline in exports is likely to lead to a rise in unemployment as private sector activity declines, contributing to increased poverty. Any effort on the part of the government to increase spending to counter this fall in aggregate demand is likely to increase the budget deficit, especially in the context of falling revenues. The increased spending also may contribute to inflationary pressures. If the country, prior to the crisis, was already experiencing a high debt level and an elevated inflation rate, the way forward can be quite complicated. If the budget deficit were to rise too much, inflation could increase to a point where the country would be at risk of macroeconomic instability, while insufficient domestic spending could lower economic growth, worsening the unemployment situation and increasing poverty. Financial assistance through an IMF-supported program can help mitigate the situation, but the government of the hypothetical country in this example still faces a difficult policy trade-off—how to reduce the impact of the recession on the poor without risking high inflation. Within this overall context, the discussions between IMF staff and government officials are likely to explore targets for the budget deficit and inflation that attempt to balance these two concerns. While the range of discussion regarding these targets is limited to what will achieve or maintain macroeconomic stability, country officials and IMF staff may have different perspectives on the risks of macroeconomic instability, the degree to which spending cuts will harm the poor, and the political feasibility of gaining parliamentary approval for significant spending cuts. These different perspectives could provide the basis for negotiating the targets contained in the program.

IMF-supported programs in each of the four countries we reviewed contain different sets of objectives, targets, and conditions that reflect each country's individual circumstances, trade-offs, and negotiations with IMF staff. The program in postconflict Liberia focuses on rebuilding capacity, whereas the program in Zambia, a country with significant natural resources but substantial poverty, centers on increasing economic growth. Hungary, a middle-income country, faced a rising risk of debt default; thus, the program concentrates on restoring investor confidence and reducing debt and expenditures.
A banking and currency collapse precipitated Iceland's request for a program, which focuses on recapitalizing the banks and stabilizing the currency. While IMF staff and country officials in all four countries generally agreed with the programs and have made progress in implementing them, each country has encountered some challenges in implementing conditions or achieving targets. Figures 2, 4, 6, and 8 show country background information for Liberia, Zambia, Hungary, and Iceland. Table 2 summarizes the context, circumstances, and objectives of the IMF-supported programs we reviewed.

For low-income countries, empirical evidence suggests inflation is detrimental to economic growth after it exceeds a critical threshold, which is broadly consistent with the inflation targets included in the IMF-supported programs we reviewed. Similarly, for middle- and high-income countries, the academic literature identifies weaknesses in macroeconomic policies that often precede economic crises; these weaknesses are consistent with the policies in the IMF-supported programs we reviewed.

Inflation targets are a prominent feature of IMF-supported programs in low-income countries, but there has been considerable debate about appropriate targets for these countries. Some believe that the IMF may have gone beyond the existing empirical evidence in targeting very low inflation, potentially compromising economic growth. Although the precise relationship between inflation and economic growth in low-income countries remains uncertain, the empirical literature over the past 10 years has reached consensus about some aspects of this complex relationship. For example, there is general agreement about the following in low-income countries: The relationship between inflation and growth varies depending on the level of inflation and potentially on other country-specific variables. Inflation is detrimental to medium- and long-term economic growth after it exceeds a critical threshold, implying that inflation can compromise growth if it is "too high." Inflation does not negatively affect economic growth at low levels—in fact, the relationship can be positive for this limited range. Reducing inflation below this critical threshold produces no additional economic growth or may cause a country to sacrifice some long-term growth benefits.

These findings are represented in figure 10, which provides an illustrative summary of the impact of rising inflation on economic growth in low-income countries. The central finding from the literature is that the relationship is nonlinear, with a critical point (point 1) where the relationship between inflation and growth changes significantly. To the left of point 1 in figure 10, inflation does not harm economic growth and, in most cases, a positive relationship is established. Beyond point 1, countries face a trade-off between the short-term cost associated with lowering inflation to achieve higher medium- and long-term growth and the cost of accepting higher inflation and therefore lower future growth. Anti-inflationary policies in the region left of point 1 can be counterproductive since the potential costs of reducing inflation are not offset by increases in economic growth or, at worst, can be compounded by declines in long-term growth rates. However, as inflation rises and the critical threshold is breached, the negative effects of inflation surface. Beyond this point, any increases in the inflation rate result in additional declines in economic growth.
As points 2 and 3 show, the marginal costs to long-term growth can either increase or decrease as inflation rises, although empirical research does not reach a consensus as to whether the relationship changes for moderate inflation rates. As a country approaches hyperinflation (point 4), it likely faces a complete breakdown in economic functioning. However, examining the inflation-growth relationship under these conditions generally goes beyond the scope of the literature we reviewed. Earlier academic literature raised doubts about whether moderate to high inflation was costly to economic growth, but recent literature suggests that the cost of inflation was understated because researchers failed to acknowledge the complex, or nonlinear, nature of the relationship between inflation and economic growth.

Although not definitive, the empirical literature supports the focus of IMF-supported programs on containing inflation in low-income countries. Excluding countries in currency unions or with currency boards, IMF-supported programs in low-income countries consistently target inflation in the 5 to 10 percent range. The 31 IMF-supported programs in low-income countries, as of July 2009, are designed to help the countries achieve and maintain macroeconomic and financial stability, including targeting single-digit inflation to avoid the risk that rising prices pose to economic growth. According to IMF documents, inflation above 10 percent is generally considered harmful to medium-term growth, while targeting inflation below 5 percent may not be appropriate given the potential benefits of modest inflation on product and labor markets. Of the 31 IMF-supported programs, the 20 programs that did not involve countries participating in currency unions all targeted single-digit inflation at no less than 5 percent (see table 3). In some cases, we found shorter-term inflation targets above 10 percent. For example, Sao Tome and Principe, and Guinea both have short-term targets in low double digits as part of the longer-term objective of gradually reducing inflation to the single digits.

The 11 other countries with IMF-supported programs participate in common currency unions (or operate a currency board) with inflation targets of 3 percent or lower that are set indirectly by the choice of exchange rate arrangements, not the IMF. Inflation targets in the common currency unions in table 3 are driven by the decision to maintain a fixed exchange rate, which requires member countries to keep the range of inflation close to inflation levels in the developed country or area to which the common currency is fixed. Almost all of the countries included in the lower half of table 3 belong to currency unions that peg their currency to either the euro or, in the case of Grenada, the U.S. dollar. The only exception is Djibouti, which is not part of a currency union but operates a currency board tied to the U.S. dollar. As a result, the inflation targets for these countries are largely independent of IMF inflation policy and tend to approximate the inflationary experiences of the euro area or the United States.

The empirical literature we reviewed, consisting of nine studies, provides estimates of the inflation rate (ranging from 3 to 18 percent) beyond which inflation hampers long-term growth for low-income countries. Inflation targets in IMF-supported programs, although at the lower end of the 3 to 18 percent range, are generally consistent with estimates in empirical studies. (See fig. 11.)
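The threshold studies summarized in figure 11 typically estimate a piecewise (spline) growth regression. The following is a stylized rendering of the specification used in studies such as Khan and Senhadji (2001), written under our own notation rather than taken from any single paper:

g_{it} = \alpha + \gamma_1 \ln(\pi_{it}) \cdot \mathbf{1}\{\pi_{it} \le \pi^*\} + \gamma_2 \ln(\pi_{it}) \cdot \mathbf{1}\{\pi_{it} > \pi^*\} + \theta' x_{it} + \varepsilon_{it}

where g_{it} is growth in country i and period t, \pi_{it} is inflation, x_{it} is a vector of control variables, and \mathbf{1}\{\cdot\} is an indicator function. The threshold \pi^* is not imposed; it is chosen by searching over candidate values for the one that best fits the data (for example, by minimizing the residual sum of squares). An estimate with \gamma_1 nonnegative or insignificant and \gamma_2 negative reproduces the pattern in figure 10: little or no growth cost below the threshold and a negative effect above it.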
Eliminating the lowest and two highest estimates produces an approximate range of 5 to 12 percent for the threshold, which is roughly consistent with the inflation targets of 5 to 10 percent in the 31 IMF-supported programs for low-income countries we reviewed. Across the studies, the estimated threshold is generally found to be higher for low-income than for high-income economies, providing some support for inflation targets in IMF-supported programs that tolerate higher inflation in low-income countries. (The bibliography for inflation-growth literature contains a full listing of the studies.) While there is some disagreement on where the critical threshold lies, there is consensus that it is lower than the level suggested by some studies prior to 2000. These studies provided some evidence that inflation did not significantly hurt economic growth until it reached the 20 to 40 percent range, but these findings have not proven robust for a number of reasons. Specifically, some of the threshold calculations were based on judgment rather than empirical estimation.

The complexity of the relationship between inflation and economic growth uncovered by the literature indicates it may be inappropriate to set a uniform policy target applicable to all countries. The empirical literature illustrates that combining countries at different levels of development can produce unreliable results, both in estimating the threshold in the inflation-growth relationship and in estimating the effect of low and high inflation on economic growth. This calls for flexibility in implementing inflation targets in individual countries because the "growth-maximizing" rate of inflation might be expected to differ, at least somewhat, even across low-income countries at various stages of development. For example, after increasing the sample of low-income countries, some researchers found that the estimated inflation threshold increased from 6 to 17 percent. Similarly, another study found no clear-cut threshold for developing countries and attributes this finding to the differences across countries grouped under the "developing" country label. One interpretation of these findings is that, even though the literature generally supports inflation targets in IMF-supported programs, the gains from a given reduction in inflation to single digits from somewhat higher levels might be outweighed by the cost in some cases. As a result, the relationships and trade-offs should be carefully evaluated for each country. Moreover, the methodologies and the validity of the estimates vary across studies, and none should be viewed as definitive.

Although recent empirical literature is informative and addresses a number of methodological issues that plagued earlier studies, other limitations remain. Important limitations of the literature analyzing the relationship between inflation and economic growth include (1) potential biases due to the omission of important variables; (2) biases resulting from the failure of some studies to address the fact that, in addition to inflation affecting economic growth, economic growth may also affect inflation; (3) issues with small samples and outlier effects; and (4) technical issues related to the modeling procedures employed. Moreover, the negative effects of inflation on the economy may extend beyond its effects on economic growth. As a result, studies that investigate only the impact of inflation on economic growth may understate the total negative effects of inflation.
Finally, there are relatively few systematic analyses of the critical threshold beyond which inflation hampers long-term growth; therefore, there is likely to be continued disagreement on where the threshold lies.

The academic literature identifies weaknesses in macroeconomic policies preceding economic crises. We focused on this aspect of the academic literature because, as previously noted, middle- and high-income countries have sought significant IMF financial assistance due to stress associated with the global financial crisis. The empirical academic literature we reviewed, which focuses on anticipating and explaining economic crises, attempts to identify leading indicators of crises, including weaknesses in macroeconomic policies. This body of research is known as the crisis "early warning system" literature. Economic crises include, in particular, (1) currency crises, which involve a speculative attack on or large depreciation of a currency; (2) banking crises, which involve the widespread failure of critical financial institutions; and (3) sovereign debt crises, which involve the default or near default of a government on its debt obligations. Economic crises can involve one or more of the types of crises described above and often precipitate moderate to severe recessions involving substantial losses of jobs, income, and production.

By analyzing a broad history of crises—crises that have occurred in dozens of countries over the last several decades—researchers have tried to identify common precursors of crises that may provide "early warning" that a crisis is coming. We reviewed 19 published or widely cited studies, identified in economic research databases or in citations of other studies, written since 1998. For each study, we documented the variables that were predictive of an economic crisis. Across the studies, several macroeconomic policy variables (and many variables unrelated to macroeconomic policy) consistently predict currency, banking, or debt crises. These variables suggest a number of specific policy weaknesses that make countries more vulnerable to crises, especially high inflation, high public indebtedness, and an overvalued currency. These vulnerabilities can represent unsustainable monetary, fiscal, and exchange rate policies, such as (1) high money growth, inconsistent with the government's commitment to a fixed exchange rate or broad exchange rate stability; (2) large and persistent budget deficits that significantly increase the stock of public debt and jeopardize the ability of the government to meet its obligations; or (3) a significant inflow of foreign capital without the central bank accumulating a buffer of international currency to guard against sudden capital outflows. Table 4 contains the specific macroeconomic policy variables identified by the literature and the nature of the related vulnerabilities. Several factors unrelated to macroeconomic policy also often precede crises, including external events (e.g., a crisis in a neighboring country), private sector behavior (e.g., significant private sector borrowing), and institutional or political factors (e.g., weak law enforcement). Crises are not singularly caused by unsustainable macroeconomic policies; there are often other factors or triggers that increase the likelihood of a crisis. Nevertheless, crises do not strike at random; poor macroeconomic policy management can greatly increase a country's vulnerability to crisis.
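The "signals" branch of this literature flags an indicator as informative when it tends to fire ahead of crises rather than in tranquil times, a property summarized by the noise-to-signal ratio described in appendix I. The sketch below shows the basic bookkeeping in Python; the 24-month window, the data, and the function name are our own hypothetical illustrations, not a reconstruction of any specific study.

# Minimal sketch of the signals-method noise-to-signal ratio (NTSR).
# Inputs are monthly 0/1 flags; the window and data are hypothetical.

def noise_to_signal_ratio(signals, crises, window=24):
    crisis_months = {i for i, c in enumerate(crises) if c}
    a = b = c = d = 0  # the 2x2 classification of months
    for i, s in enumerate(signals):
        crisis_ahead = any(i < m <= i + window for m in crisis_months)
        if s and crisis_ahead:
            a += 1  # good signal: a crisis follows within the window
        elif s:
            b += 1  # noise: a false alarm with no crisis in the window
        elif crisis_ahead:
            c += 1  # missed: a crisis ahead but no signal issued
        else:
            d += 1  # correctly quiet
    # Share of false alarms in tranquil months over share of good signals
    # in pre-crisis months; a value below 1 means more signal than noise.
    return (b / (b + d)) / (a / (a + c))

# Hypothetical indicator that fires twice, once ahead of a crisis at month 30:
signals = [1 if i in (10, 50) else 0 for i in range(60)]
crises = [1 if i == 30 else 0 for i in range(60)]
print(round(noise_to_signal_ratio(signals, crises), 2))  # 0.67

An indicator with a ratio of 0.5 or less, the cutoff used in this review, signals correctly (relative to its opportunities) at least twice as often as it false-alarms.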
While the crisis "early warning system" literature can inform the development of macroeconomic policy, it has limitations and provides qualitative rather than precise quantitative guidance. Conceptually, the literature identifies certain costs associated with loose macroeconomic policies that are important considerations to include in the full accounting of the costs and benefits associated with macroeconomic policies, but these are not the only considerations. Furthermore, the literature has a number of methodological limitations. Specific limitations that we identified during our review of the crisis literature include samples of countries selected in a way that overstates the predictability of crises, analytical approaches that fail to account for multiple causes of economic crises, and crisis definitions that may not correspond to severe economic consequences (e.g., the inclusion of speculative attacks that did not result in actual currency depreciation in the definition of currency crises). The limitations we identified may introduce biases into estimates of policy effects in studies where they are present. The exclusion of noncrisis countries in some analyses may, for example, overstate the level of international reserves necessary to guard against crises in some countries. These limitations suggest caution in interpreting the precise numerical results of the literature.

The central macroeconomic policy weaknesses identified by the literature closely correspond to the macroeconomic policy areas upon which IMF-supported SBA programs focus. The IMF-supported programs in middle- and high-income countries we reviewed have macroeconomic program requirements (generally QPCs) designed to limit money growth and inflation, limit public debt, and accumulate international reserves. Similarly, high debt, high inflation, and low international reserves are important macroeconomic policy weaknesses that make a country more vulnerable to crises. We reviewed certain quantitative macroeconomic program requirements in all SBA programs in which governments have drawn IMF funds, which included 13 countries for which program documents were available as of July 2009. Because SBA programs are for countries considered to have temporary balance-of-payments needs, these countries are generally middle- or high-income countries that have been adversely affected by the global financial crisis. In each of the 13 programs, we found that quantitative program requirements supported the broad goals of (1) limiting money growth and inflation, (2) limiting public debt, and (3) accumulating international reserves. To support the broad goals identified above, IMF-supported programs often used a variety of specific policy variables. To limit money growth and inflation, SBA programs featured QPCs and other program requirements for a number of macroeconomic policies. These included ceilings on the net domestic assets of the central bank, inflation consultations, and ceilings on the amount of credit the central bank may extend to the government or the private sector. To limit public debt, 11 of the 13 SBA programs had QPCs for the government's budget deficit, and some had restrictions on the government issuing new external debt, and short-term external debt in particular. To accumulate international reserves, 12 of the 13 SBA programs had floors for either net international reserves or net foreign assets.
With respect to our case study countries, we have noted above that a large stock of public debt was a key factor driving the crisis in Hungary. In Iceland, the IMF noted in 2007 that the króna was overvalued by 15 to 25 percent, and inflation rose sharply before the banking crisis ensued. The focus of IMF-supported programs on the central policy weaknesses should assist countries in regaining investor confidence and addressing the underlying crisis vulnerabilities.

The Department of the Treasury provided written comments on a draft of this report, which are reprinted in appendix III. Treasury stated that it fully concurs with our conclusions that IMF-supported programs are shaped by negotiations with local officials in the context of country circumstances. In addition, Treasury noted our finding that the underlying economic literature on growth, inflation, fiscal and external sustainability, and financial stability drives IMF policy advice in lending programs. Treasury also emphasized that it encourages the IMF to work with low-income countries to increase spending in areas such as health and education. Furthermore, we received technical comments on a draft of this report from Treasury and the IMF, which we incorporated as appropriate.

We are sending copies of this report to other congressional offices, Treasury, and the IMF. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and major contributors are listed in appendix IV.

To examine the process for designing an International Monetary Fund (IMF)-supported program, we reviewed and analyzed the IMF's public documents and data. These documents include the IMF's Articles of Agreement; countries' letters of intent; IMF consultation reports and assessments of country progress under IMF-supported programs, including the Article IV consultation reports; program design papers; strategy papers; policy documents; and the IMF's Independent Evaluation Office reports. We also met with officials representing the IMF and the U.S. Department of the Treasury (Treasury) in Washington, D.C., to discuss the process of designing an IMF-supported program and the typical trade-offs associated with the program's policy decisions. Specifically, we used IMF policy documents and countries' letters of intent to identify the primary objectives and key targets of low- and middle- to high-income countries. We also reviewed the letters of intent provided to the IMF by countries receiving current IMF-supported financial programs, as well as updates of program reviews conducted by the IMF for these programs, available as of July 2009, to obtain examples of macroeconomic target variables, quantitative performance criteria, and structural reforms. We used interviews with IMF officials, as well as IMF policy papers, including IMF Institute documents, to clarify and explain the process of designing an IMF-supported program. To examine the IMF-supported programs in recipient countries, we selected four case study countries—Hungary, Iceland, Liberia, and Zambia—and reviewed documents and interviewed officials regarding these countries' IMF-supported programs.
We selected these four countries to examine how IMF-supported programs may differ in low-, middle-, and high-income countries, to obtain geographic diversity, and to include countries that had relatively large amounts of IMF financing as of April 2009. Our choice of countries was meant to be illustrative, not representative or generalizable to the population of IMF-supported programs. Documents we reviewed include the letters of intent provided to the IMF by these four countries; IMF reviews, reports, and press releases regarding these countries' IMF-supported programs; and country background documents provided by U.S. embassies in these four countries. In addition, we interviewed IMF and Treasury officials in Washington, D.C., to discuss the context of the four recipient countries and the countries' IMF-supported programs. In Hungary, Iceland, Liberia, and Zambia, we met with U.S. embassy officials, IMF staff, officials representing foreign governments, academics, and nongovernmental organizations to obtain their views about the IMF-supported programs.

To examine the extent to which the findings of empirical economic studies are consistent with the IMF's macroeconomic policies, we reviewed IMF documents and empirical studies. Using published or widely cited empirical studies identified in the economic research databases EconLit, Social Science Research Network, and Google Scholar, we identified key relationships between macroeconomic policies and economic growth and crises, and compared them with macroeconomic policies in IMF-supported programs. Specifically, we reviewed two parts of the academic literature relevant to the macroeconomic policy decisions in IMF-supported programs, namely the literature that links inflation with economic growth and the crisis "early warning system" literature that identifies leading indicators of currency, banking, and debt crises.

For the inflation and growth literature, we reviewed empirical studies, identified in economic research databases and published since 1999, that isolate the threshold level of inflation explicitly for lower-income countries. Due to the lag between submission and publication, we also included working papers produced during the last 2 years. Although we found these studies to be reliable for the purposes of identifying the range of estimates and comparing them with targets in IMF-supported programs, their inclusion in this report does not imply that we deem them to be definitive. (The bibliography contains a full listing of the studies linking inflation with economic growth that we reviewed.) To determine the inflation targets in all 31 IMF-supported programs in low-income countries, as of July 2009, we reviewed countries' letters of intent and IMF Article IV reports and recorded the target for each country. To ensure the data were collected consistently and accurately, each recorded target was independently reviewed and verified. In a few cases where no clear target was articulated, we used the long-term inflation projection for the country as the implicit target.

For the crisis "early warning system" literature, we reviewed 19 published or widely cited studies, identified in economic research databases or in citations of other studies, written since 1998. We chose 1998 as the starting year because of the large volume of research explaining or predicting economic crises that was performed after the Asian financial crisis. For each study, we documented the variables that were predictive of an economic crisis.
Where possible, we used a conservative standard for identifying key predictor variables. Under the signals method, we selected only variables that had a noise-to-signal ratio (NTSR) of 0.5 or less (less than 1 indicates more signal than noise). Noise indicates an incorrect prediction, while a signal indicates a correct prediction, so an NTSR of less than 1 implies more correct than incorrect predictions. Under traditional regression methods, we selected only variables whose coefficients had p-values less than 0.05. (Several studies we reviewed reported coefficients that were statistically significant at the 10 percent level.) (The bibliography contains a full listing of studies in the "early warning system" literature that we reviewed.) Similar to our identification of inflation targets in IMF-supported programs in low-income countries, we determined certain quantitative macroeconomic program requirements in all Stand-By Arrangements in which governments have drawn IMF funds, which included 13 countries for which program documents were available as of July 2009. We reviewed these countries' letters of intent, determined and recorded the policies associated with the macroeconomic program requirements for each country, and then independently verified that the requirements data were collected consistently and accurately.

We conducted our work from November 2008 to November 2009 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product.

Table 5 shows the countries that have been approved to receive financial assistance from the IMF as of August 31, 2009. The recipient countries are categorized by type of IMF lending arrangement.

In addition to the individual named above, Cheryl Goodman, Assistant Director; Lawrance Evans, Assistant Director; Marc Castellano; Michael Hoffman; Victoria Lin; and Roberta G. Steinman made key contributions to this report. The team benefited from the expert advice and assistance of Lynn Cothern, Etana Finkler, Joel Grossman, and Mary Moutsos.

To examine the relationship between economic growth and inflation in low-income countries, we reviewed the following academic articles. Our review of the literature included academic studies, using data from low-income countries, published from 1999 to 2009. Due to the lag between submission and publication, we included working papers produced in the last 2 years. The studies indicated with an asterisk (*) provided empirical estimates of the threshold level of inflation for low-income, developing, or nonindustrial countries that serve as the basis for figure 11 in the report. The list includes widely cited studies published prior to 1999, although they either combine low-income and high-income countries when estimating the threshold inflation rate or determine the threshold on the basis of judgment.

Barro, Robert. "Inflation and Economic Growth," Federal Reserve Bank of St. Louis Review 78 (1996).
Bruno, Michael and William Easterly. "Inflation Crises and Long-Run Growth," Journal of Monetary Economics 41 (1998).
Bruno, Michael and William Easterly. "Inflation and Growth: In Search of a Stable Relationship," Federal Reserve Bank of St. Louis Review 78, no. 3 (1996).
*Burdekin, Richard C.K., Arthur T. Denzau, Manfred W. Keil, Thitithep Sitthiyot, and Thomas D. Willett. "When Does Inflation Hurt Economic Growth? Different Nonlinearities for Different Economies," Journal of Macroeconomics 26 (2004).
Fischer, Stanley. "The Role of Macroeconomic Factors in Growth," Journal of Monetary Economics 32 (1993).
Ghosh, Atish and Stephen Phillips. "Warning: Inflation May Be Harmful to Your Growth," IMF Staff Papers 45 (1998).
*Gillman, Max, Mark Harris, and Laszlo Matyas. "Inflation and Growth: Explaining a Negative Effect," Empirical Economics 29, no. 1 (2004).
Gylfason, Thorvaldur and Tryggvi T. Herbertsson. "Does Inflation Matter for Growth?" Japan and the World Economy 13 (2001).
*Khan, Mohsin S. and Abdelhak S. Senhadji. "Threshold Effects in the Relation Between Inflation and Growth," IMF Staff Papers 48 (2001).
*Kochar, Kalpana and Sharmini Coorey. "Economic Growth: What Has Been Achieved So Far and How?" in Hugh Bredenkamp and Susan Schadler (eds.), Economic Adjustment and Reform in Low-Income Countries (Washington, D.C.: International Monetary Fund, 1999).
*Kremer, Stephanie, Alexander Bick, and Dieter Nautz. "Inflation and Growth: New Evidence from a Dynamic Panel Threshold Analysis," Sonderforschungsbereich Discussion Paper 649, Humboldt University, Berlin, Germany (2009).
*Kremer, Stephanie, Alexander Bick, and Dieter Nautz. "Inflation and Growth: New Evidence from a Panel Data Threshold Analysis," Goethe University Working Paper, Frankfurt (2008).
*Pollin, Robert and Andong Zhu. "Inflation and Economic Growth: A Cross-Country Non-linear Analysis," Journal of Post Keynesian Economics 28, no. 4 (2006).
Sarel, Michael. "Nonlinear Effects of Inflation on Economic Growth," IMF Staff Papers 43 (1996).
*Sepehri, Ardeshir and Saeed Moshiri. "Inflation-Growth Profiles Across Countries: Evidence from Developing and Developed Countries," International Review of Applied Economics 18, no. 2 (2004).
*Vaona, A. and S. Schiavo. "Nonparametric and Semiparametric Evidence on the Long-run Effects of Inflation on Growth," Economics Letters 94, no. 3 (2007).

We reviewed the following empirical studies in the crisis "early warning system" literature written since 1998. For each study, we documented the variables that were predictive of an economic crisis. Where possible, we used a conservative standard for identifying key predictor variables. Under the signals method, we selected only variables that had a noise-to-signal ratio (NTSR) of 0.5 or less (less than 1 indicates more signal than noise). Noise indicates an incorrect prediction, while a signal indicates a correct prediction, so an NTSR of less than 1 implies more correct than incorrect predictions. Under traditional regression methods, we selected only variables whose coefficients had p-values less than 0.05. (Several studies we reviewed reported coefficients that were statistically significant at the 10 percent level.)

Berg, Andrew and Catherine Patillo. "Predicting Currency Crises: The Indicators Approach and an Alternative," Journal of International Money and Finance 18 (1999).
Borio, Claudio and Philip Lowe. "Assessing the Risk of Banking Crises," BIS Quarterly Review (December 2002).
Burkart, Oliver and Virginie Coudert. "Leading Indicators of Currency Crises for Emerging Countries," Emerging Markets Review 3 (2002).
Bussiere, Matthieu and Marcel Fratzscher. "Towards a New Early Warning System of Financial Crises," Journal of International Money and Finance 25 (2006).
Ciarlone, Alessio and Giorgio Trebeschi. "Designing an Early Warning System for Debt Crises," Emerging Markets Review 6 (2005).
Davis, E. Philip and Dilruba Karim. "Comparing Early Warning Systems for Banking Crises," Journal of Financial Stability 4 (2008).
Demirgüç-Kunt, Asli and Enrica Detragiache. "The Determinants of Banking Crises in Developing and Developed Countries," IMF Staff Papers 45, no. 1 (1998).
Edison, Hali J. "Do Indicators of Financial Crises Work? An Evaluation of an Early Warning System," International Journal of Finance and Economics 8 (2003).
Esquivel, Gerardo and Felipe Larrain. "Explaining Currency Crises," Harvard Institute for International Development (1998).
Hemming, Richard, Michael Kell, and Axel Schimmelpfennig. "Fiscal Vulnerability and Financial Crises in Emerging Market Economies," Occasional Paper 218 (Washington, D.C.: International Monetary Fund, 2003).
Kamin, Steven B., John Schindler, and Shawna Samuel. "The Contribution of Domestic and External Factors to Emerging Market Currency Crises: An Early Warning Systems Approach," International Journal of Finance and Economics 12 (2007).
Kaminsky, Graciela L. "Currency Crises: Are They All the Same?" Journal of International Money and Finance 25 (2006).
Kaminsky, Graciela, Saul Lizondo, and Carmen Reinhart. "Leading Indicators of Currency Crises," IMF Staff Papers 45 (March 1998).
Kaminsky, Graciela L. and Carmen Reinhart. "The Twin Crises: The Causes of Banking and Balance of Payments Problems," American Economic Review 89 (1999).
Komulainen, Tuomas and Johanna Lukkarila. "What Drives Financial Crises in Emerging Markets?" Emerging Markets Review 4 (2003).
Kumar, Mohan, Uma Moorthy, and William Perraudin. "Predicting Emerging Market Currency Crashes," Journal of Empirical Finance 10 (2003).
Manasse, Paolo, Nouriel Roubini, and Axel Schimmelpfennig. "Predicting Sovereign Debt Crises," IMF Working Paper WP/03/221 (November 2003).
Manasse, Paolo and Nouriel Roubini. "'Rules of Thumb' for Sovereign Debt Crises," IMF Working Paper WP/05/42 (March 2005).
Shimpalee, Pattima L. and Janice Boucher Breuer. "Currency Crises and Institutions," Journal of International Money and Finance 25 (2006).

The International Monetary Fund (IMF) has significantly increased its total committed lending to countries from about $3.5 billion in August 2008 to about $170.4 billion in August 2009, as countries have been severely affected by the global economic crisis. IMF-supported programs are intended to help countries overcome balance-of-payments problems, stabilize their economies, and restore sustainable economic growth. Critics have long-standing concerns that the IMF has an overly austere approach to macroeconomic policy that does not sufficiently heed country viewpoints. To help address these concerns, the IMF recently stated that it has changed its policies, including by increasing its flexibility. GAO was asked to examine (1) the process for designing an IMF-supported program, (2) the IMF-supported programs in four recipient countries, and (3) the extent to which the findings of empirical economic studies are consistent with the IMF's macroeconomic policies. GAO analyzed IMF and recipient country documents; interviewed U.S., IMF, and foreign government officials, conducting fieldwork in four recipient countries with relatively large amounts of IMF financing; and analyzed published or widely cited empirical studies. GAO received written comments from the Department of the Treasury, noting its concurrence with the report's conclusions.
Designing an IMF-supported lending program involves a complex, iterative process based on projections for key macroeconomic variables; discussions between IMF staff and country officials regarding program goals, policies, and trade-offs; use of economic judgment; and IMF Executive Board approval. An IMF-supported program is intended to help countries achieve their objectives in the context of macroeconomic stability. Programs in low-income countries are broadly geared toward increasing economic growth and reducing poverty, and generally strive for low inflation and sustainable levels of debt. In middle- and high-income countries, programs generally aim to stem capital outflows, restore confidence, and stabilize the exchange rate by, for example, setting targets for budget deficits and international reserves. Trade-offs among the different combinations of objectives and policies allow for negotiations between the IMF staff and country officials, reflecting what is technically feasible and politically acceptable. IMF-supported programs in the four countries GAO reviewed—Liberia, Zambia, Hungary, and Iceland—include different sets of objectives, targets, and conditions that reflect country circumstances, based on negotiations between the IMF staff and country officials. In postconflict Liberia, the program focuses on rebuilding capacity and contains a target for maintaining a balanced budget with no borrowing. In Zambia—a country negatively affected by the recent economic crisis—the IMF-supported program is designed to increase economic growth, reduce poverty, and improve governance. Hungary, which faced a rising risk of default, has a program that focuses on restoring investor confidence while reducing debt and expenditures. A banking and currency collapse in Iceland precipitated the IMF-supported program, which contains some controversial approaches to monetary policy and banking reform. All four countries are making progress but face challenges in implementing conditions or achieving targets in their IMF-supported programs. The macroeconomic policies in IMF-supported programs are broadly consistent with the findings of the empirical literature GAO reviewed, although this literature lacks precise guidance for setting policy targets. For low-income countries, empirical evidence generally suggests inflation is detrimental to economic growth after it exceeds a critical threshold, which is broadly consistent with the inflation targets included in the IMF-supported programs GAO reviewed. For middle- and high-income countries, the literature identified specific policy weaknesses in advance of crises, including high inflation, high public indebtedness, and low international reserves. These weaknesses are consistent with the policies upon which the IMF focuses in the 13 programs in middle- to high-income countries GAO reviewed.
Deepwater is the largest and most complex procurement project in the Coast Guard's history. The acquisition is scheduled to occur over a 30-year period at a projected cost of $17 billion. It includes the modernization and replacement of an aging fleet of over 90 cutters and 200 aircraft used for missions that generally occur beyond 50 miles from shore. Deepwater currently accounts for almost one-third of the Coast Guard's acquisition staff and one-third of support personnel funding. Rather than using the traditional approach of replacing classes of ships or aircraft through a series of individual acquisitions, the Coast Guard chose to employ a "system-of-systems" acquisition strategy that would replace its aging deepwater assets with a single, integrated package of new or modernized assets. The primary objectives of the Deepwater program are to maximize operational effectiveness and to minimize total ownership cost (TOC) while satisfying the needs of the customer—the operational commanders, aircraft pilots, cutter crews, maintenance personnel, and others who will use the assets.

The Deepwater program has been in development for a number of years. Between 1998 and 2001, three industry teams competed to identify and provide the assets—aircraft, helicopters, cutters, logistics, and command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) systems—needed to transform the Coast Guard. In June 2002, the Coast Guard awarded a contract to Integrated Coast Guard Systems (ICGS) as the system integrator for Deepwater. Under a 5-year base contract with five additional 5-year options, ICGS is responsible for designing, constructing, deploying, supporting, and integrating the Deepwater assets to meet Coast Guard requirements. ICGS, a business entity governed by a nine-member Board of Directors, is jointly owned by Northrop Grumman and Lockheed Martin. Northrop Grumman and Lockheed Martin are the two first-tier subcontractors for the Deepwater program. They, in turn, provide Deepwater assets themselves or award second-tier subcontracts for the assets.

In our 2001 report, we identified several areas of risk for Deepwater. First, the Coast Guard faced potential risk in the overall management and day-to-day administration of the contract. We reported on the major challenges in developing and implementing plans for establishing effective human capital practices, having key management and oversight processes and procedures in place, and tracking data to measure system integrator performance. In addition, we, as well as the Office of Management and Budget, expressed concerns about the potential lack of competition during the program's later years and the reliance on a single system integrator for procuring the Deepwater equipment. We also reported there was little evidence that the Coast Guard had analyzed whether the approach carried any inherent risks for ensuring the best value to the government and, if so, what to do about them.

Since fiscal year 2002, Congress has appropriated almost $1.5 billion for the Deepwater program (see table 1), and as of September 2003, the Coast Guard had obligated $596 million to ICGS and $120.4 million for government program management, legacy asset sustainment, and facilities design work. In response to congressional direction to assess the feasibility of accelerating the Deepwater program, the Coast Guard reported in March 2003 that it could accelerate the implementation schedule from 20 years to 10 years and that this acceleration would provide increased operational capability sooner to support maritime homeland security. On March 1, 2003, the Coast Guard became part of the Department of Homeland Security (DHS).
A December 22, 2003, acquisition decision memorandum from the DHS Investment Review Board's (the Board) acting chairperson to the Coast Guard Commandant stated that the Deepwater program has been designated as a level 1 investment, meaning that it will be reviewed at the highest levels within the department. Further, because Deepwater interoperability within DHS and with the Department of Defense will be a major program challenge, the DHS Joint Requirements Council will be kept informed of Deepwater developments. While decisions as to specific assets or capabilities have been deferred to the Coast Guard acquisition executive, the Board will meet to discuss actual or projected changes to the program that affect cost, schedule, or performance. Noting that the Coast Guard has proposed accelerating the Deepwater program in fiscal year 2005, the Board directed the Coast Guard to ensure that risk management planning receives appropriate attention, that TOC is kept current, and that cost, schedule, and performance are monitored by measuring actual data against the baseline and projections to completion.

Complex, performance-based contracts such as Deepwater require effective government oversight to ensure that the intended results are achieved and that taxpayer dollars are not wasted. Both Coast Guard and ICGS officials have acknowledged that an unusually large degree of collaboration and partnership between the government and the system integrator must be in place for the Deepwater acquisition to be successful. However, a year and a half into the program, the key management and oversight components needed for program success have not been effectively implemented. Integrated product teams (IPTs) are the Coast Guard's primary tool for managing the program and overseeing the contractor, but these teams have struggled to collaborate effectively and accomplish their missions. While the Coast Guard has a Deepwater human capital plan in place to guide strategic planning for turnover among Deepwater personnel, the plan is not being followed and vacancies exist in key positions. Further, while it is still early in the program, the transition from existing Coast Guard assets to the new Deepwater assets has not been effectively communicated, a particular concern in light of schedule delays for some of the first assets to be delivered. Finally, a number of plans integral to the organization and management of the Deepwater program were finalized much later than anticipated. Appendix II contains additional information on these plans.

IPTs are the Coast Guard's primary tool for managing the Deepwater program and overseeing the system integrator. More than 30 of these teams, composed of Coast Guard, ICGS, and subcontractor employees from Lockheed Martin and Northrop Grumman, are responsible for overall program planning and management, asset integration, and overseeing the delivery of specific Deepwater assets. However, the teams have struggled to effectively carry out their missions. Our prior work at the Department of Defense has shown that effective IPTs have (1) expertise to master different facets of product development, (2) responsibility for day-to-day decisions and product delivery, (3) key members who are either physically collocated or connected through virtual means to facilitate team cohesion and the ability to share information, and (4) control over their membership, with membership changes driven by each team's need for different knowledge.
The Deepwater program manager reported IPT performance shortcomings as an issue in 14 of the 16 monthly program assessment reports provided to us. The following comments, made in Deepwater management reports by Coast Guard officials involved on a number of different IPTs, convey the difficulties faced by the teams in the first year and a half of the program. Though the comments in table 2 are not exhaustive, they demonstrate that the Deepwater IPTs have not been effective. Based on our review of program reports, we identified four major issues that have impeded the effective performance of the IPTs. Lack of timely charters to vest IPTs with authority for decision making. Authority for day-to-day decisions—required for program success in meeting cost, schedule, and performance objectives—is vested in IPTs through charters; yet charters for most of the Deepwater IPTs were not developed in a timely manner. In fact, 27 of the 31 IPT charters were not approved until after the first year of the contract. More than merely a paperwork exercise, sound IPT charters are critical because they detail each team’s purpose, membership, performance goals, authority, responsibility, accountability, and relationships with other groups, resources, and schedules. Between June 2002 and June 2003, 20 delivery task orders, authorized for issuance by the contracting officer, were executed by IPTs that did not have charters in place. Similarly, we found that some sub-IPTs, which address specific issues at a subasset or component level, were operating on an ad hoc basis without charters. For example, a November 2002 management report states that sub-IPTs addressed numerous issues concerning requirements for the national security cutter, even though their charters were not approved until a year later. In addition, two other sub-IPTs were not chartered. Inadequate communication among members. The Coast Guard’s Deepwater program management plan has identified collocation of IPT members as a key program success factor, along with effective communications within and among teams. Face-to-face informal communication enhances information flow, cohesion, and understanding of other members’ roles—all of which help foster team unity and performance. Yet only 3 of the 31 operating IPTs are entirely collocated, meaning that every IPT member is in the same building. The IPTs responsible for assets frequently have members in multiple locations. For example, the logistics process and policy development IPT has members in 6 different locations. As noted in table 2, Coast Guard IPT members have raised geographic separation as an issue of concern. ICGS developed a Web-based system for government and contractor employees to regularly access and update technical information, training materials, and other program information, in part to mitigate the challenges of having team members in multiple locations. However, Coast Guard documents indicate that the system is not being updated or used effectively by IPTs. In fact, the Deepwater program executive officer reported that, while the system has great potential, it is a long way from becoming the virtual enterprise and collaborative environment required by the contractor’s statement of work. High turnover of IPT membership and understaffing. Most of the Deepwater IPTs have experienced membership turnover and staffing difficulties, resulting in a loss of team knowledge, overbooked schedules, and crisis management. 
In a few instances, such as the national security cutter and maritime patrol aircraft, even the IPT leadership has changed. Also, key system integrator officials serving on the management IPTs have left the company. Both the Chief Financial Officer and the President of ICGS left their positions during the latter half of 2003, and an additional six of the nine ICGS Board members have changed. In addition, Coast Guard and system integrator representatives have been staffed on multiple IPTs, and, in many cases, meetings were attended by fewer than 50 percent of IPT members. A December 2002 Coast Guard document summarizing various programmatic recommendations cited a contractor study that recommended individuals be assigned to IPTs on a full-time basis and that they not serve on more than two teams. However, as of December 2003, 15 individuals were serving on three or more IPTs.

Insufficient training. The system integrator has had difficulty training IPT members in time to ensure that they could effectively carry out their duties, and program officials have referred to IPT training as deficient. IPT charters state that members must complete initial training before beginning team deliberations regarding execution of new contracts for Deepwater assets. IPT training is to address, among other issues, developing team goals and objectives, key processes, use of the Web-based system, and team rules of behavior. According to a Coast Guard evaluation report, IPT training was implemented late, which has contributed to a lack of effective collaboration among team members.

The Coast Guard hired a consultant to survey IPT members concerning teams' performance from July 2002 to September 2003. The three surveys consisted of questions about mission, team member cooperation, performance, communication, and integrated product and process development. The final report on the survey results highlighted the need for improved communications both within and among teams. Respondents were also concerned that workloads were too high.

In our 2001 report, we noted that as the Deepwater program got off the ground, tough human capital challenges would need to be addressed. A critical challenge we raised was the need to recruit and train enough staff to manage and oversee the contract. To date, the Coast Guard has not funded the number of staff requested by the Deepwater program and has not adhered to the processes outlined in its human capital plan for addressing turnover of Deepwater officials. These staffing shortfalls have contributed to the problems IPT members have identified—such as the struggle to keep pace with the workload and the difficulties in making decisions due to inconsistent attendance at IPT meetings. Although the Deepwater program has identified the need for a total of 264 staff in fiscal year 2004, only 224 positions have been funded, and only 209 have been assigned to the program. The Coast Guard's fiscal year 2004 funding for personnel was increased to $70 million; however, the Coast Guard did not request sufficient funds to fill the 40 positions that the Deepwater program identified as necessary. According to Coast Guard officials, $70 million is insufficient to fund their fiscal year 2004 personnel plan because they need $67 million of this amount just to fund current personnel levels. The assistant commandant has imposed a temporary hiring freeze and plans to monitor expenditures throughout the year to identify any available funding for additional positions.
Although we asked, Coast Guard officials did not explain why they did not request sufficient funds to adequately staff the program. In addition, the Coast Guard has not adequately addressed the imminent departure of Coast Guard officials from the Deepwater program. Coast Guard officials will leave each year due to the normal rotational cycle of military members (every 3 to 4 years) and retirements. The Deepwater human capital plan sets a goal of a 95 percent or higher “fill rate” annually for both military and civilian positions and proposes using a “floating” training position that can be filled by replacement personnel reporting for duty a year before the departure of the military incumbents. This position is meant to ensure that incoming personnel receive acquisition training and on-the-job training with experienced Deepwater personnel. However, the 2004 request for this training position was not funded, nor was funding provided for additional new positions identified as critical. In December 2003, the Director of Resources and Metrics and the Chief Contracting Officer left the Deepwater program, and the program manager is slated to leave in March 2004. In addition, by July 1, 2004, five key Coast Guard officials who oversee the work of the asset IPTs are scheduled to leave. Coast Guard officials told us that they have identified the military replacements that will join the program in the summer of 2004. Although Deepwater is still in the early stages, assets will start to be delivered incrementally to operating units soon. The first Deepwater assets—the 123-foot cutter and short range prosecutor—are scheduled to be delivered to operating divisions in 2004. Operating units will receive additional ships, aircraft, or C4ISR every year thereafter until the Deepwater program ends. However, the Coast Guard has not communicated decisions on how the new and old assets are to be integrated during the transition and whether Coast Guard or contractor personnel—or both—will be responsible for maintenance. Coast Guard field personnel, including senior-level operators and naval engineering support command officials, told us that they have not received information about how they will be able to continue meeting their missions using current assets while being trained on the new assets. They are also unclear as to whether the system integrator or Coast Guard personnel will be responsible for maintenance of the new assets. For example, although Deepwater officials have stated that maintenance on the new assets will be a joint responsibility, naval engineering support command staff had received no instruction on how this joint responsibility is to be carried out. Coast Guard officials told us that guidance on joint maintenance responsibility has not been completely disseminated throughout the Coast Guard, but said that ICGS has recently added representatives at the key maintenance and logistics sites to coordinate maintenance issues. One of the first Deepwater assets to be delivered is the 123-foot cutter. The Coast Guard is modifying its 110-foot cutter by adding 13 feet of deck and hull, a stern ramp, a superstructure, and communication equipment. The 123-foot cutter is an example of the transition challenges facing the Coast Guard. First, there is confusion over which of the cutters will be modified and when. 
The contract with ICGS calls for all 49 cutters to be modified; however, Deepwater officials are considering curtailing the modification efforts and accelerating the development of the fast response cutter instead. (The fast response cutter was originally planned to be delivered in 2018 as a replacement for the 123-foot cutter.) In addition, the Coast Guard identified 22 of the 110-foot cutters that, due to unexpectedly severe hull corrosion, required additional inspection and repair separate from the Deepwater modification plans. To date, $14.7 million in non-Deepwater funds has been made available to repair 8 of these cutters. Further, Coast Guard officials note that there are 4 cutters in operation in the Persian Gulf, making them unavailable for modification at this time. System integrator and Coast Guard officials expressed confusion about the status of the cutter modifications, hull repair program, and fast response cutter schedule. For example, ICGS officials indicated that they did not know the Coast Guard's plans for the 123-foot cutter modification. The Coast Guard is considering several options and has not made a final decision on the cutter modification effort. Transitioning the staffing and operations of current Coast Guard assets to Deepwater assets may be further complicated by schedule delays. Reliable information on the delivery of Deepwater assets is important to the planning and budgeting efforts of Coast Guard operators and maintenance personnel to ensure that current missions are met and existing assets are maintained. Delivery of the first 123-foot cutter and short range prosecutor is scheduled for March 2004, slipping from the original delivery date of November 2003 and the rescheduled date of December 2003. This delay is affecting the schedules for the remaining cutters under contract, according to the most recent program manager assessment. Program management reports also indicate that schedule milestones have slipped for the maritime patrol aircraft. The first two aircraft are currently scheduled to be delivered to operating divisions in late 2006 or early 2007, compared with the original plan of 2005. The IPT is proceeding with the design of the aircraft, even though Coast Guard approval to proceed has not been set forth in the form of a definitized contract for this asset. The target date for definitizing the contract is now April 2004. According to Office of Federal Procurement Policy guidance, a performance-based contract such as Deepwater should have measurable performance standards and incentives to motivate contractor performance. Contractors should be rewarded for good performance based on measurement against predetermined performance standards. In general, the contractor is to meet the government's performance objectives, at appropriate performance quality levels, and be rewarded for outstanding work. Further, sound internal controls are important to ensure that plans, methods, and procedures are in place to support performance-based management. Relevant information should be recorded and communicated to management and others in a form and a time frame that enables them to carry out their responsibilities. The Coast Guard's process and procedures for evaluating the system integrator's performance during the first year of the contract lacked rigor in terms of applying quantifiable metrics to assess performance, gathering input from government performance monitors, and communicating with and documenting information for the decision makers.
The process used to hold the system integrator accountable for results was not transparent and, in fact, contained several inconsistencies that raise questions as to whether the Coast Guard's decision to give the contractor 87 percent of the award fee was based on accurate information. The Coast Guard measures the system integrator's ongoing performance based on periodic assessments using weighted evaluation factors. The award fee for the first year of performance of the overall integration and management of the Deepwater program was based on an evaluation of the following five factors: overall program management, cost monitoring and control, quality, innovation, and flexibility. These evaluation factors are further defined in the contract's award fee plan. For example, innovation is the "extent to which innovation, designs, processes, and concepts have been introduced that result in operational performance improvements and/or total ownership cost reductions." While there will inevitably be a degree of subjectivity in award fee decisions, the Coast Guard lacks quantifiable metrics to make an assessment of the contractor's performance. Given the lack of specificity of the evaluation factors, it is not clear how they could be used to make such an assessment, particularly on a program as complex as Deepwater. Coast Guard officials acknowledged that the factors need to be better defined, with supporting metrics that would provide a more objective basis for future award fee assessments. In the meantime, a May 31, 2003, Coast Guard memorandum to ICGS indicates that the contractor will be rated based on three factors for the second year of performance rather than five. However, the factors are vague and undefined: quality, program management, and system engineering. Further, supporting metrics to measure these performance factors have not been developed. Under the Deepwater contract's award fee plan and the program management plan, technical specialists, known as contracting officers' technical representatives (COTR), are to provide their observations to a program evaluation board composed of the contracting officer, the program manager, and two COTRs. The Deepwater program executive officer then makes the final award fee determination based on the board's recommendations. The Coast Guard's award fee evaluation of the first year of ICGS's performance was based on unsupported calculations and relied heavily on subjective considerations. As a result, the basis for the final decision to provide the contractor an award fee rating of 87 percent, which falls in the "very good" range, was not well supported. For example, while all COTRs submitted comments, the assessment did not include numerical and adjectival ratings from all COTRs. Input from the COTR responsible for gauging the system integrator's performance for all efforts related to the design and delivery of ships was not included in the calculation at all. Until we spoke with him, the COTR did not know that his input was absent from the final performance monitor calculations, which produced a recommended rating of 82.5 percent. Input from two other COTRs was provided for some but not all of the five evaluation factors. A fourth COTR provided only adjectival ratings, whereas others provided numerical scores. Subsequently, and unbeknownst to the COTR, a program evaluation board member calculated a numerical score for this COTR's observations.
While an adjectival rating of "good," for example, could range from a score of 71 to 80, the board member scored each of the factors in the midrange. Scoring this COTR's adjectival ratings in the low or high end of the range would have produced a different outcome. Program evaluation board officials were not aware of the inconsistencies in the calculations until we informed them. One program evaluation board member raised concerns that the board's subsequent award fee recommendation of 90 percent was too high and that the assessment focused disproportionately on the system integrator's performance in the last part of the year rather than its performance over the entire year. In addition, the program manager's assessment stated that "overall program management . . . needs substantial improvement." Further, Coast Guard management reports throughout the first year of the contract cited various schedule, performance, cost control, and contract administration problems that required attention. Among the assets cited as needing attention were the maritime patrol aircraft, the short range prosecutor, the 123-foot cutter, and the logistics integration management system. Ultimately, the program executive officer awarded the system integrator a rating of 87 percent, resulting in an award fee of $4.0 million of the maximum $4.6 million annual award fee. Coast Guard officials told us that they will now assess ICGS's performance every 6 months, rather than annually. However, the contract has not been modified to reflect either the changes to the evaluation factors, discussed previously, or the new assessment period. The first 6-month assessment was scheduled for completion in December 2003, but as of March 2004, Coast Guard officials told us it was still ongoing. The contractor was eligible for a second award fee of up to $1.5 million in August 2003 for performance related to the continuous improvement of elements common to C4ISR and life cycle and logistics engineering for all assets. Coast Guard officials said that they awarded the system integrator 79 percent of the maximum award fee; however, they did not provide us with supporting documentation of the award fee determination process. The Coast Guard is scheduled to decide on extending ICGS's contract by June 2006, 1 year prior to the end of the first 5-year contract term. In 2001, the Coast Guard set a goal of developing measures, within a year after contract award, to conduct annual assessments of the system integrator's progress toward achieving the three overarching goals of the Deepwater program: increased operational effectiveness, lower TOC, and customer satisfaction. However, the Coast Guard's time frame for implementing metrics to gauge progress against these goals has slipped. Further, the baseline the Coast Guard is using to assess TOC will not provide the government with critical information it needs about the efficiencies of using the Deepwater approach. Therefore, the Coast Guard is not in a position to begin the decision-making process about whether or not to extend the contract past the 5-year base period. The time frame for the first review of the contractor's performance against the Deepwater goals has slipped. It was rescheduled for 18 months after contract award (December 2003), 6 months later than originally planned. Deepwater officials told us that the performance review is currently ongoing and is expected to be completed in March 2004.
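To illustrate how much the adjectival-to-numeric conversion can matter, the following is a minimal sketch, not the Coast Guard's actual method: the 71-to-80 "good" band, the 87 percent rating, and the $4.6 million maximum fee come from the report, while the equal weighting and the other monitors' scores are hypothetical.

```python
# Minimal sketch (not the Coast Guard's actual method) of how converting
# one monitor's adjectival rating to a number can move the overall result.
# The 71-80 "good" band, the 87 percent rating, and the $4.6 million
# maximum fee come from the report; the equal weighting and the other
# monitors' scores below are hypothetical.

GOOD_BAND = (71, 80)   # numeric range for an adjectival rating of "good"
MAX_FEE = 4.6e6        # maximum annual award fee

def overall_score(monitor_scores):
    """Average the monitors' numeric scores (equal weighting assumed)."""
    return sum(monitor_scores) / len(monitor_scores)

other_monitors = [90, 88, 85]  # hypothetical scores from other monitors

for label, value in [("low end", GOOD_BAND[0]),
                     ("midrange", sum(GOOD_BAND) / 2),
                     ("high end", GOOD_BAND[1])]:
    score = overall_score(other_monitors + [value])
    print(f"'good' scored at {label}: overall {score:.2f}, "
          f"implied fee ${MAX_FEE * score / 100:,.0f}")

# The reported outcome: an 87 percent rating yields $4.6M * 0.87 ~= $4.0M.
print(f"reported outcome: 87 percent -> ${MAX_FEE * 0.87:,.0f}")
```

Under these assumed inputs, moving one monitor's "good" rating from the low end to the high end of the band shifts the overall score by more than 2 points, enough to change the implied fee by roughly $100,000.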
While the Coast Guard has begun to develop models to measure the extent to which Deepwater is achieving increased operational effectiveness and reduced TOC, a decision has not yet been made as to which specific suite of models will be used. The former Deepwater chief contracting officer told us he anticipated that the metrics would be in place in year 4 of the contract, the same year the decision needs to be made to extend the contract. Other officials acknowledged that it is difficult to hold the contractor accountable for progress toward the goals this early in the program, but could not offer a projection as to when the operational effectiveness and TOC results would be forthcoming. Coast Guard officials noted the large degree of complexity involved in attempting to measure the system integrator's progress toward the Deepwater goals. In previous work, we found that assessing improvements in operational effectiveness and TOC may be difficult because performance data may reflect factors that did not result from the contractor's actions. Because the Deepwater program includes legacy assets, modified assets, and new assets, the line of accountability between the government and the system integrator is blurred. It is not always clear whether the Coast Guard or ICGS is responsible for changes in the performance or costs of Deepwater assets. Measuring the 123-foot cutter's performance, for example, is complicated by the fact that ICGS is responsible for the new 13 feet of deck and hull and other modifications, while the engine and the other 110 feet of the deck and hull are the Coast Guard's responsibility. Coast Guard officials said that they are measuring "operational performance," such as the number of search and rescue, drug interdiction, and migrant interdiction missions carried out by the current assets. However, they could not explain how these measures will be used to assess ICGS's progress toward improving operational effectiveness with Deepwater assets. The officials stated that the models they are using to measure operational performance for the various Coast Guard missions lack the fidelity to capture whether improvements may be due to Coast Guard or contractor actions, the capability of specific Deepwater assets, or even outside factors such as improved intelligence on drug smugglers. Program officials noted that it is difficult to hold the contractor accountable for operational effectiveness at this point, before Deepwater assets are delivered. Establishing a solid baseline against which to measure progress in lowering TOC is critical to holding the contractor accountable. However, the Coast Guard is using as the baseline ICGS's own projected cost of $70.97 billion plus 10 percent (in fiscal year 2002 dollars). Therefore, the government will not have the TOC information it needs to make a contract extension decision. Measurement of ICGS's cost as compared to its own cost proposal will tell the Coast Guard nothing about the efficiencies it may be getting using the Deepwater performance-based approach. Further, the baseline the Coast Guard is using has been significantly changed from that originally envisioned. The Deepwater program management plan, approved in December 2003, states that the estimated cost to replace individual Coast Guard assets under a traditional approach (i.e., without the ICGS Deepwater "system of systems" solution) is to be the "upper limit for TOC" that the contractor should not exceed.
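A back-of-the-envelope calculation, using only the figures above, shows the headroom built into the baseline now in use; the traditional-approach replacement estimate is not given in the report, so no comparison against it can be computed here.

```python
# Back-of-the-envelope check using only the figures above. The
# traditional-approach replacement estimate is not given in the report,
# so no comparison against it can be computed here.

icgs_projection_b = 70.97                    # ICGS's projected TOC, $ billions (FY2002)
baseline_in_use_b = icgs_projection_b * 1.10 # contractor's proposal plus 10 percent
headroom_b = baseline_in_use_b - icgs_projection_b

print(f"baseline in use: ${baseline_in_use_b:.2f} billion")            # ~$78.07 billion
print(f"headroom over ICGS's own proposal: ${headroom_b:.2f} billion")  # ~$7.10 billion
```

In other words, the contractor could exceed its own cost proposal by roughly $7 billion before breaching the baseline, which is why measuring against this figure reveals nothing about efficiencies relative to a traditional, asset-by-asset procurement.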
The officials could not explain why the program management plan, which sets forth the overall framework for managing Deepwater, contains a different TOC baseline than the one they are using. Further, changes in such variables as fuel costs or cutters’ operating tempo could result in additional changes being made to the TOC baseline. Coast Guard officials explained that proposed changes to the baseline would be approved by the program executive officer on a case-by-case basis. However, the Coast Guard has not developed criteria for potential upward or downward adjustments to the baseline. The Coast Guard has only recently begun to address the contractor’s progress in meeting the third overall goal of Deepwater, customer satisfaction. A January 9, 2004, report indicates that the Coast Guard had not yet identified the metrics needed to measure this goal. As a start, on January 12, 2004, a survey was sent to 25 senior leaders and program managers. The Coast Guard decided to use the system integrator approach 6 years ago. In our previous work, we found that given the Coast Guard’s reliance on a single system integrator for the Deepwater program, the agency would be at serious risk if it decides not to extend the contract. Because ICGS proposed the specific assets that became the Deepwater solution, a decision not to extend the current contract would require a new Deepwater acquisition strategy to be developed. Exit strategies and other means to deal with potential poor performance by the system integrator are important to mitigate these program risks. However, the Coast Guard is just beginning an internal review of the system integrator’s plan to transition out of the program in the event such action would be necessary. Further, Deepwater program officials indicated that it is not realistic to believe the Coast Guard would switch system integrators at this point in the program. They stated that they viewed their relationship with the contractor as a partnership and are committed to making it work. Competition is a key component for controlling costs in the Deepwater program and a guiding principle for DHS’s major acquisitions. The benefits of competition may be viewed as sufficient in the contract’s early years because, for the initial 5-year contract period, prices proposed by ICGS for equipment and software were based on competitions held among various subcontractors. However, beyond the first 5-year term, the Coast Guard has no way to ensure competition is occurring because it does not have mechanisms in place to measure the extent of competition or to hold the system integrator accountable for steps taken to achieve competition. The acquisition structure of the Deepwater program is such that the two first-tier subcontractors, Lockheed Martin and Northrop Grumman—the companies that formed ICGS and that developed the Deepwater solution—have sole responsibility for determining whether to hold competitions for Deepwater assets or to provide these assets themselves. Over 40 percent of the funds obligated to Lockheed Martin and Northrop Grumman have either remained with those companies or been awarded to their subsidiaries. Further, the system integrator uses a Lockheed Martin sourcing document, termed the open business model, to guide competition decisions made by the subcontractors. However, this guidance is a philosophy—not a formal process involving specific actions—that encourages competition but does not require it. 
The lack of transparency into competition and the government's lack of a mechanism to hold the contractor accountable raise questions about whether the Coast Guard will be able to control costs. Neither the Coast Guard nor the system integrator determines how suppliers for Deepwater assets are chosen. A Coast Guard official told us that the system integrator was hired to make these decisions because the agency lacked the expertise to do so. However, Lockheed Martin and Northrop Grumman, as the subcontractors, are solely responsible for deciding whether to hold competitions for Deepwater assets or provide them to the Coast Guard themselves (often referred to as "make or buy" decisions). Moreover, the Coast Guard has no contractual requirements with ICGS that provide transparency into significant make or buy decisions. Although the Coast Guard has decided to include achieving competition as one of the factors to be considered in decisions about extending the contract for future option terms, this review will occur after such subcontracting decisions are made. The subcontractors are not required to notify the Coast Guard prior to making a decision to provide Deepwater assets themselves rather than holding a competition. The Coast Guard's review of competition included in its award term plan will not address Lockheed Martin or Northrop Grumman decisions—increasingly important in subsequent years—of whether significant equipment should be procured from outside sources or built in-house. As of September 30, 2003, the Coast Guard had awarded $596 million in orders to the system integrator, ICGS. Table 3 shows that over 98 percent of this amount was then passed through to the two first-tier subcontractors. To date, the subcontractors managing the acquisition have frequently performed the work themselves. Based on their respective work scopes, the two companies either issue orders to second-tier subcontractors or retain the work for themselves. Table 4 shows that, as of September 30, 2003, Lockheed Martin planned to retain 42 percent of its obligated dollars and to award 58 percent to second-tier subcontractors. Most of these second-tier dollars will go to major subcontractors, i.e., those with obligations greater than $5 million. As shown in table 5, Northrop Grumman planned to retain 51 percent of its obligated dollars. The open business model, meant to guide the supplier sourcing process in the Deepwater program, has been characterized by the system integrator as a means of ensuring competition for Deepwater assets throughout the life of the program, thereby keeping costs under control. In October 2003, ICGS issued a policy statement on the open business model. The stated business approach of the guidance is to encourage second-tier suppliers to remain innovative and competitive by directing Lockheed Martin and Northrop Grumman, as the first-tier subcontractors, to (1) generally avoid the use of teaming agreements with suppliers and prohibit teaming agreements based on guaranteed work share, (2) defer second-tier supplier decisions as long as practicable so that changes in the marketplace can be considered, and (3) actively solicit market information and new suppliers. However, this guidance is a philosophy—not a formal process involving specific decision points—that does not ensure that competition will be considered. The December 2003 Deepwater performance measurement plan requests that the contractor prepare self-assessments of its efforts to promote competition.
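A rough sketch of the dollar flows implied by the figures above follows; the even split of the pass-through between the two first-tier subcontractors is an assumption for illustration only, since the report gives the total passed through and each company's planned retention percentage, but not the split itself.

```python
# Rough sketch of the dollar flows implied by the figures above, as of
# September 30, 2003. The even split of the pass-through between the two
# first-tier subcontractors is an assumption for illustration only.

orders_to_icgs_m = 596.0                    # $ millions awarded to ICGS
passed_through_m = orders_to_icgs_m * 0.98  # over 98% went to the first tier

lockheed_m = passed_through_m * 0.5         # assumed split
northrop_m = passed_through_m * 0.5         # assumed split

retained_m = lockheed_m * 0.42 + northrop_m * 0.51  # retention per tables 4 and 5
print(f"passed through to first tier: ${passed_through_m:,.0f} million")
print(f"retained by first tier (assumed even split): ${retained_m:,.0f} million")
```

Under these assumptions, well over $250 million of the roughly $584 million passed through would stay with the two companies that also decide whether to compete the work, which is the structural concern the report raises.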
However, the Coast Guard has no means of obtaining insight into the basis for the contractor's self-assessments. Moreover, the government still lacks a mechanism to hold the contractor accountable for ensuring that competition occurs. To date, there have been varying degrees of competition for the second-tier subcontracting relationships Lockheed Martin and Northrop Grumman have in place for the design, development, or production of Deepwater assets. Lockheed Martin and Northrop Grumman follow their own procurement procedures and guidance for determining whether competition will occur and selecting the suppliers who will be invited to compete for Deepwater assets. The competitions are not "full and open" in the way a typical government procurement would be, nor are they required to be. The federal procurement system requires "full and open" competition except in cases where certain statutory exceptions are met. "Full and open" competition means that all responsible sources are permitted to compete. ICGS officials identified four specific assets for which they believe the open business model philosophy was effective: the conversion of 110-foot to 123-foot cutters, the national security cutter, the maritime patrol aircraft, and the vertical take-off and landing unmanned aerial vehicle (VUAV). We found that, in some cases, teaming agreements were implemented in the proposal phase of Deepwater and were carried over when ICGS won the contract. In other cases, some degree of competition had occurred. In December 1998, Lockheed Martin Corporation, Ingalls Shipbuilding, Inc., and Halter-Bollinger Joint Venture, L.L.C., entered into a teaming agreement that included the 123-foot cutter modification. The agreement was established to make the capabilities of both Halter and Bollinger available to Lockheed Martin for all phases of the Deepwater program. Despite the open business model's prohibition of work share agreements, such an agreement is in place between Lockheed Martin and Halter-Bollinger. Halter-Bollinger will be responsible for the design and construction of all vessels equal to or less than 200 feet in overall length, with the exception of the national security cutter. For those ships greater than 200 feet and less than 241 feet, the company's work share is 25 percent of the total effort. The national security cutters are being designed and constructed by Northrop Grumman. Northrop Grumman awarded a contract to M. Rosenblatt & Son, Inc. for the cutters' preliminary design, but Northrop Grumman is responsible for the detailed design and construction. Lockheed Martin is responsible for the electronics. However, Northrop Grumman plans to hold competitions for the long-lead materials—such as the gas turbine, bulkhead seals, stern tubes, and rudder—for the cutters and has solicited pricing proposals from a number of subcontractors. Prior to the contract award to ICGS, Lockheed Martin solicited information from a number of companies for the maritime patrol aircraft, evaluating 16 aircraft proposals. In October 2000, Lockheed Martin signed a memorandum of understanding with CASA Aircraft USA, Inc. to provide an airframe for Deepwater and to help develop and market ICGS's Deepwater proposal. After some Coast Guard officials expressed concern about the aircraft model that had been selected for Deepwater, ICGS was awarded a task order to pay for an evaluation of alternative aircraft.
As a result of the evaluation, Lockheed Martin identified an alternative CASA aircraft to meet the Coast Guard's maritime patrol aircraft mission. For the VUAV, Lockheed Martin conducted a competition among six models. Bell Helicopter, Inc. was initially identified as the solution. After ICGS submitted its Deepwater proposal to the Coast Guard, Lockheed Martin identified a potential Northrop Grumman product based on market research. However, after evaluation of this alternative, Lockheed Martin selected one of the Bell products. The Coast Guard has embarked on a major transformational effort using an acquisition strategy that allows a system integrator to identify the Deepwater assets and to manage the acquisition process, with subcontractors retaining authority for all make or buy decisions. Such a strategy carries inherent risks that must be mitigated by effective government oversight of the contractor. The Coast Guard faces a tough challenge in holding ICGS accountable for results, along with the daunting prospect of starting over with a new approach should the contractor fail. Nevertheless, the integrity of the contractor oversight process must be enforced through such mechanisms as effective IPTs and a rigorous and transparent award fee determination process. Further, the Coast Guard must determine how to hold the contractor accountable for achieving the basic goals of the Deepwater program in order to position itself to make a contract extension decision. While there is no question that the success of Deepwater depends on an effective partnership between the government and the contractor, the Coast Guard must preserve its ability to change course if necessary. Solid baselines need to be developed so actual costs and operational effectiveness of the Deepwater assets can be accurately measured and reported. The current use of the contractor's proposed costs, plus 10 percent, as the TOC baseline—rather than the estimated cost to replace the assets via a traditional procurement approach—is troublesome. Further, because the program management plan does not reflect the change to the TOC baseline, we question whether this decision was well thought out and in the government's best interest. In addition to contractor oversight, the Coast Guard has not invested the resources needed to ensure that its own personnel are trained and staffed in sufficient numbers to carry out their duties. The disconnect between the process outlined in the human capital plan for ensuring a smooth transition as military personnel rotate out of Deepwater and the current situation—where key Deepwater officials are leaving the program without a chance to adequately train their replacements—is cause for concern as the Deepwater program moves forward. It is unclear why the Coast Guard has not devoted adequate attention to human capital needs. In addition, although the first Deepwater assets are just starting to be delivered, the lack of a solid and well-developed transition plan from legacy to Deepwater assets is already causing problems, as evidenced by the 123-foot cutter modification difficulties. The schedule delays for several of the assets further highlight the need for more focus on the transition to Deepwater assets. The concerns we raised in 2001 about the Coast Guard's ability to control costs in future years remain valid today.
Without a mechanism to hold the system integrator accountable for ensuring adequate competition, the Coast Guard cannot be sure that competition will be used to guard against cost increases that could jeopardize the program. This situation is especially risky given the acquisition structure of Deepwater, whereby the subcontractors, not the system integrator or the Coast Guard, are responsible for determining whether competition will occur for Deepwater assets.

We recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to take the following three actions to address Deepwater program management:

- In collaboration with the system integrator, take the necessary steps to make IPTs effective, including training IPT members in a timely manner, chartering the sub-IPTs, and making improvements to the electronic information system that would result in better information sharing among IPT members who are geographically dispersed.
- Follow the procedures outlined in the human capital plan to ensure that adequate staffing is in place and turnover among Deepwater personnel is proactively addressed.
- As Deepwater assets begin to be delivered to operational units, ensure that field operators and maintenance personnel are provided with timely information and training on how the transition will occur and how maintenance responsibilities are to be divided between system integrator and Coast Guard personnel.

Further, we recommend that the Secretary direct the Commandant to take the following six actions to improve contractor accountability:

- Develop and adhere to measurable award fee criteria consistent with the Office of Federal Procurement Policy's guidance.
- In all future award fee assessments, ensure that the input of COTRs is considered and set forth in a more rigorous manner.
- Hold the system integrator accountable in future award fee determinations for improving the effectiveness of IPTs.
- Based on the current schedule for delivery of Deepwater assets, establish a time frame for when the models and metrics will be in place with the appropriate degree of fidelity to be able to measure the contractor's progress toward improving operational effectiveness.
- Establish a TOC baseline that can be used to measure whether the Deepwater acquisition approach is providing the government with increased efficiencies compared to what it would have cost without this approach.
- Establish criteria to determine when the TOC baseline should be adjusted and ensure that the reasons for any changes are documented.

To facilitate controlling future costs through competition, we also recommend that the Secretary direct the Commandant to take the following two actions:

- Develop a comprehensive plan for holding the system integrator accountable for ensuring an adequate degree of competition among second-tier suppliers in future program years. This plan should include metrics to measure outcomes and consideration of how these outcomes will be taken into account in future award fee decisions.
- For subcontracts over $5 million awarded by ICGS to Lockheed Martin and Northrop Grumman, require Lockheed Martin and Northrop Grumman to notify the Coast Guard of a decision to perform the work themselves rather than contracting it out. The documentation should include an evaluation of the alternatives considered.

DHS forwarded us the Coast Guard's written comments on a draft of this report, which are reproduced in appendix I.
The Coast Guard provided us with additional technical comments, which we incorporated as appropriate. In an e-mail sent subsequent to the written comments, the Coast Guard stated that it agreed with our recommendations. The Coast Guard noted that the agency is learning and evolving as the Deepwater program matures and pointed out that many aspects of the Deepwater program—working with a system integrator, employing IPTs across multiple acquisition domains, and using a performance-based strategy for such a long-term undertaking—are new to the Coast Guard. The agency agreed that, because the IPT structure is new to the Coast Guard, many adjustments must be made to improve the teams' effectiveness. The Coast Guard clarified that the IPTs are, for the most part, contractor-led and that Coast Guard IPT members provide support and oversight. The focus of this report, however, is on the government's ability to oversee and manage the contractor. Deepwater management documents assert, as we point out in our report, that IPTs are the Coast Guard's primary tool for managing and overseeing the contractor. Regarding the award fee process, the Coast Guard stated that it has taken action to assimilate objective factors into future evaluations but expressed concern that our draft report may not have completely reflected the rigor that was applied in the first award fee decision. The Coast Guard stated that "no input from any of the monitors was left out of the evaluation process." While we revised our report to state that all COTRs submitted comments, the input from the COTR responsible for ships was not included in the numerical scores, which were then passed on to the fee-determining official. Further, the Coast Guard said that the score of 87 percent is "much lower than industry averages." In our view, however, the relevant consideration in determining the award fee amount is not industry averages, but rather the purpose an award fee is intended to serve. The rationale for offering award fees is to motivate superior effort on specific task and delivery orders, assets, or system performance attributes. Of importance here, the narrative description in the Deepwater award fee plan associated with a score of 87 percent ("very good") is "very effective performance, fully responsive to contract requirements . . . only minor deficiencies." As we state in our report, program management reports throughout the first year of the contract cited various schedule, performance, cost control, and contract administration problems that required attention. The Coast Guard agreed that competition is critical to controlling costs and indicated that it is planning efforts that will result in greater visibility and increased accountability to ensure competitive practices are being used to manage costs. To determine the steps taken by the Coast Guard to manage the Deepwater program and oversee system integrator performance, we examined the Deepwater contract, the program management plan, the human capital plan, briefings, budget justifications, and monthly and quarterly management reports. We analyzed IPT charters, membership lists, survey data, and staffing data, and we observed IPT and working group meetings. We interviewed various Deepwater program officials representing the Coast Guard, the system integrator, and the subcontractors, including program and asset-level program managers, contracting officers, and ICGS representatives in Arlington, Virginia, and Washington, D.C.
We visited the First and Seventh Coast Guard Districts in Boston, Massachusetts, and Miami, Florida, and interviewed operators and systems specialists for Coast Guard cutters, aircraft, and helicopters at those locations. We also met with Lockheed Martin and Northrop Grumman employees in Avondale, Louisiana, and Moorestown, New Jersey. We reviewed our prior reports and testimonies on the Deepwater project and integrated product teams. To assess Coast Guard efforts to establish effective criteria to assess and reward the system integrator's performance after the first year of the contract, we reviewed the award fee plan, the performance incentives plan, the interim and final award fee reports for the first year of contract performance, and other management documents. We interviewed Coast Guard and ICGS officials. Our analysis of this issue was hindered by the Coast Guard's failure to provide us with two additional award fee determinations, despite our repeated requests. To assess whether the Coast Guard has put in place measures to assess the contractor's progress in meeting the three overarching goals of Deepwater, we reviewed the performance measurement plan, the award term plan, and other performance measurement documents. Additionally, we interviewed the Deepwater program's Resources and Metrics staff, Coast Guard operations personnel, and program managers. We also reviewed our prior report on performance-based contracting attributes. To determine whether the Coast Guard is addressing the role and extent of competition for Deepwater assets, we examined ICGS's open business model policy statement and excerpts from Lockheed Martin's procurement manual and Northrop Grumman's acquisition policy manual. We discussed the open business model with officials from the Coast Guard, ICGS, Lockheed Martin, and Northrop Grumman. In addition, we reviewed financial data, including contract orders and ICGS spreadsheets. The Coast Guard provided us with the obligations to ICGS, Lockheed Martin, and Northrop Grumman. We did not independently verify the financial data. We performed our work from May 2003 through February 2004 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested congressional committees, the Secretary of Homeland Security, and the Commandant of the Coast Guard. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-4841 or Michele Mackin, Assistant Director, at (202) 512-4309. Other major contributors to this report were Penny Berrier, Ramona L. Burton, Christopher Galvin, Lucia DeMaio, Gary Middleton, and Ralph O. White Jr.

[Table: required Deepwater contract plans and due dates. One plan details responsibilities and processes to implement the contract; one describes activities and processes to ensure funding is available to execute the program; one establishes the structure and method for identifying and managing risks and for developing and selecting options to mitigate risks (due "soon after contract award"); and one outlines the processes and procedures used to implement the program's quality assurance process (due 90 days after contract award, September 2002).]

The Coast Guard's Deepwater program, the largest acquisition program in its history, involves modernizing or replacing ships, aircraft, and communications equipment. The Coast Guard awarded the Deepwater contract to Integrated Coast Guard Systems (ICGS) in June 2002.
The Coast Guard estimates the program will cost $17 billion over a 30-year period. ICGS is a system integrator, with responsibility for identifying and delivering an integrated system of assets to meet the Coast Guard's missions. GAO was asked to assess whether the Coast Guard is effectively managing the Deepwater program and overseeing the contractor and to assess the implications of using the Deepwater contracting model on opportunities for competition. Over a year and a half into the Deepwater contract, the key components needed to manage the program and oversee the system integrator's performance have not been effectively implemented. Integrated product teams, the Coast Guard's primary tool for overseeing the system integrator, have struggled to effectively collaborate and accomplish their missions. They have been hampered by changing membership, understaffing, insufficient training, and inadequate communication among members. In addition, the Coast Guard has not adequately addressed the frequent turnover of personnel in the program and the transition from existing to Deepwater assets. The Coast Guard's assessment of the system integrator's performance in the first year of the contract lacked rigor. For example, comments from the technical specialist responsible for monitoring the design and delivery of ships were not included in the evaluation scores. Further, the factors that formed the basis for the award fee determination were unsupported by quantifiable metrics. Despite documented problems in schedule, performance, cost control, and contract administration, ICGS received a rating of 87 percent, resulting in an award fee of $4.0 million of the maximum $4.6 million annual award fee. Further, the Coast Guard has not yet begun to measure the system integrator's performance on the three overarching goals of the Deepwater program--operational effectiveness, total ownership cost, and customer satisfaction. Its original plan of measuring progress on an annual basis has slipped, and Coast Guard officials have not projected a time frame for when they will be able to hold the contractor accountable for progress against these goals. This information will be essential to the Coast Guard's decision about whether to extend ICGS's contract after the first 5 years. Competition is critical to controlling costs in the Deepwater program and a guiding principle of Department of Homeland Security acquisitions. Concerns about the Coast Guard's ability to rely on competition as a means to control future costs contributed to GAO's description of the Deepwater program in 2001 as "risky." Three years later, the Coast Guard has neither measured the extent of competition among suppliers of Deepwater assets nor held the system integrator accountable for taking steps to achieve competition. Deepwater's acquisition structure is such that the two first-tier subcontractors have sole responsibility for determining whether to hold competitions for assets or to provide these assets themselves. The Coast Guard has taken a hands-off approach to "make or buy" decisions made at the subcontractor level. As a result, questions remain about whether the government will be able to control costs. |
To obtain a patent, inventors—or more usually their attorneys or agents—submit an application to USPTO that fully discloses and clearly describes one or more distinct innovative features of the proposed invention and pay a filing fee to begin the examination process. USPTO evaluates the application for completeness, classifies it by the type of patent and the technology involved, and assigns it for review to one of its operational units, called technology centers, each of which specializes in specific areas of science and engineering. Supervisors in each technology center then assign the application to a patent examiner for further review to determine if a patent is warranted. In making this determination, patent examiners must meet two specific milestones in the patent examination process: first actions and disposals. First action. At this milestone, patent examiners notify applicants about the patentability of their invention. Patentability is determined by assessing whether the invention is new and useful, or a new and useful improvement on an existing process or machine, through a thorough investigation of information related to the subject matter of the patent application that was already available before the date the application was submitted, called prior art. Prior art includes, but is not limited to, scientific publications and U.S. and international patents. Disposal. Patent examiners dispose of a patent application by determining, among other things, if a patent will be granted—called allowance—or not. Patent examiners receive credit, called counts, for each first action and disposal, and are assigned production goals on the basis of the number of production units—each consisting of two counts—they are expected to achieve in a 2-week period. The counts in a production unit may be any combination of first actions and disposals. The production goals that are used today to measure patent examiner performance are based on the same assumptions that USPTO established in the 1970s. At that time, production goals were set on the assumption that it should take a patent examiner a certain amount of time to review a patent application and achieve two counts, depending on the examiner's experience (as determined by position in the agency) and the type of patent being reviewed. As a result, these goals vary depending upon the patent examiner's position on the federal government's general schedule (GS) pay scale and the technology center in which the patent examiner works. For example, a GS-12 patent examiner working on data processing applications is expected to achieve two counts in 31.6 hours, whereas a GS-12 patent examiner working on plastic molding applications is expected to do so in 20.1 hours. GS-7 patent examiners working on those types of applications, however, are expected to achieve two counts in 45.1 and 28.7 hours, respectively. Patent examiner achievements are recorded biweekly, and, at the end of each fiscal year, those patent applications that have not been reviewed for first action are counted as part of USPTO's inventory of unexamined applications, otherwise known as the patent application backlog. In each of the last 5 years, USPTO has identified its annual hiring estimates primarily on the basis of available funding levels and its institutional capacity to train and supervise new patent examiners, and not on the basis of the number of patent examiners needed to reduce the existing backlog or review new patent applications.
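To make the count arithmetic above concrete, the sketch below computes the production units implied by the hours-per-unit figures cited in the report; it is a minimal illustration, and both the 80-hour biweekly schedule and treating expected output as simply hours divided by hours per unit are our simplifying assumptions rather than USPTO's actual goal formula.

```python
# Worked example of the count system described above (minimal sketch).
# The hours-per-production-unit figures are those cited in the report;
# the 80-hour biweekly schedule and the simple division below are
# assumptions, not USPTO's actual goal formula.

BIWEEK_HOURS = 80  # standard federal biweekly schedule (assumption)

hours_per_unit = {
    ("GS-12", "data processing"): 31.6,
    ("GS-12", "plastic molding"): 20.1,
    ("GS-7", "data processing"): 45.1,
    ("GS-7", "plastic molding"): 28.7,
}

for (grade, technology), hours in hours_per_unit.items():
    units = BIWEEK_HOURS / hours  # production units expected per biweek
    counts = units * 2            # each production unit is two counts
    print(f"{grade}, {technology}: {units:.2f} units "
          f"({counts:.1f} counts) per 80-hour biweek")
```

Under these assumptions, a GS-12 examiner on data processing applications would be expected to produce roughly 2.5 production units (about 5 counts) per biweek, while the same examiner on plastic molding applications would be expected to produce about 4 units, illustrating how sharply the goals vary by technology.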
Although this hiring-estimate process is consistent with workforce planning strategies established by the Office of Personnel Management (OPM) and has enabled the agency to better match its hiring estimates to its institutional capacity, it is unlikely that USPTO can reduce the patent application backlog through hiring alone. Specifically, USPTO begins the process of identifying projected hiring estimates as part of creating its budget submission for the Office of Management and Budget (OMB) 18 months before the start of the hiring year in order to meet OMB's submission timeline. After considering expected funding levels and available patent examiner workforce data, USPTO considers its institutional capacity to supervise and train patent examiners. For example, in identifying its fiscal year 2002 hiring estimate, USPTO determined that funding availability would limit the number of patent examiners the agency could hire, and established its estimate on the basis of the number of patent examiners the agency had hired in the most recent year. However, in fiscal years 2003 through 2006, USPTO determined that funding would not be a limiting factor, and the agency's hiring estimates were based primarily on its institutional capacity to supervise and train patent examiners. USPTO considers a number of factors in determining its institutional capacity to supervise and train new patent examiners. For example, it determines its supervisory capacity by considering the number of additional patent examiners who can be placed in a technology center. This number is limited by the number of supervisors available in each center who can sign patent application approvals and rejections and provide on-the-job training for new patent examiners. Although new patent examiners can review the prior art relating to patent applications, only supervisors can authorize a new patent examiner's decision to approve or reject a patent application. In an effort to avoid delays and inefficiencies in initial and final decisions on patent applications, the agency tries to ensure that the supervisor-to-patent-examiner ratio is about 1 supervisor for every 12 patent examiners. Similarly, USPTO's training capacity is determined by the number of patent examiners the agency believes it can train in a year. Training capacity was based on 2- or 3-week courses offered throughout the year and led by supervisory patent examiners. The courses could accommodate about 16 patent examiners each, and in fiscal year 2004, according to USPTO, the agency offered about 28 training sessions. Because USPTO's projected hiring estimates are established at least 18 months in advance of the hiring year, the agency continually refines the estimates to reflect changes that might occur during this period. For example, in 2002, when it created its budget submission to OMB, USPTO projected it would hire 750 patent examiners for fiscal year 2004. However, due to budget constraints, the agency actually hired 443 patent examiners in fiscal year 2004. Figure 1 shows USPTO's projected and actual hiring numbers for fiscal years 2002 through 2006. The differences between projected hiring estimates and the number hired occurred primarily because of funding availability. In fiscal years 2003 and 2004, according to USPTO, the agency's appropriations were significantly less than the agency's budget requests. As a result, the agency could not financially support the number of new patent examiners it had initially planned to hire.
In fiscal years 2005 and 2006, however, USPTO hired more patent examiners than originally planned because the agency's appropriation for those years was greater than anticipated. The way in which USPTO identifies annual patent examiner hiring estimates is generally consistent with workforce planning strategies endorsed by OPM. For example, OPM recommends that agencies regularly track workforce trends to ensure updated models for meeting organizational needs; base decisions on sources of information such as past workforce data; and include in their workforce planning processes a workforce analysis system that identifies current and future losses due to attrition. We found that USPTO generally followed these processes. Recognizing the need to increase its institutional capacity to hire more patent examiners, USPTO has taken steps to increase its training and supervisory capacity. To increase its training capacity, USPTO implemented an 8-month training program in fiscal year 2006 called the Patent Training Academy. According to USPTO, the academy provides the agency with a constant annual training capacity for 1,200 new patent examiners for each of the next 5 years. Moreover, USPTO officials believe that the academy may indirectly improve the agency's supervisory capacity because new patent examiners should be better prepared to start work in a technology center and therefore will need less supervision and on-the-job training. USPTO plans to monitor new patent examiners after they have graduated from the academy to determine if the agency can use this approach to increase its institutional capacity and, therefore, its future annual hiring estimates. Even with its increased hiring estimates of 1,200 patent examiners each year for the next 5 years, USPTO's patent application backlog is expected to increase to over 1.3 million at the end of fiscal year 2011. The agency has also estimated that if it were able to hire 2,000 patent examiners per year in fiscal year 2007 and each of the next 5 years, the backlog would continue to increase by about 260,000 applications, to 953,643 at the end of fiscal year 2011. Despite its recent increases in hiring, the agency has acknowledged that it cannot hire its way out of the backlog and is now focused on slowing the growth of the backlog instead of reducing it. Although USPTO is hiring as many new patent examiners as it has the annual funding and institutional capacity to support, attrition has continued to increase among patent examiners—one patent examiner has been lost for nearly every two hired over the last 5 years. For example, from the beginning of fiscal year 2002 through fiscal year 2006, USPTO hired 3,672 patent examiners. However, the patent examination workforce increased by only 1,644 because 1,643 patent examiners left the agency and 385 patent examiners were either transferred or promoted out of the position of patent examiner. As shown in figure 2, approximately 70 percent of the patent examiners who left the agency had been at USPTO for less than 5 years, and nearly 33 percent had been at the agency for less than 1 year. The attrition of patent examiners who were at the agency for less than 5 years is a significant loss for USPTO for a variety of reasons. First, attrition of these staff affects USPTO's ability to reduce the patent application backlog because these less experienced patent examiners are primarily responsible for making the initial decisions on patent applications—the triggering event that removes applications from the backlog.
Second, when these staff leave USPTO, the agency loses up to 5 years of training investment in them because patent examiners require 4 to 6 years of on-the-job experience before they become fully proficient in conducting patent application reviews. Third, the more experienced examiners who have the ability to examine more applications in less time have to instead devote more of their time to supervising and training the less experienced staff, thereby further reducing the agency's overall productivity. Finally, these workforce losses reduce the pool of potential supervisory patent examiners for the future and therefore impair USPTO's ability to increase its supervisory capacity and, ultimately, its hiring goals. We found that USPTO management and patent examiners disagree significantly on the reasons for the agency's attrition. According to USPTO management, personal reasons are the primary cause of patent examiners leaving the agency. Some of these reasons include the following: The nature of the work at USPTO does not fit with the preferred working styles of some patent examiners, such as those with engineering degrees who are looking for more "hands-on" experiences. Many patent examiners enter the workforce directly out of college and are looking to add USPTO to their resumes and move on to another job, rather than building a career at the agency, otherwise known as the "millennial problem." Patent examiners may choose to leave the area, as opposed to choosing to leave the agency, because their spouse transfers to a position outside of the Washington, D.C., area; the cost of living is too high; or the competition is too high for entry into the Washington, D.C., area graduate and postgraduate programs for those patent examiners who would like to pursue higher education. According to USPTO management, the agency has a number of ongoing efforts to help address these issues. For example, the agency is developing a recruitment tool to better assess applicant compatibility with the agency's work environment; targeting midcareer professionals during the recruitment process; and considering the creation of offices located outside the Washington, D.C., area to provide lower cost-of-living alternatives for employees. While Patent Office Professional Association officials—the union that represents patent examiners—agreed that in some cases personal reasons may contribute to patent examiners leaving the agency, they believe that the unrealistic production goals that the agency sets for patent examiners are primarily responsible for attrition. Specifically, according to union officials, unrealistic production goals have created a "sweatshop culture" within the agency that requires patent examiners to do more in less time and has therefore been a significant contributor to patent examiners' decisions to leave USPTO. To call attention to this concern, in April 2007 the union joined the Staff Union of the European Patent Office and other international patent examiner organizations in a letter declaring that the pressures on patent examiners around the world have reached such a level that in the absence of serious measures, intellectual property worldwide would be at risk. The letter recommended, among other things, an increase in the time patent examiners have to review patent applications. Patent examiners who participated in our survey generally agreed with union officials.
Specifically, approximately 67 percent of patent examiners, regardless of their tenure with the agency, said that the agency's production goals were among the primary reasons they would consider leaving USPTO. Moreover, we estimated that 62 percent of patent examiners are very dissatisfied or generally dissatisfied with the time USPTO allots to achieve their production goals, and 50 percent of patent examiners are very dissatisfied or generally dissatisfied with how the agency calculates production goals. In addition, a number of respondents noted that the production goals are outdated and have not changed in 30 years, and that some technologies for which they evaluate applications had not even been discovered at the time the goals were set. Fifty-nine percent of patent examiners believed that the production system should be reevaluated, including altering the production goals to allow more time for patent examiners to conduct their reviews.

We and others have reported in the past that the assumptions underlying the agency's production goals were established over 30 years ago and have not since been adjusted to reflect changes in science and technology. Moreover, USPTO uses these production goals to establish its overall performance goals for patent examiners, such as the number of first actions to be completed in a given year. However, from 2002 through 2006, the agency missed its projections in 4 of the 5 years.

Furthermore, according to our survey, patent examiners are discontented with the actions they have to take in order to meet their production goals. Specifically, 70 percent of patent examiners who participated in our survey reported working unpaid overtime to meet their production goals during the last year, with some reporting that they worked over 30 extra hours in a 2-week period. In addition, we estimated that 42 percent of patent examiners had to work while they were on paid annual leave in order to meet their production goals. The percentage of patent examiners working while on paid leave was significantly higher for those with longer tenure at the agency. We estimated that 18 percent of patent examiners who had been at USPTO from 2 to 12 months worked to meet their production goals while on paid leave, compared with 50 percent of patent examiners with over 5 years' experience. As one respondent to our survey explained, "Vacation time means catch up time." Another respondent summed up the situation as follows: "I know that the production goals are set to keep us motivated in order to help get over the backlog but if a majority of examiners cannot meet those goals without relying on unpaid overtime or annual leave then something is wrong with the system."

According to our survey results, 59 percent of patent examiners identified the amount of unpaid overtime that they have to put in to meet their production goals as a primary reason they would choose to leave USPTO, and 37 percent identified the amount of time they must work during paid leave in order to meet their goals as a primary reason to leave the agency. Even though the agency has missed its projections in 4 of the last 5 years, the extensive unpaid overtime that patent examiners must work in order to meet their production goals does not appear to be a concern for the agency.
When we asked USPTO management about the agency's policy for unpaid overtime to meet production goals, the Deputy Commissioner for Patent Operations told us, "As with many professionals who occasionally remain at work longer to make up for time during the day spent chatting or because they were less productive than intended, examiners may stay at the office (or remote location) longer than their scheduled tour of duty to work."

From 2002 to 2006, USPTO offered a number of different retention incentives and flexibilities, as table 1 shows. According to USPTO management officials, the three most effective retention incentives and flexibilities that they have offered are the special pay rates, the bonus structure, and opportunities to work from remote locations. More specifically:

Special pay rate. In November 2006, USPTO received approval for an across-the-board special pay rate for patent examiners that can be more than 25 percent above federal salaries for comparable positions. For example, in 2007, a patent examiner at USPTO earning $47,610 would earn $37,640 in a similar position at another federal agency in the Washington, D.C., area.

Bonus structure. The agency awards bonuses to patent examiners who exceed their production goals by at least 10 percent. For example, according to USPTO, in fiscal year 2006, 60 percent of eligible patent examiners who exceeded production goals by 10 percent or more received a bonus. As table 2 shows, USPTO awarded 4,645 bonuses to patent examiners that totaled over $10.6 million in fiscal year 2006.

Opportunities to work from remote locations. In fiscal year 2006, approximately 20 percent of patent examiners participated in the agency's telework program, which allows patent examiners to conduct some or all of their work away from their official duty station 1 or more days a week. In addition, when USPTO began a "hoteling" program in fiscal year 2006, approximately 10 percent of patent examiners participated in the program, which allows some patent examiners to work from an alternative location.

According to the results of our survey, patent examiners generally agreed that compensation-related retention incentives and efforts to enhance the work environment were among the most important reasons they would choose to stay at USPTO. As table 3 shows, the reasons examiners cited include the following:

Current total pay (excluding benefits)
The availability of the flexible work schedule program
The availability of a hoteling program
The availability of a teleworking program
The recent implementation of a special pay rate increase
The ability to be promoted to the next GS level
The availability of the law school tuition program
The availability of monetary awards
Access to an on-site fitness center
The availability of a transit subsidy program
The availability of on-site child care
The availability of flexible spending accounts (i.e., the program that allows examiners to pay for eligible out-of-pocket health care and dependent care expenses with pre-tax dollars)
The availability of an on-site health unit
Activities offered by the Work-Life Committee

Despite USPTO's efforts to hire more patent examiners annually and implement retention incentives and flexibilities over the last 5 years, the agency has had limited success in retaining new patent examiners.
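The compensation figures cited in the retention discussion above imply the following rough arithmetic. This is only a back-of-the-envelope check using the numbers reported in this statement; because the bonus total is described as "over $10.6 million," the average is approximate.

\[
\frac{\$47{,}610 - \$37{,}640}{\$37{,}640} \approx 26.5\% \quad \text{(consistent with a special pay rate "more than 25 percent" above comparable federal salaries)}
\]

\[
\frac{\$10{,}600{,}000}{4{,}645\ \text{bonuses}} \approx \$2{,}280\ \text{per bonus, on average, in fiscal year 2006}
\]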
Because the agency's production goals appear to be undermining USPTO's efforts to hire and retain a qualified workforce, we recommended in 2007 that the agency comprehensively evaluate the assumptions it uses to establish patent examiner production goals and revise those assumptions as appropriate. The Department of Commerce agreed with our findings, conclusions, and recommendation and agreed that the agency's hiring efforts are not sufficient to reduce the patent application backlog. It stated that USPTO is implementing initiatives to increase the productivity of the agency that will result in a more efficient and focused patent examination process. Once USPTO determines the effect of these initiatives on patent examiner productivity, it will reevaluate the assumptions used to establish patent examiner production goals.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have at this time. For further information, please contact Robin M. Nazzaro at (202) 512-3841 or [email protected]. Other contributors to this statement include Vondalee R. Hunt, Assistant Director; Omari Norman; Jamie Roberts; Carol Herrnstadt Shulman; and Lisa Vojta.

The U.S. Patent and Trademark Office (USPTO) helps protect U.S. competitiveness by granting patents for new ideas and innovations. Increases in the volume and complexity of patent applications have extended the time for processing them. Concerns continue about the agency's efforts to attract and retain qualified patent examiners who can meet the demand for patents and help reduce the growing backlog of unexamined patent applications. In 2007, GAO reported on (1) USPTO's process for making its annual hiring estimates and the relationship of these estimates to the patent application backlog; (2) the extent to which patent examiner hiring has been offset by attrition, and the factors that may contribute to this attrition; and (3) the extent to which USPTO's retention efforts align with examiners' reasons for staying with the agency. GAO recommended that USPTO comprehensively evaluate the assumptions it uses to establish its production goals. USPTO agreed to implement this recommendation once it determines the effect of recent initiatives designed to increase the productivity of the agency through a more efficient and focused patent examination process. This testimony is based on GAO's 2007 report, which was based in part on a survey of 1,420 patent examiners. See GAO, U.S. Patent and Trademark Office: Hiring Efforts Are Not Sufficient to Reduce the Patent Application Backlog, GAO-07-1102.

USPTO primarily determined its annual hiring estimates on the basis of available funding levels and institutional capacity to train and supervise new patent examiners, and not on the basis of the number of patent examiners needed to reduce the existing backlog of patent applications or review new patent applications. USPTO's process for identifying its annual hiring estimates is generally consistent with accepted workforce planning strategies.
However, because this approach does not consider how many examiners are needed to reduce the existing backlog or address the inflow of new applications, it is unlikely that the agency will be able to reduce the growing backlog simply through its hiring efforts.

Although USPTO is hiring as many new patent examiners as its budget and institutional capacity will support, attrition is significantly offsetting the agency's hiring efforts, and agency management and patent examiners disagree about the causes of attrition. Specifically, from 2002 through 2006, one patent examiner left USPTO for nearly every two hired, and 70 percent of those who left had been at the agency for less than 5 years. This represents a significant loss to the agency because new patent examiners are primarily responsible for the actions that remove applications from the backlog. According to USPTO management, patent examiners primarily leave the agency because of personal reasons, such as finding that the job is not a good fit. In contrast, 67 percent of patent examiners identified the agency's production goals among the primary reasons they would consider leaving the agency. These goals are based on the number of applications patent examiners must complete during a 2-week period. However, the assumptions underlying these goals were established over 30 years ago and have not since been adjusted to reflect changes in the complexity of patent applications. Moreover, 70 percent of patent examiners reported working unpaid overtime during the past year in order to meet their production goals. The large percentages of examiners who work overtime to meet production goals and who would consider leaving the agency because of these goals may indicate that the goals do not accurately reflect the time needed to review applications and are undermining USPTO's hiring efforts.

The retention incentives and flexibilities USPTO has provided over the last 5 years generally align with the primary reasons patent examiners identified for staying with the agency. Between 2002 and 2006, USPTO used a variety of retention flexibilities, such as a special pay rate, performance bonuses, and a flexible workplace, to encourage patent examiners to stay with the agency. According to USPTO management, their most effective retention efforts were those related to compensation and an enhanced work environment. GAO's survey of patent examiners indicates that most patent examiners generally approved of USPTO's retention efforts and ranked the agency's salary and other pay incentives, as well as the flexible work schedule, among the primary reasons for staying with the agency.
Human trafficking generally involves the use of force, fraud, or coercion to enslave individuals in situations that are exploitative and often illegal and dangerous. Since 2001, the U.S. government has contributed approximately $447 million worldwide to foreign governments, NGOs, and international organizations—such as UNODC, ILO, and IOM—to combat human trafficking. U.S. agency and international organization projects generally aim to prevent trafficking, protect victims, and prosecute traffickers. Organizations have different requirements for monitoring and evaluating their antitrafficking projects.

Although the crime of human trafficking can take different forms in different regions and countries around the world, most human trafficking cases follow a similar pattern—that is, traffickers use acquaintances or false advertisements to recruit men, women, and children in or near their homes, and then transfer them to and exploit them in another city, region, or country. According to the TVPA, victims of severe forms of trafficking include those who are recruited or transported for labor through the use of force, fraud, or coercion and for the purpose of subjecting them to involuntary servitude. The use of fraud, force, or coercion typically distinguishes human trafficking victims from individuals who are smuggled. In some cases, individuals may enter freely into agreements with smugglers and pay them to help them cross international borders. After arriving at their destination, however, individuals who do not understand the destination country's language or culture may be exploited by individuals who take advantage of their vulnerability.

Traffickers control their victims' living and working conditions by physically confining them, taking away their identity documents, and threatening their families. Traffickers also exploit victims' fears that authorities will prosecute or deport them if they seek help. Victims may be forced to work in legal or illegal and often dangerous settings, including brothels, sweatshops, agricultural businesses, and people's homes. Documented human trafficking cases have involved victims such as children forced to beg for money in cities, work in carpet shops, and participate in pornography and sex acts with adults; women held in slavery-like conditions as domestic servants, strip club dancers, and prostitutes; and men forced to perform work in the agricultural sector and on fishing vessels.

To combat the diverse forms of global human trafficking, 5 U.S. departments, USAID, and at least 15 international organizations have provided assistance to governments and civil society organizations in more than 100 countries. Assistance generally aimed to enhance efforts to

1. prevent human trafficking through public awareness, outreach, education, and advocacy campaigns;

2. protect and assist victims by providing shelters as well as health, psychological, legal, and vocational services; and

3. investigate and prosecute human trafficking crimes by providing training and technical assistance for law enforcement officials, such as police, prosecutors, and judges.

These categories of interrelated victim-centered assistance activities—prevention, protection, and prosecution—are commonly referred to as "the three p's." Each type of assistance is viewed as critical for reducing the incidence of human trafficking. Since 2001, U.S.
government agencies have provided approximately $447 million in foreign assistance to international organizations, NGOs, and foreign governments to combat human trafficking (see table 1). Of the U.S. agencies, State, Labor, and USAID have provided the most funding to combat global human trafficking. As we have previously mentioned, U.S. agencies support antitrafficking projects implemented worldwide by various international organizations, which also receive funding from other donor governments and organizations. As shown in table 2, the resources that UNODC, ILO, and IOM allocated to combating trafficking since 2000 totaled about $255 million. The U.S. government provided about $122 million to UNODC, ILO, and IOM to combat human trafficking, according to data provided by these organizations and State's Office to Monitor and Combat Trafficking in Persons (G/TIP). In addition, according to UNICEF's annual reports, between 2003 and 2005, UNICEF allocated more than $453 million to its worldwide child protection program, which includes projects to combat trafficking and the sexual exploitation of children.

Some U.S. agencies and international organizations have provided assistance that, although not specifically related to human trafficking, may help to combat trafficking by reducing individuals' vulnerability to becoming victims and by strengthening countries' judicial systems. For example, USAID, the World Bank, the Asian Development Bank, and the Inter-American Development Bank have implemented projects to root out corruption, build host governments' legal systems, and improve the economic conditions of vulnerable populations in developing countries. See appendix II for additional information on international organizations' missions and antitrafficking activities.

Monitoring and evaluation are important tools for managing projects and should be considered when designing projects. Performance measurement involves developing a logic framework to explain how an intervention is to achieve its intended outcomes. Monitoring helps agencies determine whether the project is meeting its goals, update and adjust interventions and activities as needed, and ensure that funds are used responsibly. Monitoring includes, among other things, the development of indicators that are linked as closely as possible to the variables identified in the logic framework. These indicators are to be used throughout implementation to assess whether the project is likely to achieve the desired results. Monitoring also involves the choice of baseline and target values for each indicator and includes plans for periodic performance reports and data quality reviews. Evaluation is needed to assess a project's impact or effectiveness. Project evaluation involves the development of a methodology that will be used to assess the project's impact and describes plans for collecting baseline, interim, and final data on project results.

The TVPA requires the President's Interagency Task Force to Monitor and Combat Trafficking in Persons to measure and evaluate the progress of the U.S. government's efforts to combat trafficking. The Trafficking Victims Protection Reauthorization Act of 2003 (TVPRA 2003) includes a congressional finding that additional research is needed to fully understand the phenomenon of trafficking in persons and to determine the most effective strategies for combating trafficking. The TVPRA 2003 also requires an annual report from the U.S. Attorney General to Congress to provide information on U.S.
government activities to combat trafficking in persons. In addition to these reports, Justice began preparing annual assessments of U.S. government activities to combat trafficking in persons in 2003. All six U.S. organizations' guidelines also require implementers to monitor and report on project performance. Labor also requires independent midterm and final evaluations for all of its international antitrafficking projects.

Global, regional, and country-level organizations recognize the importance of collaborating to effectively combat human trafficking. Although organizations involved in combating trafficking—governments, multilateral organizations, and donors—have implemented some practices to strengthen collaboration, to succeed they will need to overcome challenges that can impede collaboration, including varying levels of government commitment and capacity.

At the global level, several UN organizations have acknowledged the importance of collaborating to effectively combat human trafficking. The UN Chief Executives Board recognized the challenges in countering human trafficking and proposed establishing an interagency mechanism to strengthen coordination in 2005. In July 2006, a UN Economic and Social Council resolution requested that UNODC organize a meeting to coordinate the technical assistance that UN and other intergovernmental organizations provide. In December 2006, the UN General Assembly adopted a resolution recognizing that broad international cooperation between member states and intergovernmental and nongovernmental organizations is essential for effectively countering the threat of human trafficking, and it underlined the importance of bilateral, subregional, and regional partnerships, initiatives, and actions. This resolution further encouraged member states to initiate and develop working-level contacts among countries of origin, transit, and destination, especially among police, prosecutors, and social authorities.

Organizations at the regional level have also recognized the importance of collaboration. For example, in a September 2006 report, the Organization for Security and Cooperation in Europe's (OSCE) antitrafficking unit noted that, since trafficking is a transnational crime, collaboration is crucial to enable transnational mechanisms of communication and cooperation among governments, law enforcement, the judiciary, and NGOs. In addition, six countries in the Mekong subregion of Southeast Asia have called for strengthened cooperation to combat human trafficking under a process called the Coordinated Mekong Ministerial Initiative Against Trafficking (COMMIT).

At the country level, donor governments and UN organizations have also recognized the need to establish coordination mechanisms to, among other things, share information and leverage the comparative advantages of each organization. The Paris Declaration on Aid Effectiveness establishes the importance of donor coordination, stating that excessive fragmentation of aid at the global, country, or sector level impairs its effectiveness. It further states that a pragmatic approach to the division of labor and burden sharing increases complementarity and can reduce costs. Through the declaration, donors committed to make use of their respective comparative advantage at a sector or country level by delegating authority to lead donors for the execution of projects, activities, and tasks.
To draw on the collective strengths of the UN agencies and programs operating in a country, UN country teams and the host government develop a national analysis, called a common country assessment. They subsequently produce a national development assistance framework, which describes the collective UN response and the expected results to achieve national priorities. For U.S. government agencies, a 2002 presidential directive stated that strong coordination among agencies working on domestic and foreign policy is crucial. The directive called for departments and agencies to coordinate U.S. foreign assistance programs, including those that provide funding to governmental or nongovernmental organizations to combat trafficking in persons. State officials told us that this is done through the Senior Policy Operating Group (SPOG), which, among other activities, facilitates a review by SPOG programming agencies of each other's grant proposals for antitrafficking projects.

Organizations—governments, multilateral organizations, and NGOs—face several challenges in collaborating to combat human trafficking, including the following:

Government commitment varies. State's annual Trafficking in Persons Report, which analyzes and ranks foreign governments' compliance with minimum standards to combat trafficking, as outlined in the TVPA, illustrates governments' varied efforts to address the issue. For example, governments might not recognize trafficking as a problem. They may treat foreign trafficking victims as illegal immigrants and deport them back to their home countries, rather than protect them. They also may not recognize trafficking within their own borders as a problem. Moreover, some government officials are themselves involved in human trafficking.

Government capacity varies. Governments with greater resources and more established institutions have a greater capacity to address trafficking than countries that are poorer or less stable. In addition, changes in government leadership and personnel, due to elections, coups, assassinations, or other events, may result in the loss of expertise in combating human trafficking. Furthermore, governments may put trafficking under the purview of ministries with limited authority, such as women's ministries.

Organizations that combat trafficking vary in perspective and may be in competition for limited funds. Combating trafficking involves organizations—at the international and country levels—with expertise in raising awareness, assisting child and adult victims, and investigating trafficking cases, among other areas. Organizations may view trafficking through their own mandates or viewpoints and may perceive each other not only as collaborators but also as competitors for scarce resources. As such, they may not share information, which could lead to duplication and waste of funds.

Understanding of trafficking varies across languages and cultures. Countries approach trafficking in different ways. For example, some countries' national antitrafficking strategies do not include men as potential trafficking victims; also, in some countries, parents in villages often sell their children to work as domestic servants in large cities. Furthermore, translation issues can complicate establishing a common understanding of what constitutes trafficking. For example, although the legal term in Spanish for human trafficking is "trata de personas," English speakers have translated human trafficking into Spanish as "tráfico de personas," which in fact means human smuggling.
Organizations at the global level have defined a common outcome—to end human trafficking and slavery—and have recently initiated various efforts to strengthen collaboration. In March 2007, UNODC launched the Global Initiative to Fight Human Trafficking to generate political will, an action plan, and financial resources to combat trafficking worldwide. The steering committee of the initiative includes the six leading international organizations involved in combating trafficking in persons: UNODC, ILO, IOM, UNICEF, OSCE, and the Office of the United Nations High Commissioner for Human Rights. UNODC also created the Interagency Cooperation Group Against Trafficking in Persons in September 2006 with the intended outcomes of improving coordination between UN agencies and other international organizations and facilitating a holistic approach to preventing and combating human trafficking. The group's functions include exchanging information and promoting the effective and efficient use of resources.

Most existing regional coordination efforts that we reviewed have defined a common outcome and established action plans for carrying out antitrafficking activities. These efforts vary regarding whether they are specifically directed against trafficking or are included in discussions on migration or smuggling. In addition, they vary in the number of participating members, from as few as 6 countries in the Mekong subregion to as many as 56 countries in Europe, Asia, and North America. For example, the governments involved in a UN project that specifically focused on strengthening cooperation to combat trafficking in the Mekong subregion have agreed to undertake 18 actions to combat human trafficking as part of an effort called the COMMIT process. Officials stated that this agreement strengthens these countries' incentives to undertake the actions through mutual accountability. In addition, COMMIT developed an action plan that groups existing activities into a single framework delineating the roles and responsibilities of UN organizations, implementing partners, and governments in the six Mekong countries. Officials contrasted the COMMIT process, which includes a small group of countries in one region with interconnected trafficking problems, with larger organizations such as the Bali Process, which approaches trafficking issues in conjunction with issues related to smuggling in the Asia-Pacific region and has a larger geographic scope, consisting of 38 governments across several regions. These officials stated that having a larger geographic scope makes it more difficult for the governments of these countries to hold each other accountable for implementing the regional action plan.

In addition to these examples, OSCE initiated the Alliance against Trafficking in Persons, which established a partnership with major actors working to combat human trafficking to, among other things, develop joint strategies and provide OSCE participating states and others with harmonized decision-making aids. In Southeastern Europe, the International Center for Migration Policy Development has begun a project to facilitate the creation of a transnational referral mechanism by 10 governments in the region and national NGOs to improve transnational case management and victim protection. Furthermore, in the Western Hemisphere, the Organization of American States (OAS) held a conference in 2006 that resulted in guidelines for the 34 member states and OAS to combat human trafficking.
Organizations continue to face obstacles as they work on global and regional initiatives to collaborate to combat trafficking. For example, governments disagree on whether there is a difference between "forced" and "voluntary" prostitution. Such disagreements can hinder collaborative efforts to combat sex trafficking. These global and regional initiatives may eventually implement additional practices to enhance collaboration, but it is too early to determine the success of the current initiatives.

Although organizations—including host governments, UN organizations, U.S. government agencies, and other donor governments—in the three countries we visited have implemented practices to strengthen collaboration in combating trafficking, they continue to face challenges. None of the three countries has mechanisms to coordinate the efforts of all organizations involved in combating trafficking. Host governments bear ultimate responsibility for combating trafficking within their borders, and the governments of the countries we visited have taken some steps to collaborate. For example, the Indonesian and Thai governments have passed national antitrafficking laws and enacted national action plans that define common outcomes, outline strategies, and assign roles and responsibilities. However, neither government's action plan includes the trafficking of men in its definition of human trafficking. Officials in Indonesia stated that they expect to revise the plan to include the trafficking of men, as well as women and children, based on legislation passed in 2007. Although both the Indonesian and Thai governments hold interagency meetings, the ministries responsible for coordinating antitrafficking efforts have limited authority and operational capacity, according to officials we interviewed. Unlike Indonesia and Thailand, Mexico has neither a national antitrafficking law nor an action plan. Mexican officials stated that they convened one interagency meeting on human trafficking and plan to institute additional coordination mechanisms after the government passes a national antitrafficking law.

In Indonesia and Thailand, UN organizations, in conjunction with host governments, have developed country assessments and assistance frameworks to articulate overall development goals—including combating trafficking—joint strategies, and roles and responsibilities. UN organizations working on trafficking in Thailand also meet as part of the Mekong regional project that we previously discussed, although, according to one donor government official, attendance among UN organizations is sporadic. UN officials in Indonesia told us that they share information informally but do not meet on a regular basis to discuss trafficking.

According to U.S. government officials in Indonesia and Thailand, donor governments have made sporadic and informal efforts to leverage resources and avoid duplication of effort. For example, as a result of informal coordination, Justice and French government officials worked together on a criminal justice training project in Indonesia. In Thailand, according to a U.S. official, although most of the major donors attend the Mekong regional project's meetings, these meetings are generally focused on UN activities, with little opportunity for donors to coordinate.
Officials in Indonesia and Thailand acknowledged the need to establish regular bilateral donor coordination efforts on trafficking issues and, at the time of our fieldwork, had begun discussing the establishment of regular coordination meetings.

While U.S. government agencies overseas have developed some collaboration mechanisms to combat trafficking, they continue to face challenges in coordinating their efforts. The U.S. embassies in the three countries we visited, Indonesia, Thailand, and Mexico, include trafficking in persons in their mission performance plans, which establish combating trafficking as a component of the U.S. government's overall strategy in each country. U.S. officials in these countries told us they organize trafficking-specific meetings that include U.S. government agencies and their implementing partners, primarily to share information. The U.S. government officials who were involved in antitrafficking issues were responsible for other issues as well. In Indonesia, for example, the primary U.S. embassy contact for trafficking was also temporarily assigned as deputy chief of mission in East Timor. In Mexico, one agency official, whose responsibilities consisted solely of human trafficking issues, had been named to also serve as coordinator and primary contact for all U.S. antitrafficking efforts. After this official's departure in December 2006, her portfolio was added to the existing portfolio of another official from the same U.S. agency. At the time of our visit in February 2007, State, USAID, and DHS officials, who also covered other issues, were holding meetings to coordinate their antitrafficking efforts internally. However, a new U.S. antitrafficking coordinator for Mexico had not been designated. Consequently, some Mexican government officials expressed uncertainty about which U.S. agency official had the role of lead U.S. government coordinator for trafficking in Mexico.

Furthermore, some U.S. government officials noted challenges in coordinating between officials in Washington, D.C., and those overseas. For example, a U.S. official in Thailand was unaware of a U.S.-funded project in the country until that project hosted a conference and requested additional funding from the U.S. embassy. The project had not been included in the list of antitrafficking activities in Thailand that the official received from State's trafficking office, which is based in Washington. According to some agency officials overseas, not knowing what other agencies and bureaus plan to spend on antitrafficking activities in a particular country makes it difficult for them to determine how much of their budgets to allocate to antitrafficking activities. Although State's new Office of Foreign Assistance has begun to address the issue of better coordinating all U.S. foreign assistance by bringing together core State and USAID teams to discuss U.S. development priorities in each recipient country, some U.S. officials expressed uncertainty regarding which part of the U.S. government would be responsible for outlining a new country-level strategy and budget for combating trafficking.

Antitrafficking project documents we reviewed generally include monitoring elements, such as an overarching goal and related activities; however, they often lack other monitoring elements, such as targets for measuring performance. Various other factors also make it difficult to evaluate the impact of antitrafficking projects.
These factors include questionable estimates of the number of trafficking victims at the project level, which are needed to evaluate the effectiveness of specific antitrafficking interventions. Certain project elements may further impede evaluations, including short time frames and overly broad objectives. Because of these factors, few evaluations that determine impact have been completed. As a result, little is known about the impact of antitrafficking interventions.

Most of the project documents we reviewed generally include one or more monitoring elements but lack others. We reviewed documents for 23 U.S. government-funded antitrafficking projects in Indonesia, Thailand, and Mexico, which generally include statements of project goals and a description of activities. However, the majority of these documents lack a logic framework that clearly links activities with project-level goals, indicators, and targets. Specifically:

Eighteen of the 23 projects do not clearly explain how activities will achieve stated goals. For example, the project proposal for a State-funded project in Thailand does not provide clear linkages demonstrating how the training of local officials and screenings of a public awareness video will achieve the project's overall goal of reducing trafficking among vulnerable adolescents and women. In contrast, all 4 Labor-funded projects in Indonesia, Mexico, and Thailand clearly link project goals and activities. For example, 1 goal of a project in Mexico is to develop a system to identify networks of exploiters. Activities to achieve this goal include developing computer software and training government officials in its use.

Twenty-one of the 23 projects identify indicators, but of these, only 10 specify targets by which performance is measured. For example, a State-funded project in Thailand included the "number of victims rescued" and the "number of arrests of traffickers" as performance indicators but did not set numerical targets for measuring performance. In contrast, a State-funded project implemented by IOM in Indonesia established an expected result that 500 victims of trafficking receive rehabilitation and reintegration assistance. For the 13 projects that did not specify targets, the performance standards to which grantees were held accountable were not clear. Of the 10 projects that did specify targets, 5 explained how the targets were set. These 5 projects included all 4 of the Labor projects we reviewed.

Another element of monitoring project performance is to supplement implementing partners' reporting of programmatic and financial progress with independent review through site visits at the field level. For example, Labor officials reported that the Bureau of International Labor Affairs' Office of Child Labor, Forced Labor, and Human Trafficking has engaged ILO's external auditor to conduct audits of a sample of projects under ILO's International Program on the Elimination of Child Labor. This office also has contracted with a certified public accounting firm to conduct independent attestation engagements of its Education Initiative projects. Among the objectives of these audits and attestation engagements is an assessment of the accuracy and reliability of performance data from grantees' progress reports.

State G/TIP has 4 Washington, D.C.-based staff members who are responsible for overseeing projects in approximately 70 countries. G/TIP officials stated that because they cannot visit all of the projects, they rely on U.S.
embassy staff to provide additional field-level oversight. However, the office has not established written guidance for conducting such oversight. During our fieldwork, an embassy official expressed frustration at the lack of clear guidance and procedures. Embassy officials also told us that they meet with project staff but have other responsibilities that limit the time they can devote to overseeing antitrafficking projects. According to State's Bureau of Population, Refugees, and Migration (PRM) officials, the bureau has only 3 staff to monitor and oversee its antitrafficking projects currently being implemented in over 20 countries, and these staff cannot conduct site visits to all projects. PRM staff also stated that they review progress reports submitted by project implementers but do not have a database or system for compiling the information they receive from the field.

To address some of these concerns, State officials told us that they are taking steps to strengthen monitoring, including developing new indicators and written guidance for monitoring antitrafficking projects. State G/TIP officials told us that they are developing a system of key indicators to better inform management decision making that would be used consistently for each of the three main types of antitrafficking programs—prevention, protection, and prosecution. State PRM and IOM, its key implementing partner, have also been involved in an effort to develop and standardize an indicator framework across IOM missions, although this effort was not completed during the time of our review. Officials stated that this effort will serve as a way to exchange ideas about best practices in different parts of the world and will provide managers with information on implementation needed to make decisions about current or future projects.

Various factors impede impact evaluations of antitrafficking projects. First, data on human trafficking are questionable, including estimates of the number of trafficking victims, making it difficult to determine a preproject baseline. Second, elements of project design, such as overly broad objectives or short project duration, diffuse potential impact. Because of these factors, it has been difficult to evaluate the effectiveness of antitrafficking interventions, and few evaluations that determine impact have been completed. As a result, little is known about the impact of antitrafficking interventions.

The lack of accurate baseline estimates of the number of trafficking victims is a limitation to conducting evaluations. As we have previously reported, the accuracy of current estimates is questionable. Without estimates of the scope of human trafficking to use as baselines in project locations, it is very difficult to determine where interventions are most needed or where interventions would have the greatest impact. Developing baseline estimates is difficult for the following reasons:

Victims are a hidden population. Victims may be unaware, unwilling, or unable to acknowledge that they are trafficking victims. Therefore, it is difficult to reach them to collect information using standard sampling techniques.

Service providers may be unwilling to share victim data due to confidentiality concerns. For example, the global database maintained by IOM is not publicly available since assisted victims are in a precarious position and revealing their identity could have a detrimental effect on their safety.
The definition of the term "trafficking in persons" is broad and varies in meaning across different languages. As we previously discussed, the understanding of trafficking varies across languages and cultures. These variations in definition hinder the comparability of data across countries and organizations.

There are no commonly agreed-upon criteria for identification of human trafficking victims. This lack of criteria hinders the ability to identify victims, create consistent statistical databases, and design analytical tools for surveys and estimates.

Existing data may not be reliable. Developing countries, which are typically the countries of origin, have limited capacity for data collection, and their governments' commitment to combating trafficking may be insufficient. Thus, sufficiently reliable data needed for estimating trafficking incidence may not be available.

Elements in the design of certain antitrafficking projects, including a tendency to focus on very broad, high-level objectives across a diverse range of activities, diffuse projects' potential impact and create challenges for evaluations. When projects focus on overly broad objectives and contain too many types of activities, their potential impact is diffused, which makes them difficult to evaluate. For example, some antitrafficking projects have very high-level goals or objectives, such as "creating an environment for effective action against trafficking" or "strengthening the initiatives of government and others against human trafficking," and contain activities covering 2 or more of the 3 key types of antitrafficking interventions—prevention, protection, and prosecution. Of 153 U.S. international antitrafficking projects funded in fiscal year 2004, 2005, or 2006, 56 percent included 2 or more of these interventions and 29 percent included all 3 interventions. Activities included public awareness campaigns, victim assistance, and training for law enforcement officials.

While projects funded with greater resources could lead to more noticeable longer-term changes, the impact of a shorter-term, smaller-scale intervention may be difficult to attribute and quantify. Antitrafficking projects vary significantly in terms of their time frames and funding levels. For example, the projects we reviewed varied in duration from 1 year to about 4 years, and projects' funding levels varied from $15,000 to $6 million. Officials stated that some organizations, such as State G/TIP, tend to fund smaller, shorter-term projects, while Labor usually funds larger, longer-term projects. Overall, experts told us that organizations also have generally limited funding for impact evaluations and for research on the nature and scope of human trafficking.

Experts stated that another factor impeding evaluations is that little attention has generally been devoted at the start of projects to evaluation design. For example, questions related to determining the control group, the type of design for impact evaluation, the data that would be collected, and the analytical methods that would be most suitable are often not addressed before implementation. Furthermore, experts stated that although projects target several groups of beneficiaries, it is generally unclear how they can be reached and to which group they would be compared. As a result, it is difficult to determine what would constitute successful project implementation.
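The monitoring elements discussed above (a logic framework, indicators tied to goals, and baseline and target values for each indicator) can be made concrete with a small sketch. The following Python fragment is purely illustrative: the goal, indicator names, and numbers are hypothetical and are not drawn from any project we reviewed.

```python
# Minimal, hypothetical sketch of core monitoring elements: indicators tied
# to a project goal, each with a baseline and a target, so that progress
# can be measured against an agreed benchmark.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    baseline: float   # value measured before the intervention
    target: float     # value the project commits to reach
    actual: float     # value measured during or after implementation

    def progress(self) -> float:
        """Share of the baseline-to-target gap closed so far."""
        gap = self.target - self.baseline
        return (self.actual - self.baseline) / gap if gap else 1.0

# A project-level goal links to its indicators, mirroring a logic framework.
project = {
    "goal": "Victims receive rehabilitation and reintegration assistance",
    "indicators": [
        Indicator("victims assisted", baseline=0, target=500, actual=320),
        Indicator("officials trained", baseline=0, target=150, actual=150),
    ],
}

for ind in project["indicators"]:
    print(f"{ind.name}: {ind.progress():.0%} of target gap closed")
```

In this structure, a project that sets no targets, like 13 of the 23 projects discussed above, would leave the target field undefined, making it impossible to compute progress against any benchmark.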
Because of the difficulties in evaluating antitrafficking projects, the few evaluations that have been completed are qualitative rather than quantitative, focus on process rather than impact, and rarely trace victims over time. The few completed evaluations used qualitative methods that are valuable for documenting victims' perceptions and experiences, but, unlike quantitative methods, they cannot be used to generalize results to broader populations. For example, IOM's Office of Inspector General has carried out impact evaluations of a select number of antitrafficking projects. Our review of these evaluations shows that they covered interventions in various parts of the world funded by different donors. All of the available evaluations applied qualitative methodologies, consisting of document reviews, site visits, interviews, and focus groups with stakeholders. In addition, in a USAID-funded assessment of child trafficking victims residing at a shelter in Nepal, an evaluator used qualitative methods, such as observations and interviews, to collect information on the victims' daily lives, needs, and preparation for reintegration into the community. The evaluator concluded that skill-based training provided by the shelter does not adequately prepare the girls for their reintegration into their communities; however, this qualitative result could not be generalized to the national level.

Completed evaluations also typically focused on process rather than impact. Process evaluations are an important tool for improving service delivery and assessing program effectiveness by determining whether activities conform to program design. Impact evaluations assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. However, impact evaluation is an emerging field in antitrafficking interventions. For example, a process evaluation of a Labor-funded project aimed at reducing the victimization of minors who have been trafficked or are at risk of being trafficked in Thailand focused on the project's planning, implementation, and management. This process evaluation was designed to monitor implementation and assess project outcomes. However, the evaluation did not ultimately assess the project's impact in reducing the victimization of minors.

Finally, our review identified few evaluations that track individual victims who have been reintegrated into their communities over time. For example, ILO is currently pilot-testing an application of a tracer study methodology in the context of trafficking prevention. In addition, according to Labor officials, in some cases the office has conducted follow-up studies that examine a sample of beneficiaries to document changes that have occurred. Furthermore, IOM officials noted that several of its missions provided statistical information to gauge the success of its victim reintegration efforts over time based on specific indicators. According to experts, such evaluations are costly and time-consuming. Tracking victims by country can also be difficult because country boundaries are permeable and many trafficking routes cross national borders.

A GAO-convened panel of experts identified and discussed ways to address the factors that make it difficult to monitor and evaluate antitrafficking projects.
Panelists suggested approaches to improve the monitoring and evaluation of antitrafficking projects by improving information on the nature and severity of human trafficking and by addressing weaknesses in the design of antitrafficking projects. Panelists acknowledged that the lack of existing data on the nature and severity of human trafficking limits evaluations of antitrafficking projects, and they suggested ways to improve such information. According to the panelists, researchers need to gather evidence to answer the following questions:

What is the nature and severity of human trafficking? Panelists stated that it is necessary to understand the nature of trafficking in terms of its underlying conditions and the types of traffickers and victims. Trafficking is a multidimensional, complex problem that involves a wide range of victims; recruiters, brokers, and intermediaries; and abusive employers and sexual exploiters. Understanding the incentives of the people engaged in trafficking is an important first step. The severity of human trafficking can be measured by using qualitative and quantitative methodologies. Panel members suggested several sampling methods that have been used to sample other hard-to-reach populations, including the homeless, hidden migrants, missing and exploited children, domestic violence victims, inmates, and drug users. One suggested method is sampling of "hot spots"—an intensive search for victims in areas known to have high concentrations of victims or in areas to which many victims return. Other methods include adaptive cluster, double, indirect, and snowball sampling. (For a more detailed discussion of sampling methods, see app. III.) These methods could be used individually or in combination. Panelists further emphasized that, whenever feasible, it is important to use methodologies that are appropriate for the location sampled. In addition, it is critical to determine whether the results are unique to a certain location or whether they can be generalized to other locations. Panelists recommended that research start at the local level. Such research would identify successful small-scale interventions, increase knowledge of victims' needs, and develop meaningful performance measures. Lessons learned at the local level could then be expanded to national and regional levels.

What is a project's estimated effect on the nature and severity of human trafficking? Panelists emphasized the necessity of designing rigorous methods to evaluate antitrafficking projects, such as randomized control trials. For example, as baseline estimates in trafficking hot spots are obtained and interventions are undertaken in those locations to reduce trafficking, the use of "place-randomized trials" for evaluation would become possible. To determine an intervention's impact, data obtained from interventions in hot spots could then be compared with data from locations where there had been no interventions, as the sketch following this discussion illustrates. Although randomized trials may be difficult to execute for many trafficking projects, they are important ways to generate evidence about interventions' effectiveness. Randomized trials should be pilot-tested in carefully chosen settings so that the evaluator can identify and correct any problems encountered before expanding the trials to larger populations or areas. Panelists pointed out that such rigorous evaluation has occurred in the fields of public health and criminal justice.
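To illustrate the place-randomized design the panelists described, the following sketch randomly assigns hypothetical hot spots to intervention or comparison status and estimates the intervention's effect as the difference in mean outcomes between the two groups. Everything here is invented for illustration: the site names, victim counts, and assumed effect are not data from any actual study.

```python
# Illustrative sketch of a place-randomized trial: hypothetical trafficking
# "hot spots" are randomly assigned to intervention or comparison status,
# and the estimated effect is the difference in mean outcomes between groups.
# All site names and counts below are invented for illustration only.

import random

random.seed(42)  # make the random assignment reproducible

hot_spots = [f"site-{i}" for i in range(1, 21)]  # 20 hypothetical locations
random.shuffle(hot_spots)
intervention_sites = set(hot_spots[:10])  # half receive the intervention

def victims_identified(site: str) -> int:
    """Stand-in for fieldwork: simulated counts of victims per site.
    A real trial would use measured outcomes, not simulated ones."""
    base = random.randint(40, 60)
    # assume, for illustration, the intervention reduces victimization
    return base - 12 if site in intervention_sites else base

outcomes = {site: victims_identified(site) for site in hot_spots}

treated = [outcomes[s] for s in hot_spots if s in intervention_sites]
control = [outcomes[s] for s in hot_spots if s not in intervention_sites]

effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated effect (treated minus control): {effect:+.1f} victims per site")
```

Because assignment is random, a difference between the groups can, with enough sites, be attributed to the intervention rather than to preexisting differences among locations; this is the logic the panelists borrowed from public health and criminal justice.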
For example, a randomized study was done of brothels in Thailand to obtain information on the HIV/AIDS rates of prostitutes. To address weaknesses in project design that impede monitoring and evaluation, such as projects with very broad, high-level objectives, panelists made the following recommendations:

Develop a logic framework. Given the weaknesses of some antitrafficking projects that have overly broad objectives across a diverse range of activities, panelists suggested that officials design projects with a logic framework that has clear objectives and that they narrow the focus of interventions. They also recommended that officials design projects that clearly link activities to intended outcomes, identify measurable indicators, and establish procedures for setting and modifying targets. Measurable indicators with mutually agreed-upon targets allow project officials to assess how a project is achieving its overall goals and objectives. For example, a Labor-funded project implemented by ILO in Mexico includes a logic framework that links project activities and outputs to outcome objectives and includes clear indicators and means of verification. The project's overall goal of eliminating the commercial sexual exploitation of children in Mexico links to four immediate objectives that, in turn, link back to project activities and outputs. One objective—that at least 300 child victims of sexual exploitation or at-risk children and their families receive assistance—is linked to 28 activities and 8 outputs, such as increasing families' employment opportunities. These 28 activities include disseminating employment promotion programs, organizing employment training, and monitoring and analyzing the impact of these dissemination and training efforts. Panelists emphasized that donors and implementers should agree on the project's logic framework during the design phase.

Determine whether a project is ready to be evaluated. Given the significant variance in project duration and funding levels, panelists emphasized that evaluators would not be able to evaluate all existing projects but should first determine which projects are ready to be evaluated. In conducting such "evaluability assessments," evaluators determine, among other things, whether (1) the project is large enough, has sufficient resources, and has been implemented long enough to make an impact; (2) the project is reaching its target population; (3) project documents specify and clearly link objectives, goals, and activities; and (4) sufficient information exists to determine impact. For example, larger, long-term projects are more likely to have an impact and, thus, may be better candidates for evaluation than smaller, short-term projects. Because larger antitrafficking projects generally include a diverse range of interventions, panelists suggested narrowing the evaluation to focus on discrete interventions or aspects of the project.

Build monitoring and evaluation into project design. Given that evaluation is not generally considered during design, panelists emphasized that project officials should consider how the project will be evaluated before the project is implemented. Most importantly, organizations need to define the project's intended impact and how that impact will be measured. To do this, organizations would have to determine the project beneficiaries and to which group they would be compared, the data they would have to collect, and how they would analyze those data.
Management would not only need to collect data during implementation to monitor whether the project works according to plan, but also collect data before and after implementation to determine what works best. The United States has played an important role in combating global human trafficking and spurring other governments to increase their efforts in doing so. As organizations around the world increasingly collaborate in combating trafficking, their ultimate success will depend on the extent to which they are able to overcome difficult challenges, such as varying levels of government commitment and capacity, that have impeded collaboration in the past. More than 7 years after the passage of the UN protocol, little is known about which interventions have been the most effective in preventing human trafficking, protecting victims, and prosecuting traffickers. The United States and other governments, international organizations, and NGOs continue their efforts to fight trafficking, but there is little information available to inform their decisions about project implementation and selection. Although antitrafficking projects contain some important elements for monitoring, they often lack other important elements for measuring performance on a real-time basis, such as targets. Such elements are critical in monitoring project performance to determine whether interventions are being implemented as expected, or whether they need to be changed to better combat human trafficking. Evaluation is also important in determining whether antitrafficking projects have been effective. However, few impact evaluations have been completed due to the difficulties involved. As a result, little is known about the impact of antitrafficking interventions. Given the grave personal suffering of victims and the negative impacts on society that human trafficking creates, strengthening collaboration, monitoring performance, and evaluating impact are important to ensure that organizations fund antitrafficking interventions with the greatest impact, where they are most needed, and through the effective and efficient use of resources. We recommend that the Secretaries of State and Labor and the Administrator of USAID improve the monitoring and evaluation of their projects to combat global human trafficking by considering the following actions, where appropriate: 1. Improve information about project impact on the nature and severity of human trafficking, including developing better data about the incidence of trafficking at the project level and applying rigorous evaluation methodologies. 2. Address monitoring and evaluation weaknesses in the design of antitrafficking projects by developing a framework that clearly links activities with project-level goals, indicators, and targets; conducting “evaluability assessments” to determine whether a project is ready to be evaluated; and building monitoring and evaluation into project design before the project is implemented. We are addressing our recommendations to State, Labor, and USAID because they provided the most U.S. funding for projects to combat global human trafficking. We requested comments on a draft of this report from the Secretaries of State, Justice, Health and Human Services, Homeland Security, and Labor; the Administrator of USAID; and cognizant officials at UNODC, IOM, ILO, OAS, OSCE, the World Bank, IDB, and ADB or their designees. We received written comments from State, Labor, USAID, and HHS, which are reprinted in appendixes IV, V, VI, and VII along with our responses to specific points.
In its comment letter, State noted that it will implement the recommendations in the report in ways that are relevant and appropriate to the mandates of the State offices working on antitrafficking efforts. State further noted that the recommendations are entirely consistent with the department’s current activities and direction. State also generally agreed that the antitrafficking field is well-served by more information about the nature and severity of human trafficking and recognized that effective project design is critical to successful project implementation and program monitoring and can lay the foundation for evaluation. However, State disagreed with our finding that monitoring is limited, stating that monitoring is in place at the department and improving. While we recognize that State and other U.S. agencies have certain elements of monitoring in place, we report that they lack others. For example, we found that for the 23 antitrafficking projects we reviewed, the majority do not have a logic model that clearly explains how activities are linked to project goals. In addition, the majority of these projects do not specify targets that will establish benchmarks for measuring performance. State also emphasized that many of its projects are designed to be of more limited size, scope, and duration than those of other agencies, such as Labor. State further noted that these projects of limited duration are worthy of funding, but are not necessarily appropriate for evaluation. In the report, we state that the impact of a shorter-term, smaller-scale intervention may be difficult to attribute and quantify. Labor commented that the report provides a good overall assessment of international cooperation and the need to enhance collaboration among key agencies and governments regarding antitrafficking efforts. Labor also commented that the report highlights important areas for improving monitoring and evaluation of U.S.-funded antitrafficking programs. However, Labor stated that the report does not fully reflect efforts particular federal agencies are taking in the monitoring and evaluation of antitrafficking projects. As an example, Labor stated that it uses several mechanisms, including audits and process evaluations, in its monitoring and oversight of international technical assistance projects, including antitrafficking projects, to ensure that U.S. funds lead to planned outputs and results. In response, we made additions or revisions to the text to further clarify Labor’s monitoring and evaluation efforts. We believe the overall monitoring of antitrafficking projects is limited because the projects funded by the other five agencies did not have the elements of monitoring we found in Labor’s projects. USAID said it appreciates the thoughtfulness of GAO’s report. USAID also commented that it is concerned with the challenges of coordination, monitoring, and evaluation, and that while it has made considerable efforts to coordinate within the U.S. government and with other organizations, it will continue to work within the interagency process in Washington and in the field. USAID also agrees that monitoring and evaluation of its antitrafficking efforts are very important and that they rely on the availability of both human and financial resources. USAID further agrees that the issues inherent in the evaluation of antitrafficking activities are particularly challenging because there is no baseline against which to measure progress. 
HHS said the report is a sound document that substantively covers the wide range of programs and services available to combat human trafficking. We also received technical comments from Justice and DHS, as well as from UNODC, IOM, ILO, OAS, OSCE, the World Bank, IDB, and ADB, which we have incorporated in the report as appropriate. We are sending copies of this report to interested congressional committees; the Secretaries of State, Justice, Health and Human Services, Homeland Security, and Labor; the Administrator of USAID; ILO; IOM; and UNODC. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9601. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Our objectives were to examine (1) collaboration among organizations involved in international antitrafficking efforts, (2) U.S. government agencies’ monitoring of antitrafficking projects and difficulties in evaluating these projects, and (3) suggestions for strengthening monitoring and evaluation. To examine collaboration among organizations involved in international antitrafficking efforts, we reviewed relevant planning, funding, and project documents on human trafficking from the Departments of State, Justice, Labor, Homeland Security, and Health and Human Services and the U.S. Agency for International Development (USAID). We reviewed planning and project documents from relevant United Nations (UN) and other international agencies and offices, including the UN Office on Drugs and Crime (UNODC), International Labor Organization (ILO), International Organization for Migration (IOM), United Nations Children’s Fund (UNICEF), and UN High Commissioner for Refugees (UNHCR). We also reviewed UN reports and resolutions that address coordination as well as documents from regional organizations, such as the Organization for Security and Cooperation in Europe (OSCE), the Coordinated Mekong Ministerial Initiative Against Trafficking (COMMIT), and the Regional Conference on Migration. In addition, we reviewed documents from and discussed international antitrafficking efforts with officials from the above agencies and organizations as well as officials from host government, donor government, and nongovernmental organizations (NGO), in Washington, D.C., and during our fieldwork in Indonesia, Thailand, and Mexico. We also reviewed documents describing these countries’ national mechanisms to combat trafficking, including legislation and action plans, where applicable. We selected Thailand because it is the country with the largest number of international organizations working to combat human trafficking. We selected Indonesia and Mexico because they receive a large amount of U.S. funding for international antitrafficking projects and have a relatively large number of U.S. government agencies and international organizations working in each country. All 3 countries are origin, transit, and destination countries for human trafficking victims and have projects addressing prevention, protection, and prosecution. To examine organizations’ monitoring and evaluation of antitrafficking programs, we reviewed documentation from 23 projects in the 3 countries we visited and illustrative projects from other countries. 
We worked with U.S. agency officials in Washington and in the field to identify U.S.-funded antitrafficking projects in Indonesia, Thailand, and Mexico that were ongoing during the time of our review (August 2006 to June 2007). We requested project documents—including proposals, grant agreements, and progress reports—from these U.S. officials for projects identified in these countries. In response to this request, we received documents for 23 antitrafficking projects that were funded or implemented by 6 U.S. agencies involved in international antitrafficking efforts. We examined the project documents for the following elements: statement of goals or objectives, statement of activities, identification of indicators and targets, explanation of how targets were selected, and inclusion of a logic model or framework. While we recognize that these 23 projects may not be the complete universe of antitrafficking projects in these 3 countries, we consider these 23 projects sufficient for the purposes of our review. Although we cannot assume that the issues we identified exist across all projects, they nevertheless represent areas for improvement in monitoring antitrafficking projects. We also reviewed a set of 4 State Office to Monitor and Combat Trafficking in Persons (G/TIP) antitrafficking projects in India, Israel, Afghanistan, and Costa Rica. These projects were selected to illustrate some current monitoring and evaluation practices for existing projects in different parts of the world and different types of exploitation. They showed a variation in monitoring practices that would only increase with a larger sample of projects. Thus, the projects provided the background information and context needed to understand State G/TIP’s current efforts to standardize monitoring requirements. The reviewed documents for all projects included proposals, applications for assistance, project descriptions, strategy papers, concept papers, causal models, cooperative or grant agreements, and periodic and final reports. We also reviewed articles and books on monitoring, evaluation, and statistics, such as Sampling by Steven K. Thompson, Wiley Series in Probability and Mathematical Statistics (1992); Adaptive Sampling by Steven K. Thompson and George A. F. Seber, Wiley Series in Probability and Mathematical Statistics (1996); Better Evaluation for Evidence-Based Policy: Place Randomized Trials in Education, Criminology, Welfare, and Health by Robert Boruch, The Annals of the American Academy of Political and Social Science (2005), vol. 599, 6-18; Program Evaluation Methods: Measurement and Attribution of Program Results, Third Edition, Treasury Board of Canada Secretariat; and Monitoring and Evaluation: Some Tools, Methods and Approaches, the World Bank (2004). To examine suggestions for strengthening monitoring and evaluation, we worked with the National Academy of Sciences to organize a 2-day expert panel on challenges and alternative strategies for monitoring and evaluating the results of international antitrafficking programs and projects in April 2007. We invited the following groups of panel participants: Experts with broad-based, subject-area knowledge of human trafficking. Experts with specialized knowledge of monitoring and evaluation of programs aimed at hidden populations similar to trafficking victims, such as the homeless and irregular migrants. This group included experts with specific knowledge of baseline data estimation of such populations.
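One way to see what such baseline estimation involves is the hot-spot projection sketched below, based on the "hot spot" searches panelists suggested. The site counts and the assumed number of hot spots are hypothetical values used only for illustration.

```python
# Illustrative "hot spot" projection: intensively search a few known
# high-concentration sites, then extrapolate to the rest. All figures
# are invented for this sketch.
victims_found = {"site_1": 42, "site_2": 18, "site_3": 33}  # searched sites
total_hot_spots = 25  # hot spots identified in the region

avg_per_site = sum(victims_found.values()) / len(victims_found)  # 31 victims
estimate = avg_per_site * total_hot_spots
print(f"Rough estimate across known hot spots: {estimate:.0f} victims")  # 775

# The projection assumes searched sites resemble unsearched ones and says
# nothing about victims outside known hot spots; adaptive cluster or
# snowball sampling would be needed to reach that wider population.
```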
Panelists had backgrounds in academia, research, consulting, and project implementation in the field. Using a nominal group technique, panelists chose to focus on the intervention “safe return” as a starting point for discussion. Experts provided presentations and participated in a discussion and cross-fertilization of ideas. Using the same technique, panel members also ranked two topics in order of importance to the human trafficking field—estimating the number of trafficking victims and evaluability assessments. None of the panel members were compensated for their work on this project. The following experts participated in the panel:
Richard Berk, Professor of Criminology and Statistics, Department of Criminology, University of Pennsylvania
Robert Boruch, University Trustee Chair Professor, Graduate School of Education and Statistics Department, Wharton School, University of Pennsylvania
Mario Thomas Gaboury, Professor and Chair of Criminal Justice
Kristiina Kangaspunta, Chief, Anti-Human Trafficking Unit, United Nations Office on Drugs and Crime
Jonathan Martens, Counter-Trafficking Project Specialist, Counter-Trafficking Division, International Organization for Migration
Jeffrey S. Passel, Senior Research Associate, Pew Hispanic Center
Lisa Rende-Taylor, Technical Advisor to the United Nations Inter-Agency Project on Human Trafficking in the Greater Mekong Subregion
Peter Reuter, Professor, School of Public Policy, Department of Criminology, University of Maryland
W. Courtland Robinson, Assistant Professor, Center for Refugee and Disaster Response, Johns Hopkins Bloomberg School of Public Health
Debra Rog, Associate Director, Westat Corporation
Jane Nady Sigmon, Senior Coordinator for International Programs, Office to Monitor and Combat Trafficking in Persons, U.S. Department of State
We conducted our review from August 2006 to June 2007 in accordance with generally accepted government auditing standards. This appendix describes the general mission and antitrafficking activities of 15 international organizations that implement international antitrafficking projects. This appendix describes the sampling methods suggested by a GAO-convened panel of experts to estimate the number of human trafficking victims. The following are GAO’s comments on the Department of State letter dated July 13, 2007. 1. State said that the report’s title is not apt because program monitoring is in place and improving. We believe that monitoring is limited for the following reasons: for the 23 projects we reviewed, the majority did not have a logic model that clearly explains how activities are linked to project goals; the majority of these projects also did not specify targets that establish benchmarks for measuring performance; of the 10 projects that did specify targets, only 5 explained how targets were set; and, finally, State lacked written guidance for field-level oversight. 2. State noted that many of its projects are designed to be of more limited size, scope, and duration, and that such projects are not necessarily suitable for evaluation. State also added that larger and longer-term projects are more suitable candidates for impact evaluation than shorter projects. Finally, State commented that a relatively short time frame for projects is not a design weakness. In the report, we state that the impact of a shorter-term, smaller-scale intervention may be difficult to attribute and quantify. 3. State noted that baseline data are not required for all interventions. We disagree.
In the report, we state that baseline and target values of indicators are needed to assess project performance. For example, baselines for a victim assistance program could include the number of victims served or assisted, or the number of vocational training sessions held. Moreover, project-level estimates are needed for baselines by which to evaluate how effectively interventions are reducing trafficking. For this type of baseline, panelists suggested methods for baseline estimation, such as gathering data in trafficking hot spots, that would allow for more rigorous impact evaluations. The following are GAO’s comments on the Department of Labor letter dated July 13, 2007. 1. Labor noted that federal agencies are carrying out monitoring and evaluation activities or are making efforts to improve monitoring and evaluation of antitrafficking activities. Labor also provided details of its monitoring policies, procedures, and activities. We recognize that Labor has monitoring procedures and activities in place, and we revised text in several locations to provide further clarification on Labor’s activities. We clarified that all four Labor projects in Indonesia, Mexico, and Thailand clearly link goals and activities, specify targets, and explain how targets were set. We added language stating that Labor engaged ILO’s external auditor to conduct audits of a sample of ILO’s projects to eliminate child labor and contracted with a certified public accounting firm to conduct independent attestation engagements of its Education Initiative projects. We believe the overall monitoring of antitrafficking projects is limited because the projects funded by the other five agencies did not have the elements of monitoring we found in Labor’s projects. 2. Labor stated that it requires midterm and final evaluations for its technical assistance projects, and that, in some cases, it engages in longer-term follow-up studies. We revised the text to further clarify and recognize Labor’s evaluation requirements and activities. In addition, we added text specifying that Labor funds follow-up studies to document changes that have occurred to a sample of beneficiaries over time. 3. Labor agreed with the need to carry out more systematic impact evaluations, but also emphasized that process evaluations are an important prerequisite of impact evaluations. Labor further commented that randomized trials are not appropriate for all projects, and requested that we specify where agencies should conduct randomized trials and where other methods would be more appropriate. We agree that process evaluations are important and revised the text to further differentiate process evaluations from impact evaluations. We believe each agency should determine whether randomized trials are appropriate for the specific project they are evaluating. 4. We made changes to the draft of this report on the basis of Labor’s specific technical comments, where appropriate. Cheryl Goodman, Assistant Director; Jeremy Latimer; Christina Werth; Todd M. Anderson; Gergana Danailova-Trainor; Terry Richardson; and Debbie Chung made key contributions to this report. In addition, Elizabeth Curda, Bruce Kutnick, Mary Moutsos, Barbara Stolz, and Susanna Kuebler provided technical or legal assistance.

Human trafficking--a worldwide crime involving the exploitation of men, women, and children for others' financial gain--is a violation of human rights. Victims are often lured or abducted and forced to work in involuntary servitude. Since 2001, the U.S.
government has provided about $447 million to combat global human trafficking. As GAO previously reported, estimates of the number of trafficking victims are questionable. In this report, GAO examines (1) collaboration among organizations involved in international antitrafficking efforts, (2) U.S. government monitoring of antitrafficking projects and difficulties in evaluating these projects, and (3) suggestions for strengthening monitoring and evaluation. GAO analyzed agency documents; convened an expert panel; interviewed officials; and conducted fieldwork in Indonesia, Thailand, and Mexico. While governments, international organizations, and nongovernmental organizations have recognized the importance of collaborating and have established some coordination mechanisms and practices, they will need to overcome challenges that have impeded collaboration in the past for their efforts to be successful. In two of the three countries GAO visited, it found that host governments--which bear ultimate responsibility for combating trafficking within their borders--have passed national antitrafficking laws and enacted national action plans. However, organizations continue to face numerous challenges when collaborating to combat human trafficking, including varying levels of government commitment and capacity. For example, some governments treat foreign trafficking victims as illegal immigrants and deport rather than protect them. In addition, according to officials in two of the three countries GAO visited, the ministries responsible for coordinating antitrafficking efforts have limited authority and capacity. U.S. government-funded antitrafficking projects often lack some important elements that allow projects to be monitored, and little is known about project impact due to difficulties in conducting evaluations. Project documents GAO reviewed generally include monitoring elements, such as an overarching goal and related activities, but often lack other monitoring elements, such as targets for measuring performance. To oversee projects, State officials supplement their efforts with assistance from U.S. embassy staff, but have not established written guidance for oversight. Officials said that they are working to improve performance measures and develop monitoring guidance. Conducting impact evaluations of antitrafficking projects is difficult due to several factors, including questionable project-level estimates of the number of trafficking victims. These estimates are needed for baselines by which to evaluate how effectively specific interventions are reducing trafficking. Elements in the design of certain projects, such as objectives that are too broad, further impede evaluation. Because of these difficulties, few impact evaluations have been completed, and little is known about the impact of antitrafficking interventions. A GAO-convened panel of experts identified and discussed ways to address the factors that make it difficult to monitor and evaluate antitrafficking projects. Panelists' suggested approaches included improving information on the nature and severity of trafficking and addressing monitoring and evaluation in project design. To improve information on trafficking, panelists suggested methods that have been used to sample other hard-to-reach populations, including domestic violence victims. One suggested method is sampling of "hot spots"--an intensive search for victims in areas known to have high concentrations of victims. 
To address weaknesses in project design that impede monitoring and evaluation, panelists suggested that officials design projects that clearly link activities to intended outcomes, identify measurable indicators, and establish procedures for setting and modifying targets.
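A logic framework of this kind can be stated concretely. The following minimal sketch, loosely patterned on the ILO project in Mexico discussed in this report, records one objective with its indicator, baseline, and target; the field names and the monitored figure are illustrative assumptions, not project data.

```python
# A minimal logic-framework record of the kind panelists recommended.
# Field names and the monitored figure are illustrative, not actual data.
framework = {
    "goal": "Eliminate the commercial sexual exploitation of children",
    "objective": "Child victims or at-risk children and families assisted",
    "activities": ["Disseminate employment promotion programs",
                   "Organize employment training",
                   "Monitor impact of dissemination and training"],
    "indicator": "Number of children and families assisted",
    "baseline": 0,
    "target": 300,
}

def progress(fw, monitored_value):
    """Share of the agreed target achieved so far."""
    return (monitored_value - fw["baseline"]) / (fw["target"] - fw["baseline"])

print(f"Progress toward target: {progress(framework, 120):.0%}")  # 40%
```

Recording the baseline, indicator, and target alongside activities is what allows project officials to report performance against plan rather than activity counts alone.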
The federal government holds in trust about 55 million acres of land for tribes and individual Native Americans, most of it on or near reservations. Sixty percent of the 2 million Native Americans live on trust lands or in the surrounding counties. Reservations range in size from the Navajo Reservation, the largest, with about 17 million acres, to California’s small reservations, called rancherias, which comprise just a few acres. For a map showing the locations of some of the Indian reservations on which trust lands are located, see appendix II. There are two major ownership categories for land held in trust by the federal government for Native Americans: (1) tribal trust and (2) individual trust. Tribal trust lands are areas set aside and held in trust by the federal government for the use and benefit of tribes. Individual trust lands are areas set aside by tribes or, in some cases, by the federal government that are held in trust by the federal government for the use and benefit of individual Native Americans. Of the approximately 55 million acres of trust lands, about 45 million are tribal trust lands, and 10 million are individual trust lands. All trust lands are subject to federal restrictions against alienation and encumbrance. In general, land is privately held without restrictions and can be used as collateral for the repayment of a mortgage loan. However, Native American trust lands generally cannot be transferred to non-Native Americans, which prevents Native Americans from using trust lands as collateral for mortgage loans. Only individual trust lands can be transferred to non-Native Americans and then only with the consent of the Native American landowner and approval by the Secretary of the Interior or an authorized representative. With these approvals, individual Native American trust lands can be used as collateral for mortgage loans. Pervasive joblessness and low wages have led to high poverty rates among Native Americans living on reservations. Half of these Native Americans have incomes below the poverty line. Also, the latest information available shows that in 1991, the average unemployment rate on 30 reservations with populations of 3,000 or more was 46 percent, according to the Bureau of Indian Affairs (BIA). BIA estimated that in 1990 only 25 percent of employed Native Americans living on or near reservations earned $7,000 or more annually, compared with 75 percent of the general U.S. population. In addition, housing conditions on Native American trust lands are much worse than those in other areas of the country: 40 percent of Native Americans on trust lands live in overcrowded or physically inadequate housing, compared with 6 percent of the overall U.S. population. Federal agencies provide nearly all of the housing—both owner-occupied and rental—developed on Native American trust lands. Four federal agencies—the Department of Housing and Urban Development (HUD), the Department of Veterans Affairs (VA), BIA, and the Department of Agriculture’s Rural Housing Service (RHS)—provide housing assistance through grants, subsidies, and loan guarantees and insurance. HUD provides the largest amount of assistance. From fiscal year 1986 through fiscal year 1995, HUD provided $4.3 billion (constant 1995 dollars) for housing and community development in tribal areas. Of this amount, HUD provided $3.9 billion to approximately 189 Indian housing authorities to develop and maintain affordable housing and to assist low-income renters. 
The authorities used those funds to construct over 24,000 single-family homes, to operate and maintain existing housing, and to encourage other development. Over the decade, HUD also provided direct block grants totaling more than $424 million to eligible tribes for community development and mortgage assistance. Appendix III contains a more detailed description of federal programs providing homeownership and rental assistance specifically to Native Americans. HUD’s Federal Housing Administration (FHA) and VA also operate programs that provide lenders with guarantees and insurance on personal property loans made to Native Americans for manufactured homes. According to the 1990 Census, 14 percent of Native American households on reservations lived in manufactured homes. The corresponding rate for all households in the United States was 7 percent and for Native American households not on reservations, 12 percent. Manufactured homes are primarily purchased with personal property loans, which may be easier to obtain than home purchase loans, especially for those who live in remote areas, have low incomes, or have inadequate credit histories. According to a 1995 Manufactured Housing Institute survey of lenders making manufactured home loans, about 92 percent of the loans were for the homes only, while 8 percent financed both the home and the land. In addition to the 91 conventional home purchase loans we identified in our work, at least another 22 conventional loans were made to Native Americans for purchasing manufactured homes on trust lands over the 5-year period ending in calendar year 1996. Few Native Americans have purchased homes on trust lands by using private, conventional financing. During the 5-year period of calendar year 1992 through 1996, lenders made only 91 conventional home purchase loans to Native Americans on trust lands. At our request, BIA surveyed all 83 of its Agency Offices in the continental United States to obtain their best estimates of the number of conventional home purchase loans made by private lenders on tribal and individual trust lands. Eight lenders in five states (Michigan, Montana, North Dakota, Washington, and Wisconsin) made the 91 loans to members of eight tribes. Three lenders in two states, Washington and Wisconsin, made 80 of the 91 loans to members of two tribes—the Tulalips and the Oneidas. All eight lenders have held the loans in their portfolios and have not sold them in the secondary mortgage market. Officials of three of the eight lenders told us they are large or medium-sized regional lenders, while officials from the other five told us they are small community lenders. Home purchase loans of any type made to Native Americans on trust lands, not just conventional home purchase loans, have been few in number. Even when home purchase loans can be nearly fully guaranteed or insured by HUD against loss, lenders have made few loans to Native Americans on trust lands. For example, HUD operates two mortgage guarantee and insurance programs specifically to foster Native American homeownership; but, as of September 30, 1997, lenders had made only 128 loans on trust lands since the inception of these programs in 1983 and 1995. Since the early 1980s, many studies and reports have documented the legal, social, and geographical barriers to financing conventional home purchase loans for Native Americans on trust lands. The bibliography at the end of this report lists these studies and reports. 
We found that the barriers identified in past studies and reports still exist today. The most significant barriers are that lenders (1) are uncertain about whether they can foreclose on Native American trust lands to recover their loan funds; (2) have difficulty understanding the implications of the different types of land ownership because of the complex status of Native American trust lands; (3) are unfamiliar with the tribal courts in which litigation is conducted in the event of a foreclosure; and (4) are concerned about the absence of housing ordinances governing foreclosures in tribal communities. While some of these barriers also apply to home purchase loans guaranteed or insured by the federal government, lenders are generally not as concerned about their risk on such loans because the federal government protects them against losses. Appendix IV discusses two other barriers identified in various studies and reports: the low socioeconomic status of Native Americans living on trust lands and the remoteness of those lands. The primary barrier identified by the studies and reports we reviewed is the uncertainty lenders have about recovering the outstanding loan balance on a home on trust lands if the borrower defaults and a foreclosure results. This uncertainty is created by the inalienable status of trust lands, which can prevent individuals from using the land for loan collateral. For example, in May 1996, the Urban Institute reported that the primary legal obstacle lenders perceived in making mortgage loans to Native Americans on trust lands is the difficulty in recovering outstanding loan amounts in cases of default. Reports by the National Commission on American Indian, Alaska Native, and Native Hawaiian Housing in 1992 and the Presidential Commission on Indian Reservation Economies in 1984 also stated that lenders are concerned about loan security and their ability to reclaim assets in cases of foreclosure on trust lands. In addition, a 1983 BIA report on the obstacles to economic growth on Indian reservations pointed out lenders’ concern that their recourse may be limited in cases of home loan defaults. Land ownership within many Indian reservations is very complex. Land within the geographic boundaries of a reservation may be owned by the tribe; by individual Native Americans or non-Native Americans; and by the federal, state, or local governments. On many reservations, the different types of land ownership create a “checkerboard” pattern of ownership. As discussed previously, there are two major ownership categories for land held in trust by the federal government for Native Americans: tribal trust and individual trust. In addition, reservations can also include privately held lands, which do not have the same restrictions as trust lands. These types of land ownership create jurisdictional problems as each type is subject to different laws—frequently a significant source of uncertainty to private lenders in encumbering property. Trust lands’ ownership status is further complicated by the differences in the appropriate collateral for mortgages. Generally, lenders secure mortgage loans with ownership interests in real property or leaseholds. The trust lands’ ownership status—tribal or individual—determines whether home buyers can secure loans involving these lands by using ownership interests in the property or leaseholds.
Loans involving tribal trust lands can be secured by leasehold interests, but federal law generally prohibits a lender from obtaining an ownership interest in such lands. In an attempt to make it easier for Native Americans to finance homes on tribal trust lands, recent legislation increased the leasehold period from 25 to 50 years. For individual trust lands—lands given to individuals by a tribe or the federal government—lenders may secure the individual’s ownership interest in the property with the Native American landowner’s and BIA’s approval. Individual trust lands can lose their trust status in the event of a foreclosure under these conditions and leave Native American ownership. Generally, disputes involving housing foreclosure transactions between tribes and individual Native Americans and non-Native Americans are subject to the jurisdiction of tribal courts. State courts do not have jurisdiction over suits brought by Native and non-Native Americans on matters involving trust lands. Because of their unfamiliarity with tribal courts, lenders are usually reluctant to risk their capital if the only forum for litigation is tribal courts, according to a report by a Native American consulting firm. Although lenders may specify guidelines for repayment as a condition of mortgage loans, in most cases lenders must use tribal courts to enforce repayment requirements. Most lenders have little or no experience with tribal courts that have jurisdiction over foreclosure proceedings. Also, lenders are reluctant to press their claims in tribal courts for fear that tribal courts will not protect the property rights of non-Native Americans by according them due process of law, according to a report by the Presidential Commission on Indian Reservation Economies. Few tribes have enacted housing ordinances, and many have not defined foreclosure procedures, factors that make lenders hesitant to make conventional home purchase loans to Native Americans on trust lands, according to a draft report by the National American Indian Housing Council. Moreover, there are, for the most part, no laws or processes operating on trust lands governing how, or whether, lenders can take possession of collateral in the event of a foreclosure. Officials of one lender in the Northwest told us that formulating the housing ordinances necessary for lending on trust lands is time-consuming and costly. This lender has been working with a tribe to develop a housing ordinance for over a year. This effort was the impetus for the lender’s writing a model housing ordinance that includes provisions for foreclosures, evictions, and land access, among other provisions. Even with a model tribal housing ordinance, the officials expect the negotiations with other tribes to take considerable time because each tribe will want different provisions in its housing ordinance. Moreover, the officials stated that because each tribe’s interests and circumstances are different, a lending agreement formulated at one tribe is not necessarily transferable to another tribe. Although the barriers to conventional home purchase lending to Native Americans on trust lands are formidable, some lenders have found ways to overcome them. We found that the lenders that made the 91 conventional home purchase loans to Native Americans on trust lands during the 5-year period of calendar years 1992 through 1996 did so by creating special programs or using long-standing relationships with tribes and their members to facilitate lending. 
The special programs emphasized homeownership counseling and the negotiation of housing ordinances. In addition, some public and private organizations are developing initiatives that could simplify and may have some potential to increase conventional home purchase lending to Native Americans on trust lands. The eight lenders that made the 91 conventional home purchase loans to Native Americans on trust lands either created special lending programs or relied on long-standing relationships with tribes and tribe members as their assurance against potential foreclosures. These lenders told us that they initiated the activities that led to these loans because they recognized the critical housing needs of Native Americans or had a long history of providing many types of financial services to the tribes and their members. All eight lenders reported that they had not lowered their underwriting standards in making the 91 loans and that counseling borrowers on homeownership responsibilities was invaluable. Some lenders negotiated tribal housing ordinances addressing foreclosures, but others did not. Washington Mutual Bank, a large regional lender located in Seattle, Washington, is the largest home mortgage lender and one of the largest banks, in terms of assets, in the Pacific Northwest. Twenty tribes are located within the bank’s service areas in Washington, Oregon, and Idaho. While additional tribes are located within the bank’s service areas in Montana and Utah, bank officials told us that they are not providing services to these tribes because their locations are so remote. During the 5-year period ending in calendar year 1996, Washington Mutual Bank made nine conventional home purchase loans to Tulalip Tribe members on individual trust lands. Bank officials told us they initiated the lending program because they recognized the critical housing needs of the reservation-based Native Americans in their service areas. According to bank officials, Tulalip Tribe members did not understand that establishing a history of financial relationships and the prudent use of credit was required to qualify for a home purchase loan. Moreover, bank officials found that the tribe’s members were often more comfortable obtaining this kind of information from other tribe members than from the bank’s representatives. The Tulalip Housing Authority has played an important role in educating tribe members on the bank’s home purchase loan requirements. For example, the authority has identified and counseled tribe members who potentially meet Washington Mutual Bank’s underwriting standards for conventional home purchase loans. According to bank officials, such assistance is invaluable because it acquaints the tribe’s members with homeownership requirements and responsibilities, prequalifies potential borrowers, and provides lenders with contact points in formulating agreements for conventional home purchase lending. Under the bank’s conventional home purchase lending program, tribes must establish housing ordinances that cover tribal foreclosure procedures, evictions, and land access rights. In addition, the housing ordinances must contain provisions for the bank to have the first opportunity to recover assets in cases of foreclosure. Moreover, the bank requires that housing ordinances contain no land sale restrictions should foreclosures occur on individual trust lands. The bank and Tulalip Tribe officials negotiated a housing ordinance that contains these provisions for the conventional loans the bank has made. 
While no foreclosures have occurred on the nine loans made to Tulalip Tribe members, Washington Mutual Bank officials told us they would make a concerted effort to provide the tribe with the first right of purchase before instituting a foreclosure. For the nine loans made to Tulalip Tribe members, the bank used its standard underwriting criteria and made the loans at the current fixed or adjustable interest rates. The bank provided loans for 90 percent of the value of each home, and the tribe members obtaining the loans made down payments of 10 percent. The process for approving and closing conventional home purchase loans involves not only bank officials and the individual borrowers, but also tribal and federal government officials. A flow chart detailing Washington Mutual Bank’s process for making conventional home purchase loans on Native American trust lands is in appendix V. Bank officials told us that this process is much more time-consuming than that for conventional home purchase loans involving privately held lands and substantially reduces loan volume. Nevertheless, Washington Mutual Bank is preparing to offer conventional home purchase loans to the members of a second tribe, the Lummi. According to bank officials, Lummi Tribe members are interested in passing housing ordinances that will enable them to use the bank’s home loan programs on individual trust lands. Associated Bank of Green Bay is a large regional lender located in Green Bay, Wisconsin. The bank’s service area consists of five counties in northeastern Wisconsin. The Oneidas are the only tribe in the bank’s service area. Associated Bank made 56 conventional home purchase loans on tribal and individual trust lands to members of the Oneida Tribe during the 5-year period ending in calendar year 1996. Bank officials told us they initiated conventional home purchase lending for the Oneida Tribe because the bank had a long history of providing many types of services to the tribe and its members. Moreover, the officials stated that they were aware that a market for conventional home purchase loans existed among the Oneidas. The Oneida Tribe has provided homeownership and credit counseling for its members that, according to bank officials, was very beneficial for both the tribe’s members and for the bank because it prepared the borrowers well for homeownership responsibilities. The 56 conventional home purchase loans made to Oneida Tribe members on tribal and individual trust lands were 1-, 3-, 5-, or 7-year adjustable rate mortgages. Associated Bank provided financing for 80 percent of the value of the homes. To help with down payments, the Oneida Tribe provided borrowers with low-interest loans of up to 20 percent of the value of the homes. Associated Bank officials said they did not modify their underwriting standards in making the home purchase loans to the Oneida Tribe members. Before initiating any conventional home purchase lending, Associated Bank officials reviewed the Oneidas’ tribal housing ordinance and found it acceptable for lending. Bank officials told us the Oneida Tribe has the first option to purchase property should foreclosures occur. First Heritage Bank is a small community bank located in Marysville, Washington. The bank’s lending area consists of Snohomish County, Washington. Two tribes, the Tulalip and the Stillaguamish, are located in the bank’s service area. 
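The financing terms reported in this section reduce to simple loan-to-value arithmetic: the lender finances a fixed share of the home's value and the borrower supplies the rest as a down payment. A minimal sketch, using a hypothetical home price and interest rate rather than any lender's actual figures, works the numbers for the 90, 80, and 75 percent terms that appear here (First Heritage Bank's 75 percent loans are described below).

```python
# Loan-to-value arithmetic behind the financing terms described in this
# section (90, 80, and 75 percent of home value); the home price and
# interest rate are hypothetical, not figures from the lenders.
home_value = 120_000
annual_rate = 0.075
months = 30 * 12
r = annual_rate / 12

for ltv in (0.90, 0.80, 0.75):
    loan = home_value * ltv
    down_payment = home_value - loan
    # Standard fixed-rate amortization: payment = P * r / (1 - (1 + r)**-n)
    payment = loan * r / (1 - (1 + r) ** -months)
    print(f"LTV {ltv:.0%}: down payment ${down_payment:,.0f}, "
          f"monthly payment ${payment:,.2f}")  # e.g., about $755 at 90% LTV
```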
During the 5-year period ending in calendar year 1996, First Heritage Bank made 16 conventional home purchase loans on individual trust lands to Tulalip Tribe members. Because of the bank’s long-standing personal and business relationships with the tribe and its members, it made these loans without negotiating a housing ordinance. Bank officials told us that they have provided many types of banking services, such as savings and checking accounts and business and consumer loans to the Tulalip Tribe and its members for many years. Should a foreclosure occur, the land, since it is individual trust land, would transfer out of trust and would then be sold to any qualified buyer, according to bank officials. First Heritage Bank’s 16 conventional home purchase loans were for 75 percent of the value of the homes. Bank officials told us that the borrowers usually provided down payments of 25 percent. Moreover, the bank accepted the tribe member’s equity in individual trust land when a borrower could not provide a down payment. The bank did not modify its underwriting standards in making home purchase loans to Tulalip Tribe members, according to officials. Each of the remaining five small or medium-sized community lenders made four or fewer conventional home purchase loans to Native Americans on individual trust lands during the 5-year period ending in calendar year 1996. Four of the five lenders made the loans because of long-standing relationships with the tribes and their members. For example, First State Bank of Rolla, a small community bank located in Rolla, North Dakota, made four conventional home purchase loans to members of the Turtle Mountain Chippewa Tribe on individual trust lands. An official told us that the bank has provided a variety of services to members of the tribe over many years. This official also said that because of this long-standing relationship, the bank made the loans in spite of the lack of foreclosure provisions in the tribal housing ordinance. Bank officials are, however, negotiating foreclosure provisions with the tribe. Currently, bank and tribe officials have an understanding that should a foreclosure occur, another Turtle Mountain Chippewa Tribe member would be likely to have the first option to purchase the property, according to a bank official. Some federal agencies, public and private institutions, and nonprofit organizations are beginning to direct some of their financial resources and housing expertise to expanding opportunities for Native Americans to buy homes. While some of these efforts address the broader issue of Native Americans’ access to credit of all kinds, officials from these organizations share a common belief that improving access to credit will enhance the ability of Native Americans to purchase homes without government assistance. They also believe that privately owned housing is a likely source of economic growth for Native Americans living on reservations. The Federal National Mortgage Association’s (Fannie Mae) Native American lending initiatives are part of the organization’s commitment to invest $1 trillion in affordable and decent housing for low- and moderate-income American families. Fannie Mae has not set a specific funding level for its investments on Native American trust lands. Under its initiatives, Fannie Mae accepts tribes’ resale restrictions and tribal jurisdiction over mortgage lending that helps to preserve the trust status of Native American lands. 
Fannie Mae’s lending initiatives for Native Americans involve both conventional and federally supported mortgage loans. The conventional loan effort for Native Americans began in 1994 when Fannie Mae formed a task force to assess the business and legal risks associated with conventional lending on trust lands. Fannie Mae developed standard loan documents and agreements for conventional lending on trust lands and negotiated special transactions with tribes. In 1996, Fannie Mae began approving tribes for conventional lending under agreements with private mortgage and title insurers to provide their services on trust lands. The Navajos were the first tribe approved under Fannie Mae’s conventional lending initiative and the first tribe to be approved for all of Fannie Mae’s Native American initiatives. While no conventional loans had closed for members of the Navajo Tribe as of November 17, 1997, Fannie Mae was working on loans with its lender partners. Fannie Mae also had approved the Cochiti Pueblo and the Fort Mojave Tribe for conventional lending on trust lands and was reviewing requests from other tribes interested in making conventional lending available to their members. Fannie Mae’s lending initiatives for Native Americans involving federally supported mortgage loans began in 1995 when it approved loans for purchase made under HUD’s guarantee and insurance programs on trust lands. Fannie Mae also entered into a partnership with the U.S. Department of Agriculture’s Rural Housing Service (RHS) to create a pilot program under which RHS guarantees mortgage loans on Indian reservations and tribal trust lands. Also, in 1997 Fannie Mae issued the first mortgage-backed security to be backed 100 percent by loans to Native Americans and approved its first product for Native Hawaiian homelands. A more detailed description of RHS’ and Fannie Mae’s efforts is in appendix III. The Navajo Partnership for Housing, Inc., is a partnership of residents, tribal and nontribal government representatives, and the business community that, among other things, creates opportunities for working Navajo families with moderate to high incomes to own their homes. According to the Executive Director of the partnership, the Navajos will need about 20,000 additional housing units by the year 2000. The Director also told us that Navajo Tribe members need conventional home purchase lending because traditional HUD housing programs are not fully addressing their needs. The Director added that conventional home purchase lending is important for tribal economic development because equity in homes can provide a source of capital for business and job creation on the reservation. The partnership’s initial goals are to (1) develop a guide that describes the home purchase lending process and homeownership responsibilities, (2) counsel 300 families in preparation for homeownership, (3) assist in the development of 150 housing units, and (4) attract over $10 million in private capital for homeownership. The guide will describe the processes required for approving and closing home purchase loans, including credit reviews, and leasehold and title clearances. In addition, the partnership plans to identify potential borrowers for conventional home purchase loans and to provide home buyer and credit counseling to all interested Navajo families. 
The partnership’s Executive Director told us that counseling is very important because many Navajos have no experience with getting home loans or dealing with private lenders and will be first-generation home buyers. The importance of counseling for Navajo Tribe members became evident in 1996 when the partnership counseled 800 families to determine their eligibility for and interest in obtaining conventional home purchase loans. Of these 800 families, the partnership has been working with 70 families interested in homeownership, but only 1 of the families could be financially prequalified for a conventional home purchase loan. The partnership is counseling the other families to help them resolve financial and other problems so that they can obtain home purchase loans. To address lenders’ concerns about potential loan defaults, the partnership has arranged for lenders to contact the partnership before borrowers become significantly delinquent. The partnership plans to counsel delinquent borrowers in an effort to make the loans current and avoid foreclosure. According to the partnership’s Executive Director, the lenders are pleased with this arrangement because it will save them money in servicing the loans. Also, he told us many tribe members are more satisfied with being counseled by the partnership than by a lender’s representative. The Office of the Comptroller of the Currency (OCC) and the Federal Home Loan Bank (FHLBank) System have efforts under way to address the broader issue of Native Americans’ access to financing and capital. These efforts, if successful, could expand conventional home purchase lending for Native Americans on trust lands. In 1994, OCC launched a three-part strategy to improve financial services for Native Americans on trust lands that encompasses vigorous enforcement of the federal fair lending and community reinvestment statutes to eliminate discrimination and promote opportunities for Native Americans; the creation of partnerships among lenders, tribal governments, and community organizations to promote information-sharing and the development of innovative solutions to the financial services problems of Native Americans; and educational efforts to help lenders and Native Americans understand and address the unique set of legal and cultural complexities that make lending on trust lands more challenging than lending in other low- and moderate-income communities. OCC, along with the other federal financial supervisory agencies, revised the Community Reinvestment Act (CRA) regulations to specifically inform banks that lending, investing, and providing banking services to Native Americans on trust lands will receive favorable regulatory consideration. The act was designed to encourage banks to provide credit to their entire market areas, including low- and moderate-income areas. It requires federal bank and thrift regulators to evaluate, during periodic examinations, the extent to which banks are fulfilling their lending, investment, and service responsibilities in their areas. On the basis of these assessments, the regulators assign the banks overall ratings, ranging from outstanding to substantial noncompliance. An institution’s CRA rating may affect approval by the regulators of certain types of applications and the public’s perception of the institution. The regulators are required to take a depository institution’s CRA rating into account when considering applications for expansions, such as mergers and acquisitions.
Through its Affordable Housing Program and its Community Investment Program, the FHLBank System can help support a variety of low-income housing initiatives, including those on Native American trust lands. The Affordable Housing Program provides direct subsidies or reduced-rate loans to financial institutions to help them support the development of owner-occupied or rental housing that is affordable to households with incomes below 80 percent of the area median. Lenders pass on the subsidies to developers of affordable housing, such as Indian housing authorities, tribal councils, or community development corporations. The Community Investment Program provides long-term mortgage funds essentially at cost to lenders to facilitate homeownership for households with incomes below 115 percent of the area median. Both programs, which are paid for by the FHLBank System’s earnings, can be used with federal guarantee or insurance programs to support a private lender’s home purchase lending. In addition, a tribal housing corporation, Indian housing authority, or tribally designated housing entity may become a nonmember mortgagee of the particular FHLBank serving the area and become eligible for advances (loans) directly from the FHLBank. In 1996, approximately $777,000 in funds from the Affordable Housing Program were made available to benefit Native Americans. The extent to which the Native American Housing Assistance and Self-Determination Act of 1996 (NAHASDA) will increase conventional home purchase lending for Native Americans on trust lands is uncertain. The act contains provisions that allow tribes to leverage grant funds and to extend land lease terms from 25 to 50 years. However, whether tribes can or will use leveraged funds to encourage conventional home purchase lending on trust lands is uncertain, and many tribes’ land lease terms already exceeded 25 years. A major objective of NAHASDA, which became effective October 1, 1997, is to promote the development of private capital markets and to allow those markets to operate and grow. To accomplish this objective, NAHASDA authorizes HUD to make block grants to tribes that submit housing plans that comply with the program’s requirements. Tribes that receive block grant funds will be able to leverage some of the funds by creating partnerships with private lenders for the acquisition, new construction, reconstruction, or rehabilitation of affordable housing. However, NAHASDA does not require tribes to use leveraged grant funds to encourage conventional home purchase lending on trust lands. The Acting Deputy Assistant Secretary for HUD’s Office of Native American Programs told us that while NAHASDA will provide additional housing to Native Americans, it is difficult to predict how tribes will use leveraged funds or whether tribes’ use of such funds could result in expanded conventional home purchase lending on trust lands. In addition, NAHASDA amended federal law to allow for a lease term on tribal trust lands of up to 50 years. Previously, many tribes had 25-year leasing authority, which, with BIA’s approval, could have been renewed for another 25 years. The Comptroller of the Currency, in its July 1997 Guide to Mortgage Lending in Indian Country, stated that there were some concerns that the former 25-year limit may have discouraged some financial institutions from extending home purchase loans on trust lands, since many loans carry a 30-year term.
But whether this provision will encourage more conventional home purchase lending on Native American trust lands is questionable, according to the Acting Deputy Assistant Secretary, because many tribes already had lease terms that either extended well beyond 25 years or were for 25 years with an automatic 25-year extension. The Navajo Tribe, for example, gives its members 65-year leases. The Acting Deputy Assistant Secretary said that she was not aware of any situation in which a private lender had not made a home purchase loan on Native American trust lands because land lease terms expired after 25 years. She told us that HUD expected to issue regulations implementing NAHASDA in early 1998. Privately supported opportunities for Native Americans to own homes on trust lands are limited. If private mortgage lenders made more conventional home purchase loans to Native Americans on trust lands, they could help expand homeownership opportunities and reduce the burden on government agencies to design, administer, and finance special homeownership programs for these Americans. However, formidable barriers exist, such as limitations on the use of trust lands as collateral. Nevertheless, homeownership initiatives undertaken by a few private lenders and some public and private organizations demonstrate that there is some potential for overcoming these barriers. While these efforts are noteworthy, conventional home purchase loans are unlikely to become a major source of financing for Native Americans on trust lands. Even if the barriers to conventional home purchase lending are eliminated, the economic status of many Native Americans on trust lands may preclude them from qualifying for these loans. The small number of home purchase loans made to Native Americans on trust lands, even when private lenders are protected against most losses by federal mortgage guarantee and insurance programs, illustrates the limited potential for conventional home purchase loans for Native Americans on trust lands. We provided the departments of the Interior and Housing and Urban Development with a draft of this report for their review and comment. We received written comments on the draft report from Interior. (See app. VI.) In addition, the Department of Housing and Urban Development’s Acting Deputy Assistant Secretary for Native American Programs provided us with two changes that clarified information contained in the report, which we incorporated. We also discussed applicable sections of this report with officials of the Federal National Mortgage Association; the Navajo Partnership for Housing, Inc.; Washington Mutual Bank; Associated Bank of Green Bay; First Heritage Bank; First State Bank of Rolla; the Office of the Comptroller of the Currency; the Federal Housing Finance Board; the Rural Housing Service; and the Department of Veterans Affairs. We incorporated these organizations’ clarifying comments into the report where appropriate. Interior expressed concern that our conclusions placed too much emphasis on the “trust” status of lands and commented that there are federal agencies that routinely make loans on trust lands and that three of the four barriers to conventional home purchase loans cited in our report are not related to the trust status of the lands.
Specifically, Interior stated that the limitations discussed in our report are the concerns of private mortgage lenders and that federal agencies, such as the Department of Agriculture’s Farm Service Agency, routinely make farm and ranch operations loans on trust lands. Interior also commented that the Farm Service Agency had many concerns about mortgages on trust lands but that it became knowledgeable of the process and now makes numerous loans. Private mortgage lenders need to do the same, Interior stated. Our assessment of the barriers to conventional home purchase financing focused on trust lands because of the Chairman of the Senate Committee on Indian Affairs’ interest in increasing homeownership opportunities for Native Americans on such lands. About 1.2 million Native Americans, or 60 percent of all Native Americans, live on trust lands or in the surrounding counties. Private mortgage lenders, as they become knowledgeable of the process of making home purchase loans on trust lands, may increase the number of such loans. However, our concern, as pointed out in our report, is whether conventional home purchase loans are likely to become a major source of financing for Native Americans on trust lands. We believe the small number of home purchase loans made to Native Americans on trust lands, even when private lenders are protected against most losses by federal mortgage guarantee and insurance programs, illustrates the limited potential for such conventional home purchase loans. Also, the farm and ranch operations loans made by the Farm Service Agency on trust lands differ in important ways from the loans that are the focus of our report—conventional home purchase loans. The Farm Service Agency’s loans are made or guaranteed by the federal government, which incurs all or most of the loss that may occur if the loans are not repaid. Conventional home purchase loans are made by private lenders without federal assistance, such as federal loan guarantees or insurance. Losses on these loans are absorbed by private lenders or other entities in the mortgage finance market. Regarding Interior’s comment that three of the four barriers cited in our report—(1) uncertainty about recovering funds in the event of a foreclosure, (2) unfamiliarity with the tribal court procedures associated with foreclosure, and (3) the absence of tribal housing ordinances—are not related to the trust status of the lands, we believe that it is in fact because of the trust status that these barriers arise. Generally, on privately held lands, lenders are not uncertain about whether they can recover outstanding loan balances if foreclosures occur. For example, if a borrower defaults and a foreclosure results, a lender has the ability to take possession of the collateral, which in many cases is the land and the improvements on that land. It should also be noted that our report discusses six barriers to conventional home purchase loans and not four as stated by Interior. Interior also provided us with changes that clarified information in the report on the transfer and definition of trust lands, the cost and time required to eliminate the backlog of requests for title documents, and the cost to examine titles by, and the number of land ownership interests under the jurisdiction of, the Aberdeen Land Titles and Records Office. We incorporated the clarifications in the report.
To determine the number of conventional home purchase loans made on Native American trust lands, we asked the Bureau of Indian Affairs (BIA) to survey its 83 Agency Offices in the continental United States to identify the number of such loans these offices had approved from calendar year 1992 through 1996. To identify the major barriers preventing conventional home purchase lending on Native American trust lands, we reviewed studies and reports from 1983 to the present. (See the bibliography for a list of these studies and reports.) We also visited and interviewed members of the Navajo Tribe and the Oneida Tribe of Indians of Wisconsin and lenders that made conventional home purchase loans on Native American trust lands. To document the efforts being made to facilitate conventional home purchase lending to Native Americans on trust lands, we analyzed BIA’s survey results and interviewed representatives of the Department of Housing and Urban Development (HUD), the Federal National Mortgage Association (Fannie Mae), the National American Indian Housing Council, tribes, and lending organizations to locate lenders that made such loans during the 5-year period ending in calendar year 1996. Moreover, we interviewed representatives from Fannie Mae, the Navajo Partnership for Housing, Inc., and other appropriate organizations to identify initiatives under way to facilitate conventional home purchase lending for Native Americans on trust lands. To learn whether implementing the Native American Housing Assistance and Self-Determination Act of 1996 will result in more conventional home purchase loans being made to Native Americans on trust lands, we reviewed the act; reviewed literature on the act; and interviewed representatives of HUD, lending organizations, and tribal organizations to gain their perspectives on the law. To determine whether BIA’s backlog of requests for certifying documents has deterred conventional home purchase lending to Native Americans on trust lands, we visited three of BIA’s five Land Titles and Records Offices in the continental United States: Aberdeen, South Dakota; Albuquerque, New Mexico; and Portland, Oregon. We visited these three offices because, among other things, they process most of the documents related to the status of trust lands. Appendix VII provides additional details on our scope and methodology. We conducted our review from April 1997 through January 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to other appropriate Senate and House committees; the Secretary of HUD; the Secretary of the Interior; the Commissioner of Indian Affairs, BIA; and the Director, Office of Management and Budget. We will make copies available to others on request. Please call me at (202) 512-7631 if you or your staff have any questions. Major contributors to this report are listed in appendix VIII. Before transactions that affect the status of trust lands, including conventional home purchase loans, can be legally completed, the Bureau of Indian Affairs (BIA) must issue a certified title status report for the land that, among other things, certifies current ownership. Although BIA currently has a backlog of requests for title documents that it estimates will take 113 staff years to eliminate, the backlog has not been a deterrent to conventional home purchase lending to Native Americans on trust lands, according to the lenders we interviewed. 
If the volume of conventional home purchase loans were to increase, however, BIA’s backlog could become a deterrent. The U.S. government holds title to Native American trust lands to prevent loss of these lands to individuals and to state and local governments. Thus, most trust lands cannot be encumbered, conveyed, taxed, or used as collateral for the repayment of a debt without the approval of the Secretary of the Interior. Within the Department of the Interior, BIA is responsible for protecting trust lands from “alienation,” that is, for preventing the transfer of the land’s ownership to non-Native Americans. Key components of BIA’s responsibilities are maintaining land ownership records and title documents and issuing title status reports to lenders certifying the legal description of tracts of trust lands. BIA also certifies current ownership, including any applicable conditions, exceptions, restrictions, or any encumbrances on record, and determines whether the lands are held in tribal or individual trust. All commercial businesses and financial institutions rely on certified title status reports (the official land titles) to develop, mortgage, or secure trust lands and resources. Thus, if the certified title is not accurate and up to date, a business or financial institution that is conducting business with tribes or Native Americans either will not enter into a transaction until the certified title can be obtained or will rely on an out-of-date or inaccurate title to its detriment. In the continental United States, BIA’s five Land Titles and Records Offices (LTRO)—and three smaller Land Service Offices—issue the certified title status reports lenders need before they make home purchase loans on trust lands. Despite a large backlog of requests for title documents, BIA’s certification process for trust lands has not been a deterrent to conventional home purchase lending. When asked about barriers to lending, the four lenders that made 82 of the 91 conventional home purchase loans on trust lands during the 5-year period ending in calendar year 1996 did not identify BIA’s issuance of certified title status reports as a barrier. During our visits to three of the five LTROs, we found that two of them made issuing certified title status reports for mortgages their highest priority. These LTROs—Aberdeen and Portland—processed certified title status report requests for conventional home purchase loans within a few days to a few weeks. The remaining LTRO we visited, Albuquerque, generally prioritized requests for certified title status reports by the dollar value of the transactions. We were unable to determine how long the Albuquerque LTRO took to process requests because no conventional home purchase lending activity occurred on trust lands within its jurisdiction over the 5-year period we studied. Few conventional home purchase loans have been made to Native Americans on trust lands. However, should the volume of requests for certified title status reports for these loans increase, BIA’s certification process could become a barrier because of the substantial backlog of requests for title documents that need to be reviewed or cleared before loans are approved. BIA estimated that eliminating the backlog that existed as of April 30, 1997, would cost the agency over $8 million and take over 113 staff years at current levels. Table I.1 shows BIA’s estimated cost and time to eliminate the processing backlog by function.
[Table I.1, which listed for each function the total backlog (documents) and the time to eliminate the backlog (staff years), is not reproduced here.] In September 1994 and subsequently in June 1996, we reported that BIA had a serious backlog in ownership determinations and record keeping that could have a significant impact on the accuracy of trust fund accounting data. Moreover, in our 1994 report we recommended that the Secretary of the Interior direct the Assistant Secretary for Indian Affairs to take immediate action to eliminate the backlog by reprogramming existing resources, hiring temporary employees, or contracting for services. In response to our recommendation, BIA issued a draft report on its Related Systems Improvement Project that recommended hiring contractors to bring various records, including land and ownership records, up to date. Because of budget reductions, however, BIA did not acquire the contracting services, nor did it hire additional staff. In an April 1997 strategic plan for implementing Indian trust fund reforms, Interior’s Special Trustee for American Indians called for the elimination of the backlog of title and ownership determinations and record keeping. Also in April 1997, the Deputy Commissioner for Indian Affairs told us that she had assembled a team of Land Titles and Records Officers to visit each LTRO. In August 1997, BIA headquarters’ Land Records Officer told us that the team had completed its visits and had determined the extent of, and reasons for, the backlog. BIA’s 1997 Government Performance and Results Act Strategic Plan states that BIA’s goal is to eliminate the backlog by the year 2002. BIA estimates that its backlog of title status reports will increase in the future because of fractionated land ownership on trust lands and the U.S. Supreme Court decision in Babbitt v. Youpee, 117 S. Ct. 727 (1997). Some Native American land ownership becomes fractionated as the ownership interests are passed on through several generations of multiple heirs, with more and more people coming to own smaller and smaller interests in the land over time. The Indian Land Consolidation Act, 25 U.S.C. 2201, et seq., as amended, among other things, attempted to reduce the extent of fractionation within a reservation’s boundaries. A key provision of that act, section 207, as amended, generally provided that if an individual Native American has an ownership interest of 2 percent or less in a tract of land, that interest transfers to the tribe upon the individual’s death, provided that (1) it is not willed to another owner in the same tract and (2) the interest is not capable of earning $100 or more in any of the 5 years following the individual’s death. BIA had estimated that legislation eliminating or consolidating fractionated ownership interests of 2 percent or less might eliminate or consolidate over half of the title records. In February 1992, we reported that land fractionation had continued to increase at a rapid pace. We pointed out that in the years since the Indian Land Consolidation Act’s enactment, the number of small ownership interests (2 percent or less) at the 12 reservations that we reviewed had more than doubled, from about 305,000 to over 620,000 records. In addition, in Babbitt v. Youpee, the U.S. Supreme Court held that the amended section 207 of the Indian Land Consolidation Act was unconstitutional. This 1997 decision will place a significant workload on BIA because the agency must modify probate court orders and title data transactions to implement the Court’s decision.
Administrative law judges across the country are issuing probate modification orders that direct BIA to distribute to the heirs at law thousands of real property interests that have been held in abeyance or previously distributed to the tribe involved. According to BIA, these probate orders will add to the business processing workloads at each LTRO and Agency Office nationwide. As of July 1997, BIA had not determined the costs and time that this additional processing effort will require. The Manager of the Aberdeen LTRO told us that fractionated land ownership is continuing to increase and is a major reason for the title status report backlog in his office’s jurisdiction. As fractionation increases, processing title status reports becomes more complex and time-consuming. An analysis by the Aberdeen LTRO of a particular land allotment it administers revealed, for instance, that the number of owners had increased from 1 in 1900 to 962 in 1990, while the number of documents affecting its chain-of-title increased from 1 to 306. The Aberdeen manager estimated that with current staff levels it would cost about $4,000 and take about 141 overtime hours to examine the title of this one allotment, starting from the original trust patent. He also told us that his office has over 1 million individual land ownership interests in its jurisdiction. The effects of BIA’s backlog on federal and local governments, commercial businesses, tribes, and individual tribe members are many, and parties generally face substantial liability should they rely on federal Indian land titles that are not up to date or accurate. The right to convey (e.g., through a deed or probate) or encumber (e.g., through a lease or right-of-way) interests in trust lands or resources is based on the title ownership in the lands or resources as determined and certified by BIA. Thus, the lack of up-to-date titles can result in (1) many land transactions being delayed until titles can be obtained, (2) the wrong people or the wrong quantum of interest being involved in the transactions, and (3) decisions to proceed with land transactions without obtaining the required up-to-date and certified land titles. Through eight programs operated by four different agencies, the federal government provided assistance totaling about $325 million in fiscal year 1997 for the construction, acquisition, and operation of housing for Native Americans. These programs include nonconventional (federally supported) home loan assistance programs and rental assistance programs. The Department of Housing and Urban Development (HUD), the Department of Veterans Affairs (VA), and the Department of Agriculture’s Rural Housing Service (RHS) operate housing programs specifically intended to help Native Americans become homeowners. HUD’s and RHS’ programs guarantee or insure private lenders against most losses, whereas VA’s program provides direct loans to Native Americans on trust lands. In addition, HUD, VA, and RHS operate other homeownership assistance programs that Native Americans can use. Moreover, broader population groups can also use these programs. Table III.1 shows the funding for homeownership assistance programs and the programs’ results for fiscal year 1997. A description of the federal programs that provide homeownership opportunities specifically for Native Americans follows.
Section 184 of the Housing and Community Development Act of 1992 authorizes HUD to operate an Indian Home Loan Guarantee Program to provide Native American families and Indian housing authorities access to sources of private financing that might otherwise not be available without a federal guarantee. Under the program, HUD guarantees loans made by private lenders to Native American families, tribes, or Indian housing authorities for constructing, acquiring, or rehabilitating single-family dwellings that are standard housing and are located on trust lands or lands under the jurisdiction of a tribe. The program aids families with incomes that exceed the limits for other assisted housing programs. The program can also indirectly benefit low- and moderate-income Native Americans. For example, HUD officials told us that when tribes or Indian housing authorities obtain Section 184 loans, they use the funds to construct rental homes for low- and moderate-income Native Americans. In fiscal year 1997, lenders closed 155 loans under the program. HUD’s Federal Housing Administration’s (FHA) Section 248 Mortgage Insurance Program was established in 1983. Under the program, FHA insures mortgage loans for Native Americans on trust lands whose higher incomes disqualify them from other federally subsidized housing programs available through the Indian housing authority. Fewer types of land are eligible for the Section 248 loans than for the Section 184 loans. For example, homes on privately held lands are not eligible for the Section 248 program. Section 248 insured loans are available for both new construction and existing properties. In fiscal year 1997, FHA insured 18 loans totaling $1.4 million. HUD is considering requesting that the Section 248 program be terminated because of the Office of Management and Budget’s interest in eliminating underutilized programs. VA guarantees home loans to all eligible veterans and makes direct loans to Native American veterans to purchase, construct, or rehabilitate homes on trust lands. VA’s guaranteed and direct loans are available to all eligible Native American veterans who meet credit and income requirements. In addition, the Native American Home Loan Program, a pilot program enacted in 1992, specifically targets Native American veterans on reservations. Native American veterans can borrow up to $80,000 to purchase, construct, or improve homes on tribal or individual trust lands. The tribe must sign a memorandum of understanding with VA verifying that foreclosure, lien, and eviction procedures have been enacted and establishing the jurisdiction of the tribal court. In fiscal year 1997, VA made 32 loans totaling $2.6 million. The Rural Housing Native American Pilot loan program was jointly developed by RHS and Fannie Mae and implemented in December 1995. Under the program, RHS guarantees home loans made by private lenders to individual Native Americans. The loans are for low- and moderate-income families who are first-time home buyers and who are located in rural areas on, among other types of property, tribal trust lands. Tribes must be approved by RHS and Fannie Mae to participate in the pilot before applicants are eligible for loans guaranteed by RHS. Fannie Mae must review the tribe’s laws to determine whether they provide adequate protection for mortgage lending. 
The tribe must enter into a memorandum of understanding with RHS and Fannie Mae to, among other things, ensure that lenders can enforce mortgage-related documents and can foreclose and evict through the tribal court if the need arises. Currently, 21 tribes in 11 states are eligible for program approval. Private lenders had not made any loans under the program as of December 4, 1997. HUD and BIA also operate rental assistance programs specifically for Native Americans on trust lands. HUD administers the most widely used programs, which provide funding for lease purchase and rental subsidy purposes. BIA provides grants to Native Americans for housing improvements. Table III.2 shows the funding for the rental assistance programs and the programs’ results for fiscal year 1997. In addition to the barriers discussed in the body of this report, two other barriers have made it difficult to attract private financing for Native Americans to buy homes on trust lands: (1) the low socioeconomic status of many Native Americans on trust lands and (2) the remoteness of the Native American trust lands. Low income levels, the lack of credit histories, and seasonal, unstable employment have meant that many Native Americans on trust lands are often unable to qualify for conventional home purchase loans. To finance a home, an individual must have an income adequate to assure the lender that the loan can be repaid. In May 1996, the Urban Institute reported that low and unstable incomes were rated as a major barrier to homeownership by about 85 percent of the Indian housing authority directors and tribal staff surveyed. According to the Executive Director of the Navajo Partnership for Housing, Inc., many Navajo Tribe members with incomes adequate for obtaining conventional home purchase loans may still have inadequate credit histories. This official added that many Navajos do not understand what type of credit history is necessary for obtaining a conventional home purchase loan. Moreover, this official told us, most Navajos do not have experience with private lenders and will be first-generation home buyers. As we reported in March 1997, the remoteness of some tribal lands has created significant problems for housing development. In contrast to metropolitan areas, where basic infrastructure systems (sewers, landfills, electricity, water supply and treatment, and paved roads) are already in place, remote tribal areas require a large capital investment to create these systems to support new housing. Where infrastructure does not exist, housing must be built that is self-contained. For example, much of the housing constructed on Navajo lands is scattered across remote sites. According to one builder, the cost to provide infrastructure to these homesites is over $20,000 per home. Moreover, housing built in such locations must include water cisterns and septic tanks. In addition, homes on the Navajo lands are mainly solar-powered. If housing is developed in subdivisions, the infrastructure costs are lower but still significant. For instance, at one particular housing development on the Navajo reservation—containing a mix of rental and privately owned units—$8 million is needed to develop the necessary infrastructure (i.e., water lines and sewer system connections). The following is GAO’s comment on the U.S. Department of the Interior letter dated January 5, 1998. 1.
Interior inadvertently refers to the Farm Service Agency here as the Farm Services Administration. To determine the number of conventional home purchase loans made on Native American trust lands, we asked BIA to survey its 83 Agency Offices in the continental United States to identify the number of such loans they had approved. We used this approach because (1) BIA did not have a centralized database containing information on home purchase loans on trust lands, (2) BIA officials were not confident that the 83 Agency Offices transferred all their data to the 12 Area Offices that oversee the Agency Offices, and (3) the Agency Offices’ mortgage information did not show whether a mortgage was for the purchase of a conventional or a manufactured home, or was for the refinancing of a home, or whether it was a business loan, or a federally guaranteed or conventional loan. At our request, BIA asked its Agency Offices to identify the number of conventional home mortgage loans approved from calendar year 1992 through 1996 on Native American trust lands. We interviewed the BIA officials who responded to verify that the loans identified were conventional home purchase loans on Native American trust lands and to identify the lenders involved. To identify the major barriers preventing conventional home purchase lending on Native American trust lands, we reviewed reports and studies from 1983 to the present. (See the bibliography for a list of these reports and studies.) We also visited and interviewed members of the Navajo Tribe and the Oneida Tribe of Indians of Wisconsin and lenders that made conventional home purchase loans on Native American trust lands. Finally, we interviewed HUD and BIA officials about the barriers preventing conventional home purchase lending on Native American trust lands. To document the efforts under way to facilitate conventional home purchase lending to Native Americans on trust lands, we used BIA’s survey results and interviewed representatives of HUD, Fannie Mae, the National American Indian Housing Council, tribes, and lending organizations to locate lenders that made conventional home purchase loans on Native American trust lands during the 5-year period ending in calendar year 1996. We interviewed and obtained information from the eight lenders on their processes and additional efforts involved in making the 91 loans. In addition, we obtained information from the eight lenders on the reasons for making loans, the barriers encountered, and the actions taken to overcome the barriers. Moreover, we interviewed representatives from Fannie Mae; the Federal Home Loan Mortgage Corporation; the Navajo Partnership for Housing, Inc.; the Office of the Comptroller of the Currency; HUD; the Community Development Financial Institutions Fund; the Housing Assistance Council; and the National American Indian Housing Council to identify the initiatives under way to facilitate conventional home purchase lending for Native Americans on trust lands. For each initiative identified, we obtained information on the program and the latest information available on the program’s outcomes. To learn whether implementing the Native American Housing Assistance and Self-Determination Act of 1996 will result in more conventional home purchase loans being made to Native Americans on trust lands, we reviewed the act; reviewed literature on the act; and interviewed representatives of HUD, lending organizations, and tribal organizations to gain their perspectives on the law.
To determine whether BIA’s backlog of requests for certifying documents has deterred conventional home purchase lending to Native Americans on trust lands, we visited three of BIA’s five Land Titles and Records Offices in the continental United States: Aberdeen, South Dakota; Albuquerque, New Mexico; and Portland, Oregon. We visited these three offices because they process most of the documents related to the status of trust lands. In addition, these offices administer trust responsibilities for most of the trust lands in the continental United States. We also interviewed lenders, tribe representatives, and BIA and HUD officials, and obtained documents on BIA’s land titles and records processes. Charles Trimble Company, Inc. Facilitating Tribal Access to Investment Financing. Omaha, Neb.: 1993. Council of Energy Resource Tribes. Tribal Energy, Economic and Environmental Policy Issues. Denver, Colo.: Mar. 1993. Housing Assistance Council. Case Studies on Lending in Indian Country. Washington, D.C.: June 1996. Housing Assistance Council. Demonstration of Building Indian Housing in Underserved Areas: An Evaluation and Recommendations. Washington, D.C.: Dec. 1993. Housing Assistance Council. Lending on Native American Lands. Washington, D.C.: Sept. 1996. Jarboe, Mark A. Doing Business in Indian Country. Minneapolis, Minn.: No date. Levitan, Sar A., and Elizabeth I. Miller. The Equivocal Prospects for Indian Reservations. Washington, D.C.: Center for Social Policy Studies, The George Washington University, May 1993. National American Indian Housing Council. Expanding Home Ownership Opportunities in Native American Communities: The Role of Private Sector Housing Finance (draft). Washington, D.C.: July 1997. National American Indian Housing Council. Profile of HUD Section 184 Program Loan Applicants (draft). Washington, D.C.: July 1997. National Commission on American Indian, Alaska Native, and Native Hawaiian Housing. Building the Future: A Blueprint for Change, “By Our Homes You Will Know Us” (final report). Washington, D.C.: 1992. National Commission on American Indian, Alaska Native, and Native Hawaiian Housing. Supplemental Report and Native American Housing Improvements Legislative Initiative. Washington, D.C.: 1993. National Indian Justice Center and the Office of Native American Programs of the U.S. Department of Housing and Urban Development. Our Home: Providing the Legal Infrastructure Necessary for Private Financing. Washington, D.C.: No date. National Indian Policy Center. Developing Financial Structure in Indian Country. Washington, D.C.: The George Washington University, May 1994. National Indian Policy Center. Establishing a Primary Mortgage Market in Indian Country. Washington, D.C.: The George Washington University, May 1994. Parker, Alan R., and Marguerite Gee. Survey of Indian Economic Development Issues. Report to the Assistant Secretary of the Interior for Indian Affairs. Washington, D.C.: Feb. 1983. Presidential Commission on Indian Reservation Economies. Report and Recommendations to the President of the United States. Washington, D.C.: Nov. 1984. Stringer, William L., and C. Eric Olsen. Tribal Debt Financing in the United States. Washington, D.C.: Mar. 1992. The National Tribal Development Association and Eagle Associates. Report on the Unmet Physical Infrastructure Needs on Indian Reservations Nationwide. Box Elder, Mont.: Dec. 1995. Tribal Leaders Economic Summit. Building Reservation Economies and Sustainable Homelands.
A Report to the Clinton-Gore Administration. Washington, D.C.: Jan. 1993. U.S. General Accounting Office. Native American Housing: Information on HUD’s Housing Programs for Native Americans. GAO/RCED-97-64, Mar. 28, 1997. Urban Institute. Assessment of American Indian Housing Needs and Programs: Final Report. Washington, D.C.: May 1996. | Pursuant to a congressional request, GAO reviewed the homeownership opportunities for Native Americans on trust lands through private, conventional lending, focusing on: (1) the number of conventional home purchase loans private lenders made to Native Americans on trust lands; (2) the major barriers to conventional home purchase lending to Native Americans on trust lands; (3) efforts to facilitate conventional home purchase lending to Native Americans on trust lands; (4) whether the implementation of the Native American Housing Assistance and Self-Determination Act of 1996 will result in more conventional home purchase loans being made to Native Americans on trust lands; and (5) whether the backlog at the Bureau of Indian Affairs of requests for certifying documents affecting the legal status of the trust lands has been a deterrent to conventional home purchase lending to Native Americans.
GAO noted that: (1) few Native Americans have purchased homes on trust lands by using private, conventional financing; (2) during the 5-year period of calendar year 1992 through 1996, lenders made only 91 conventional home purchase loans to Native Americans on trust lands; (3) moreover, of the 91 such loans GAO identified, 80 were made to the members of two tribes--the Tulalips of Marysville, Washington, and the Oneida Tribe of Indians of Wisconsin; (4) making conventional home purchase loans on Native American trust lands involves overcoming long-standing barriers; (5) the most significant barriers are that lenders: (a) are uncertain about whether they can foreclose on Native American trust lands to recover their loan funds; (b) have difficulty understanding the implications of the different types of land ownership because of the complex status of Native American trust lands; (c) are unfamiliar with the tribal courts in which litigation is conducted in the event of a foreclosure; and (d) are concerned about the absence of housing ordinances governing foreclosure in tribal communities; (6) some mortgage lenders, as well as public and private organizations, have initiated efforts to increase Native Americans' opportunities to finance homes on trust lands with conventional home purchase loans; (7) to make the 91 loans GAO identified, lenders created special programs emphasizing the development of housing ordinances and homeownership counseling services or used long-standing relationships with tribes and tribe members; (8) other broader public and private efforts begun recently, such as the Federal National Mortgage Association's lending initiatives for Native Americans, may have some potential for increasing the number of conventional home purchase loans on trust lands; (9) other efforts by the Federal Home Loan Bank System and the Office of the Comptroller of the Currency may have some potential for improving Native Americans' overall access to financing and capital, which may, among other things, encourage more conventional home purchase loans on trust lands; (10) the extent to which the Native American Housing Assistance and Self-Determination Act of 1996 will increase conventional home purchase lending for Native Americans on trust lands is uncertain; (11) this act, which became effective on October 1, 1997, contains provisions that allow tribes to leverage housing block grant funds and extend land lease terms from 25 to 50 years; and (12) however, whether tribes can or will use leveraged funds to encourage conventional home purchase lending is uncertain, and many tribes' land lease terms have already exceeded 25 years.
The origination, securitization, and servicing of mortgage loans involve multiple entities. In recent years, originating lenders generally have sold or assigned their interest in loans to other financial institutions to securitize the mortgages. Through securitization, the purchasers of these mortgages then package them into pools and issue securities for which the mortgages serve as collateral. These mortgage-backed securities (MBS) pay interest and principal to their investors, such as other financial institutions, pension funds, or mutual funds. After an originator sells its loans, another entity is usually appointed as the servicer. Servicing duties can involve sending borrowers monthly account statements, answering customer service inquiries, collecting mortgage payments, maintaining escrow accounts for taxes and insurance, and forwarding payments to the mortgage owners. If a borrower becomes delinquent on loan payments, servicers also initiate and conduct a foreclosure in order to obtain the proceeds from the sale of the property on behalf of the owner of the loan. Any legal action such as foreclosure that a servicer takes generally may be brought in the name and on behalf of the securitization trust, which is the legal owner of record of the mortgage loans. Several federal agencies share responsibility for regulating activities of the banking industry that relate to the originating and servicing of mortgage loans (see table 1). Upon assumption of its full authorities on July 21, 2011, CFPB also will have authority to regulate mortgage servicers with respect to federal consumer financial law. Other agencies also oversee certain aspects of U.S. mortgage markets but do not have supervisory authority over mortgage servicers. Because state laws primarily govern foreclosure, federal laws related to mortgage lending focus on protecting consumers at mortgage origination and during the life of a loan but not necessarily during foreclosure. Federal consumer protection laws, such as the Truth in Lending Act (TILA) and the Real Estate Settlement Procedures Act of 1974 (RESPA), address some aspects of servicers’ interactions with borrowers. For example, these laws require servicers to provide certain notifications and disclosures to borrowers or respond to certain written requests for information within specified times, but they do not include specific requirements for servicers to follow when executing a foreclosure. According to Federal Reserve officials, in addition to federal bankruptcy laws, federal laws that address foreclosure processing specifically are the Protecting Tenants at Foreclosure Act of 2009, which protects certain tenants from immediate eviction by new owners who acquire residential property through foreclosure, and the Servicemembers Civil Relief Act, which restricts foreclosure of properties owned by active duty members of the military. Banking regulators oversee most entities that conduct mortgage servicing, but their oversight of foreclosure activities generally has been limited. As part of their mission to ensure the safety and soundness of these institutions, the regulators have the authority to review any aspect of their activities, including mortgage servicing and compliance with applicable state laws. However, the extent to which regulators have reviewed the foreclosure activities of banks or banking subsidiaries that perform mortgage servicing has been limited because these practices generally were not considered as posing a high risk to safety and soundness. 
According to OCC and Federal Reserve staff, they conduct risk-based examinations that focus on areas of greatest risk to their institutions’ financial positions, as well as some other areas of potential concern, such as consumer complaints. Servicers generally manage loans that other entities own or hold, and are not exposed to significant losses if these loans become delinquent. Because regulators generally determined that the safety and soundness risks from mortgage servicing were low, they have not regularly examined servicers’ foreclosure practices on a loan-level basis. Oversight also has been fragmented, and not all servicers have been overseen by federal banking regulators. At the federal level, multiple agencies—including OCC, the Federal Reserve, OTS, and FDIC—have regulatory responsibility for most of the institutions that conduct mortgage servicing, but until recently, some nonbank institutions have not had a primary federal or state regulator. Many federally regulated bank holding companies that have insured depository subsidiaries, such as national or state-chartered banks, may have nonbank subsidiaries such as mortgage finance companies. Under the Bank Holding Company Act of 1956, as amended, the Federal Reserve has jurisdiction over such bank holding companies and their nonbank subsidiaries that are not regulated by another functional regulator. Until recently, the Federal Reserve generally had not included the nonbank subsidiaries in its examination activity because their activities were not considered to pose material risks to the bank holding companies. In some cases, nonbank entities that service mortgage loans are not affiliated with financial institutions at all, and therefore were not subject to oversight by one of the federal banking regulators. In our 2009 report on how the U.S. financial regulatory system had not kept pace with the major developments in recent decades, we noted that the varying levels or lack of oversight for nonbank institutions that originated mortgages created problems for consumers or posed risks to regulated institutions. In response to disclosed problems with foreclosure documentation, banking regulators conducted coordinated on-site reviews of foreclosure processes at 14 mortgage servicers. Generally, these examinations revealed severe deficiencies in the preparation of foreclosure documentation and in the oversight of internal foreclosure processes and the activities of external third-party vendors. Examiners generally found in the files they reviewed that borrowers were seriously delinquent on the payments on their loans and that the servicers had the documents necessary to demonstrate their authority to foreclose. However, examiners or internal servicer reviews of foreclosure loan files identified a limited number of cases in which foreclosures should not have proceeded even though the borrower was seriously delinquent. These cases include foreclosure proceedings against a borrower who had received a loan modification or against military service members on active duty, in violation of the Servicemembers Civil Relief Act. As a result of these reviews, the regulators issued enforcement actions requiring servicers to improve foreclosure practices. Regulators plan to assess compliance but have not fully developed plans for the extent of future oversight. According to the regulators’ report on their coordinated review, the regulators will help ensure that servicers take corrective actions and fully implement the enforcement orders.
While regulatory staff recognized that additional oversight of foreclosure activities would likely be necessary in the future, as of April 2011 they had not determined what changes would be made to guidance or to the extent and frequency of examinations. Moreover, regulators with whom we spoke expressed uncertainty about how their organizations would interact and share responsibility with the newly created CFPB regarding oversight of mortgage servicing activities. According to regulatory staff and the staff setting up CFPB, the agencies intend to coordinate oversight of mortgage servicing activities as CFPB assumes its authorities in the coming months. CFPB staff added that supervision of mortgage servicing will be a priority for the new agency. However, as of April 2011 CFPB’s oversight plans had not been finalized. As we stated in our report, fragmentation among the various entities responsible for overseeing mortgage servicers heightens the importance of coordinating plans for future oversight. Until such plans are developed, the potential for continued fragmentation and gaps in oversight remains. In our report, we recommend that the regulators and CFPB develop and coordinate plans for ongoing oversight and establish clear goals, roles, and timelines for overseeing mortgage servicers under their respective jurisdiction. In written comments on the report, the agencies generally agreed with our recommendation and said that they would continue to oversee servicers’ foreclosure processes. In addition, CFPB noted that it has already been engaged in discussions with various federal agencies to coordinate oversight responsibilities. As part of addressing the problems associated with mortgage servicing, including those relating to customer service, loan modifications, and other issues, various market participants and federal agencies have begun calling for the creation of national servicing standards, but the extent to which any final standards would address foreclosure documentation and processing is unclear. A December 2010 letter from a group of academics, industry association representatives, and others to the financial regulators noted that such standards are needed to ensure appropriate servicing for all loans, including those in MBS issuances and those held in portfolios of the originating institution or by other owners. This letter outlined various areas that such standards could address, including requirements that servicers attest that their foreclosure processes comply with applicable laws and that they pursue loan modifications whenever economically feasible. Similarly, some regulators have stated their support of national servicing standards. For example, OCC has developed draft standards, and in his February 2011 testimony, the Acting Comptroller of the Currency expressed support for such standards, noting that they should provide the same safeguards for all consumers and should apply uniformly to all servicers. He further stated that standards should require servicers to have strong foreclosure governance processes that ensure compliance with all legal standards and documentation requirements and establish effective oversight of third-party vendors. A member of the Board of Governors of the Federal Reserve System testified that consideration of national standards for mortgage servicers was warranted, and FDIC’s Chairman urged servicers and federal and state regulators in a recent speech to create national servicing standards.
Most of the regulators with whom we spoke indicated that national servicing standards could be beneficial. For example, staff from one of the regulators said that the standards would create clear expectations for all servicers, including nonbank entities not overseen by the banking regulators, and would help establish consistency across the servicing industry. The regulators’ report on the coordinated review also states that such standards would help promote accountability and ways of appropriately dealing with consumers and strengthen the housing finance market. Although various agencies have begun discussing the development of national servicing standards, the content of such standards and how they would be implemented is yet to be determined. According to CFPB staff, whatever the outcome of the interagency negotiations, CFPB will have substantial rulemaking authority over servicing and under the Dodd-Frank Act is required to issue certain rules on servicing by January 2013. We reported that problems involving financial institutions and consumers could increase when activities are not subject to consistent oversight and regulatory expectations. Including specific expectations regarding foreclosure practices in any standards that are developed could help ensure more uniform practices and oversight in this area. To help ensure strong and robust oversight of all mortgage servicers, we recommended that the banking regulators and CFPB include standards for foreclosure practices if national servicing standards are created. In written comments on our report, the agencies generally agreed with this recommendation, and most provided additional details about the ongoing interagency efforts to develop servicing standards. For example, OCC noted that ongoing efforts to develop national servicing standards are intended to include provisions covering both foreclosure abeyance and foreclosure governance. OCC added that the standards, although still a work in progress, will emphasize communication with the borrower and compliance with legal requirements, documentation, vendor management, and other controls. The Federal Reserve commented that the intent of the interagency effort was to address the problems found in the servicing industry, including in foreclosure processing, and coordinate the efforts of the multiple regulatory agencies to ensure that consumers will be treated properly and consistently. FDIC noted that the agency successfully proposed the inclusion of loan servicing standards in the proposed rules to implement the securitization risk retention requirements of the Dodd-Frank Act. FDIC also noted that any servicing standards should align incentives between servicers and investors and ensure that appropriate loss mitigation activities are considered when borrowers experience financial difficulties. CFPB said it has effective authority to adopt national mortgage servicing rules for all mortgage servicers, including those for which CFPB does not have supervisory authority. Finally, Treasury said it has been closely engaged with the interagency group reviewing errors in mortgage servicing and that it supports national servicing standards that align incentives and provide clarity and consistency to borrowers and investors for their treatment by servicers.
Despite these initial delays, some regulatory officials, legal academics, and industry officials we interviewed indicated that foreclosure documentation issues were correctable. Once servicers have revised their processes and corrected documentation errors, most delayed foreclosures in states that require court action likely will proceed. The implications for borrowers could be mixed, but delays in the foreclosure process could exacerbate the impacts of vacant properties and affect recovery of housing prices. Borrowers whose mortgage loans are in default may benefit from the delays if the additional time allows them to obtain income to bring mortgage payments current, cure the default, or work out loan modifications. However, according to legal services attorneys we interviewed, these delays leave borrowers unsure about how long they could remain in their homes. And borrowers still might be subject to new foreclosure proceedings if banks assembled the necessary paperwork and resubmitted the cases. Communities could experience negative impacts from delayed foreclosures as more properties might become vacant. We reported that neighborhood and community problems stemming from vacancies include heightened crime, blight, and declining property values, and increased costs to local governments for policing and securing properties. Delays in the foreclosure process, although temporary, could exacerbate these problems. Various market observers and regulators indicated that the delays could negatively affect the recovery of U.S. housing prices in the long term. According to one rating agency’s analysis, market recovery could be delayed as servicers work through the backlog of homes in foreclosure. Regulators also reported that delays could be an impediment for communities working to stabilize local neighborhoods and housing markets, and could lead to extended periods of depressed home prices. Impacts on servicers, trusts, and investors because of loan transfer documentation problems were unclear. Some academics and others have argued that the way that mortgage loans were transferred in connection with some MBS issuances could affect servicers’ ability to complete foreclosures and create financial liabilities for other entities, such as those involved in creating securities. According to these academics, a servicer may not be able to prove its right to foreclose on a property if the trust on whose behalf it is servicing the loan is not specifically named in the loan transfer documentation. In addition, we note in our report that stakeholders we interviewed said that investors in the MBS issuance may press legal claims against the creators of the trusts or force reimbursements or repurchases. Conversely, other market participants argue that mortgages were pooled into securities using standard industry practices that were sufficient to create legal ownership on behalf of MBS trusts. According to these participants, the practices that were typically used to transfer loans into private label MBS trusts comply with the Uniform Commercial Code, which generally has been adopted in every state. As a result, they argue that the transfers were legally sufficient to establish the trusts’ ownership. Although some courts may have addressed transfer practices in certain contexts, the impact of the problems likely will remain uncertain until courts issue definitive, controlling decisions.
In the near term, industry observers and regulators noted that these cases and other weaknesses in foreclosure processes could lead to increased litigation and servicing costs for servicers, more foreclosure delays, and investor claims. Although tasked with overseeing the financial safety and soundness of institutions under their jurisdiction, the banking regulators have not fully assessed the extent to which MBS loan transfer problems could affect their institutions financially. According to staff at one of the regulators, as part of the coordinated review, examiners did not always verify that loan files included accurate documentation of all previous note and mortgage transfers—leaving open the possibility that transfer problems exist in the files they reviewed. The enforcement orders resulting from the coordinated review require servicers to retain an independent firm to assess these risks. Regulators will more frequently monitor these servicers until they have corrected the identified weaknesses; however, the regulators have not definitively determined how transfer problems might financially affect other institutions they regulate, including any of the institutions involved in the creation of private label MBS. With almost $1.3 trillion in private label securities outstanding as of the end of 2010, the institutions and the overall financial system could face significant risks. To reduce the likelihood that problems with transfer documentation could pose a risk to the financial system, we recommended that the banking regulators assess the risks of potential litigation or repurchases due to improper mortgage loan transfer documentation on institutions under their jurisdiction and require that the institutions act to mitigate the risks, if warranted. Completing the risk assessments and fully ensuring that regulated institutions proactively address the risks could reduce the potential threat to the soundness of these institutions, the deposit insurance fund, and the overall financial system. In written comments on a draft of our report, the regulators generally agreed with or did not comment on this recommendation. For example, FDIC strongly supported this recommendation and noted its particular interest in protecting the deposit insurance fund. In addition, the Federal Reserve said that it has conducted a detailed evaluation of the risk of potential litigation or repurchases to the financial institutions it supervises and will continue to monitor these issues. Chairman Menendez, Ranking Member DeMint, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. If you or your staff have any questions about matters discussed in this testimony, please contact A. Nicole Clowers at (202) 512-4010 or [email protected]. Other key contributors to this testimony include Cody Goebel (Assistant Director), Beth Garcia, Jill Naamane, and Linda Rego. | This testimony discusses our work on mortgage servicing issues.
With record numbers of borrowers in default and delinquent on their loans, mortgage servicers--entities that manage home mortgage loans--are initiating large numbers of foreclosures throughout the country. As of December 2010, an estimated 4.6 percent of the approximately 50 million first-lien mortgages outstanding were in foreclosure--an increase of more than 370 percent since the first quarter of 2006, when 1 percent were in foreclosure. Beginning in September 2010, several servicers announced that they were halting or reviewing their foreclosure proceedings throughout the country after allegations that the documents accompanying judicial foreclosures may have been inappropriately signed or notarized. The servicers subsequently resumed some foreclosure actions after reviewing their processes and procedures. However, following these allegations, some homeowners challenged the validity of foreclosure proceedings against them. Questions about whether documents for loans that were sold and packaged into mortgage-backed securities were properly handled prompted additional challenges. This statement focuses on (1) the extent to which federal laws address mortgage servicers' foreclosure procedures and federal agencies' authority to oversee servicers' activities and the extent of past oversight; (2) federal agencies' current oversight activities and future oversight plans; and (3) the potential impact of foreclosure documentation issues on homeowners, servicers, regulators, and investors in mortgage-backed securities. It is based on our May 2, 2011, report on foreclosure documentation problems, which Congress requested. In summary, until the problems with foreclosure documentation came to light, federal regulatory oversight of mortgage servicers had been limited because regulators regarded servicers' activities as low risk for banking safety and soundness. However, regulators' recent examinations revealed that servicers generally failed to prepare required documentation properly and lacked effective supervision and controls over foreclosure processes. Moreover, the resulting delays in completing foreclosures and increased exposure to litigation highlight how the failure to oversee whether institutions follow sound practices can heighten the risks these entities present to the financial system and create problems for the communities in which foreclosures occur. As a result, we recommended in our report that the financial regulators take various actions, including (1) developing and coordinating plans for ongoing oversight of servicers, (2) including foreclosure practices as part of any national servicing standards that are created, and (3) assessing the risks of improper documentation for mortgage loan transfers. The regulators generally agreed with or did not comment on our recommendations, and some are taking actions to address them. |
The Social Security Act was enacted in 1935 during the Great Depression as a social insurance program to provide an income foundation upon which individuals could build for their retirement years. In 1956, the DI program was added to Social Security to provide income to disabled workers. Over the years, the three main components of retirement income—Social Security, pensions, and savings—have dramatically improved the income of the elderly, thereby substantially reducing their poverty rates. According to SSA data, Social Security benefits constitute approximately 80 percent of total income for elderly households (households in which the head of household is aged 65 or older) in the lowest two-fifths of the income distribution, compared with only 21 percent of total income for households in the highest fifth. The Social Security Act established 65 as the minimum age at which retirement benefits can be obtained. Sixty-five was selected as a compromise between age 60, which appeared too low from a cost standpoint, and age 70, which appeared too high given that life expectancy at the time was 59 years for men and 63 years for women. Since 1956, women have had the option to take reduced benefits at age 62, and since 1961, this option has also been available to men. As a result, 62 has been defined as the ERA and 65 is considered the NRA. The long-term financing problem that Social Security faces is largely a result of lower birth rates and increasing longevity. One way to at least partially compensate for these changes is to raise the retirement ages. The Congress has already approved one change in the retirement age, in 1983, when it enacted legislation that phased in an increase in the NRA to 67 over a 22-year period beginning in the year 2000. Currently, there are proposals before the Congress to raise the retirement ages further by increasing the ERA from 62 to 65, along with several proposals to further increase the NRA from 67 to 70. Longer life expectancy and the improved health of the nation’s elderly are the primary justifications for these recommended increases. Raising the retirement ages effectively reduces benefits and thereby would improve Social Security’s solvency. The extent of the improvement depends on how much and how soon the retirement ages are raised. Because individuals retiring before the NRA receive lower benefits and those retiring after the NRA receive a premium, raising the NRA reduces the initial benefits for all retirees. For example, if the NRA was increased to 70, people who retire between ages 65 and 69 would have their benefits reduced for early retirement. And those who retire at age 70 would then receive the basic benefit amount now received at 65 instead of receiving the premium for delayed retirement. SSA’s actuaries estimate that increasing the NRA from 65 to 69 over the years 2000 through 2015, and raising the ERA at the same rate, would close over one-half of the long-term trust fund shortfall and thereby extend the period of projected solvency by 13 years. If the NRA and ERA were further increased at the rate of 1 month every 2 years starting in 2016, then depletion of the fund would not occur for an additional 5 years (because 19 percent more of the shortfall would be made up). The combined effect of these retirement age increases would eliminate 72 percent of the difference between the Social Security trust fund’s revenues and outlays over the next 75 years. 
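The way a higher NRA reduces initial benefits can be made concrete with a short calculation. The sketch below, in Python, is a minimal illustration that assumes the current-law adjustment factors (a reduction of 5/9 of 1 percent per month for the first 36 months of claiming before the NRA, 5/12 of 1 percent per month beyond that, and a delayed retirement credit of 2/3 of 1 percent per month); whether those exact factors would carry over to a higher NRA is our assumption, not a feature of any particular proposal.

def benefit_fraction(claim_age, nra):
    # Fraction of the full benefit payable when claiming at claim_age,
    # given a normal retirement age of nra (both ages in whole years).
    months = (claim_age - nra) * 12
    if months < 0:
        early = -months
        # Reduction of 5/9 of 1% per month for the first 36 months of
        # early claiming and 5/12 of 1% per month for each month beyond.
        reduction = min(early, 36) * (5 / 9) + max(early - 36, 0) * (5 / 12)
        return 1 - reduction / 100
    # Delayed retirement credit of 2/3 of 1% per month past the NRA.
    return 1 + months * (2 / 3) / 100

for nra in (65, 67, 70):
    print(f"NRA {nra}: {benefit_fraction(62, nra):.0%} of the full benefit when claiming at 62")

Under these assumptions, claiming at 62 yields 80 percent of the full benefit when the NRA is 65 but only 55 percent when the NRA is 70, which is the sense in which raising the NRA reduces initial benefits for all retirees.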
Raising the retirement ages also could lead to an increase in economic activity if people worked longer. By remaining in the work force, older workers would be increasing the number of their productive years. In effect, there would be an increase in the economy’s resource base—in this case, society’s stock of human resources—and these increased resources would allow the economy to produce more goods and services. However, the increase in economic activity assumes that, by remaining in the labor force for more years, older workers would not be displacing younger workers. Raising the Social Security retirement ages would provide many individuals an incentive to work longer, but whether they do depends on how the labor market responds. Having people work longer would help solve the problem of the declining ratio of workers to retirees. Working longer could also give workers more time to save and to accrue pension benefits. Still, it is unclear whether workers will want to work longer and whether employers will want to retain or hire them. For many years, Americans have been choosing to receive Social Security benefits earlier, although the decline in the average age at which people elect to receive benefits has leveled off since the 1980s. In 1940, the average age for drawing Social Security benefits was 68.8, but by 1985 it had fallen to 63.7, where it remains today. Less than one-sixth of men aged 65 and over are in the labor force today, compared with nearly half in 1950. In addition, life expectancies have increased by nearly 12 years for men and 14 years for women since 1940. The combination of decreasing retirement ages and increasing life expectancies means that people are spending an increasing proportion of their lives in retirement. Data from the Survey of Income and Program Participation (SIPP) show that approximately 22 to 31 percent of men aged 62 to 67 report that they have a disability that limits their ability to work. These data suggest that although a substantial portion of the population may have difficulty continuing to work to later ages, the majority of people have the capability to work beyond the current ERA and NRA. Social Security policy is a factor that affects individuals’ choice of when to retire. Social Security currently gives incentives for individuals to reduce their working hours once they reach age 62 or 65. Individuals make their decisions to work based primarily on the trade-off of earnings versus leisure time. The availability of Social Security benefits allows workers to substitute their earnings with nonlabor income and to take more leisure time. The majority of workers (53 percent) take Social Security benefits at age 62, the first year they are eligible. Also, individuals tend to retire more often at ages 62 and 65 than at any other ages, suggesting that the ERA and NRA influence the decision on when to retire. Other factors, including household wealth and the employee’s health status, also influence the timing of retirement. Research suggests that the decision to retire is based primarily on financial considerations. One recent study, by Burkhauser and others, examined the effects of raising the ERA and concluded that such an increase would have only a limited impact on individuals in poor health because the majority of people who retire at the ERA do so because they are financially able to retire. 
This study suggests that raising the ERA would, on average, deny Social Security benefits to people who could work longer and not take benefits away from unhealthy individuals who retire early because they can no longer work. This research concludes that raising the ERA and the NRA should lead to individuals working longer, but those who cannot work longer may see their household income decline. In households with two or more income earners, the healthy member(s) of the household may be able to work longer to offset some or all of the lost Social Security benefits. However, households without this option could experience large declines in their income if the retirement ages are raised. For some households, this decline in income could be sufficient to push the household below the poverty level. Research has also found a negative correlation between the cost of employee benefits, such as health insurance, and the hiring of older workers. The researchers who found this negative correlation speculated that it is the result of the Age Discrimination in Employment Act (ADEA), which mandates that firms must offer workers with similar experience the same level of benefits. Since younger employees are less costly to insure, firms will prefer them. The potential tenure with an employer is another obstacle to hiring older workers because of recruitment and training costs. Recruitment involves job advertising costs and interview time. Newly hired employees may also require significant training to perform their new job. If these costs are substantial, they can serve as barriers to hiring older workers. Firms would be more likely to invest in younger workers because they have the potential to remain with the firm for a longer period, which reduces the average costs of recruitment and training. A final obstacle that older workers face is a negative perception among employers about their productivity. Surveys find that most managers believe the negative aspects of older workers outweigh the positive aspects. The productivity traits of older workers that managers tend to find favorable are experience, judgment, commitment to quality, low turnover, and good attendance and punctuality. The negative perceptions that managers have about older workers’ productivity are a tendency toward inflexibility, an inability to effectively use new technology, difficulty in learning new skills, and concerns about physical ability. Such a situation, rather than a desire to retire, could discourage an older worker from remaining in the labor force. Blue-collar workers will likely experience more difficulties in extending their careers than will white-collar workers. Because of the nature of their jobs, many older blue-collar workers—who compose 40 percent of the labor force between the ages of 53 and 63—experience health problems that may inhibit their ability to work and reduce the demand for their labor. We analyzed the Health and Retirement Study (HRS), a nationally representative sample composed of individuals born between 1931 and 1941, to compare the health status of blue- and white-collar workers. Our analysis found that older blue-collar workers are at greater risk for having several health problems compared with older white-collar workers (see table 1). We assessed the effects of occupation on specific health problems, controlling for employment status, age, race, sex, alcohol consumption, and smoking. Blue-collar workers are more likely to have musculoskeletal problems, respiratory diseases, diabetes, and emotional disorders than are white-collar workers. 
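The adjusted comparisons just described are, in effect, odds-ratio estimates from a model such as a logistic regression. The sketch below shows one way such an estimate could be produced; it is illustrative only. The file name and column names are hypothetical stand-ins for HRS variables, we do not know the exact specification used in the analysis, and an odds ratio only approximates "percent more likely" when a condition is relatively uncommon.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of HRS respondents born 1931-1941; the file and
# column names are placeholders, not the survey's actual variable names.
hrs = pd.read_csv("hrs_extract.csv")

# Logistic regression of one condition on occupation, controlling for the
# covariates named in the text: employment status, age, race, sex,
# alcohol consumption, and smoking.
model = smf.logit(
    "arthritis ~ blue_collar + employed + age + C(race) + C(sex)"
    " + drinks_per_week + smokes",
    data=hrs,
).fit()

# exp(beta) on the blue_collar indicator is the adjusted odds ratio; a
# value near 1.58 would correspond to the arthritis finding reported below.
print(f"Adjusted odds ratio for blue-collar work: {np.exp(model.params['blue_collar']):.2f}")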
Specifically, blue-collar workers are 58 percent more likely to have arthritis, 42 percent more likely to have chronic lung disease, and 25 percent more likely to have emotional disorders. White-collar workers were not at greater risk for having any of the health problems we examined. White-collar workers did have higher rates of cancer; however, the difference was not statistically significant. When all blue-collar occupations are grouped together, blue-collar workers are 80 percent more likely than white-collar workers to experience pain that affects their ability to perform their jobs (see table 2). The blue-collar occupations with risk factors for pain affecting performance are personal services; farming, fishing, and forestry; mechanics and repair; construction; mining; precision production; machine operator; transportation operator; and material handler. These occupations comprise one-third of workers aged 53 to 63. Older blue-collar workers with health problems have lower earnings and are in less demand for their labor. Blue-collar work is often physically demanding, and current or potential employers may foresee a risk of a worker’s compensation claim or increased health care costs from older employees. This reduced labor demand means these workers may accumulate less wealth, which makes it difficult for them to afford to retire even if they are not physically capable of working more years. For example, 18 percent of blue-collar workers with two or more health problems are retired, while only 14 percent of those with no problems are retired (see table 3). Table 3 shows that older blue-collar workers with health problems had higher unemployment rates than healthy blue-collar workers. Our analysis also showed that blue-collar workers had higher unemployment rates than white-collar workers with similar health status. Corresponding to these higher unemployment rates, the blue-collar workers with health problems had lower earnings. The older blue-collar workers who had arthritis, a foot or leg problem, chronic lung disease, asthma, diabetes, or an emotional problem—all conditions that blue-collar workers are at greater risk for having compared with white-collar workers—have 38 percent, 33 percent, 27 percent, 36 percent, 25 percent, and 78 percent lower median earnings, respectively, than blue-collar workers without these conditions. As noted earlier, these reduced earnings make it difficult for unhealthy, older blue-collar workers to afford to retire. One incentive for individuals to apply to the DI program rather than take early retirement is that DI benefits are not reduced for early retirement and thus are higher than the retirement benefits awarded at age 62. Some of the individuals with low income and assets who are awarded DI may also qualify for SSI disability benefits. Another incentive for individuals to apply to the DI program is that participants are eligible for medical coverage under Medicare 2 years after DI benefits begin. Thus, individuals awarded DI benefits before age 63 get extra Medicare coverage that they would otherwise not be eligible for until age 65. Therefore, if Medicare eligibility were raised along with the ERA and NRA, individuals would have an incentive to try to attain DI benefits. An additional medical coverage issue is that individuals who are dually eligible for DI and SSI benefits are also generally eligible to receive Medicaid, which will increase costs to this program. Raising retirement ages would change some of the disincentives that currently keep people from applying for DI benefits at age 62. 
Data from SSA show that the current structure of Social Security reduces new DI claims among individuals aged 62 to 64. Figure 1 shows a steady increase in the rate of new disability awards from ages 53 to 61. The rate of new awards then drops substantially at age 62 and falls further through age 64. DI participation is likely discouraged at ages 62 to 64 because of the application process and restrictions on earnings. There is a 5-month waiting period after the onset of the disability before benefits can begin, and the application process is lengthy and complex. In comparison, the application process for Social Security retirement benefits is more straightforward, provided that the applicant meets the coverage and age requirements. In addition, DI benefits are generally subject to a greater reduction than Social Security retirement benefits if beneficiaries have any earnings. Also, DI benefits are offset by worker’s compensation benefits, while Social Security retirement benefits are not. If the ERA were raised to 65 and the NRA to 70, then the incentives that apply to Social Security retirement benefits would be applicable at age 65 rather than age 62. Under this scenario, individuals aged 62 to 64 would have a greater incentive to apply for disability benefits, and they would be expected to do so at rates comparable to individuals at younger ages (55 to 61) under the present system. Figure 1 contains a trend line to indicate the expected rate of change if the increase in new DI participation continues beyond age 62. The trend in new DI participation among individuals aged 55 to 61 under the present system suggests that DI participation among individuals aged 62 to 64 would increase approximately 2.5 percent if the ERA were raised to age 65. As noted earlier, some of these new DI participants would be dually eligible for SSI and Medicaid benefits, which would impose additional costs. Addressing Social Security’s solvency problem is one of the most important issues currently facing the administration and the Congress. Numerous proposals are before the Congress to restore the balance between promised benefits and available funds. Increases in the ERA and NRA could make up a substantial amount of Social Security’s long-term financing shortfall, depending on the size of the increases. Increases in retirement ages may also have positive economic effects by inducing individuals to extend their careers, which could increase economic output. Since life expectancies and the health of the elderly are improving, many people have the capability to work longer, and increasing retirement ages would encourage this. While raising the retirement ages will extend the life of the Social Security trust fund and could lead to higher levels of economic output, the potential negative consequences should be recognized. For example, older workers who are laid off or need to reenter the workforce after retiring may have difficulty finding a job. Blue-collar workers may experience these problems to a greater degree, because the nature of their work leads to several health problems that inhibit their ability to continue working to later ages, compared with those in white-collar jobs. These health problems reduce their employability and hence their ability to accumulate enough wealth to afford to retire if they are not physically capable of working longer. Finally, in considering retirement age increases, the effect of this action on other government programs needs to be understood. 
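The trend line in figure 1 amounts to fitting a straight line to the award rates at ages 55 to 61 and extending it through age 64, and the size of the projected increase can be gauged the same way. The sketch below is illustrative only; the award rates are invented placeholders, not SSA's actual figures.

import numpy as np

# Ages 55-61, where the present system does not yet distort DI awards.
ages = np.arange(55, 62)
# Hypothetical new-award rates (per 1,000 insured workers) at each age.
rates = np.array([5.9, 6.6, 7.4, 8.3, 9.3, 10.4, 11.6])

slope, intercept = np.polyfit(ages, rates, 1)  # least-squares linear trend
for age in (62, 63, 64):
    print(f"Projected new-award rate at age {age}: {slope * age + intercept:.1f}")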
Participation in disability insurance programs will likely increase, primarily by blue-collar workers, if retirement ages are raised. The magnitude of the increase depends on the extent to which individuals react to the newly created incentives to apply to these programs. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or Members of the Committee may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | Pursuant to a congressional request, GAO discussed raising the retirement age for social security benefits, focusing on: (1) how raising the retirement ages could affect social security's long-term solvency and the U.S. economy; (2) how the labor market for older workers might respond to these changes; and (3) the possible impacts from raising the retirement ages on the Disability Insurance (DI) and Supplemental Security Income (SSI) programs. GAO noted that: (1) raising the retirement ages does appear to improve the social security program's long-term solvency and could increase the nation's economic output; (2) raising the ages at which individuals can draw benefits creates incentives for workers to remain in the labor force, thereby increasing revenues to the trust fund and decreasing the amount of benefits paid; (3) the majority of older workers, aged 62 to 67, do not appear to have health limitations that would prevent them from extending their careers, and thus their labor force participation should increase as the retirement ages are raised; (4) this greater labor force participation should raise the level of economic output as more people work longer; (5) however, the extent to which labor force participation increases depends on whether sufficient jobs are available for older workers; (6) employees may be willing and able to extend their careers, but it is unclear whether employers will be willing to retain or hire them because of negative perceptions about costs and productivity; (7) blue-collar workers may be disproportionately affected by these labor demand and supply factors because they are at greater risk for incurring certain health problems that could limit their ability to remain in the labor force; (8) for example, workers in poor health who otherwise might have kept working until they qualified for social security retirement benefits may opt to apply for DI, which could increase costs to this program; and (9) in addition, SSI could also experience increased participation and higher costs because some individuals will be dually eligible for DI and SSI. |
The Kennedy Center opened in 1971 and is located on 17 acres along the Potomac River in Washington, D.C. The Center houses four major theaters and several smaller theaters, five public halls or galleries, educational facilities, rehearsal spaces, offices, and meeting rooms in about 1.1 million square feet of space. The Kennedy Center also has a recently expanded parking garage. The Center is open 365 days a year, and nearly 5 million people visit it annually to attend performances or tour the facility. In 1972, the National Park Service, within the Department of the Interior, assumed responsibility for services related to the nonperforming arts functions of the Kennedy Center facility, whereas the Kennedy Center Board of Trustees (Board) retained responsibility for all performing arts activities. Under this dual management, the Kennedy Center facility suffered from severe deterioration and a backlog of capital repairs, in part because responsibility for identifying and completing capital repairs and improvements at the Center was unclear. As a result, legislation was enacted in 1990 that directed the National Park Service and the Board to enter into a cooperative agreement setting forth their responsibilities relating to maintenance, repair, and alteration of the Center. However, the parties were ultimately unable to agree on a methodology for the cooperative agreement, which would have been the foundation of a capital improvement plan, and legislation was enacted in 1994 that gave the Board sole responsibility for carrying out capital improvement projects at the Kennedy Center facility. A purpose of the legislation was to provide autonomy for the overall management of the Kennedy Center, which included better control over its capital projects and bringing the Kennedy Center building from a state of deterioration to a condition of excellence. The legislation further required the Board to develop and annually update a comprehensive building needs plan. In response to the legislation, the Kennedy Center developed a Comprehensive Building Plan (Building Plan) in 1995 to detail the existing condition of the Kennedy Center facility and planned renovations. The goals of the renovations were to address accessibility and life safety code deficiencies, such as the installation of sprinklers throughout the Center, replace inefficient building systems, and improve visitor services. The original Building Plan anticipated that the capital projects at the Kennedy Center would be completed in two stages. Projects in the first stage—fiscal years 1995 through 1999—would address critical issues to protect the building from water intrusion, provide critical security and life safety measures, and provide improved accessibility. Projects undertaken in the second stage—fiscal years 2000 through 2009—would eliminate the backlog of deferred capital repair projects. However, the Kennedy Center changed its approach to renovating the Center. Rather than undertaking broad-scale projects that could disrupt the entire Center, the Kennedy Center has taken certain areas or theaters out of service and performed all of the necessary renovations in a particular area at one time. For example, rather than installing a new sprinkler system throughout the entire Center, which would have closed multiple theaters simultaneously, the Center is installing sprinklers in each theater as it is renovated. Thus, only one theater is closed at a time. 
According to Center officials, this approach minimizes the disruptions to ongoing operations in other areas of the Kennedy Center. When the Opera House was renovated, for example, it was closed for almost a year but performances continued in all of the other theaters. The Kennedy Center receives federal funding annually for capital improvement projects based on its Building Plan. In fiscal year 2004, the Kennedy Center received approximately $16 million in federal funds for capital improvement projects. Revenue generated by performances at the Center is used for costs associated with the performances and is not used for capital projects in the Building Plan. In addition to federally funded projects, the Building Plan also discusses other major projects that are being funded with private donations or other nonfederal funding sources, including the recent garage expansion and the proposed plaza project adjacent to the Kennedy Center facility. The plaza project, which will connect the Center to the National Mall, will relocate roadways to improve transportation and pedestrian accessibility to and from the Kennedy Center and surrounding streets. It will include a central fountain that runs from 23rd Street NW to the Kennedy Center, a pedestrian walkway, and a connection to the waterfront. In addition, the plaza will include two proposed buildings with about 200,000 gross square feet each, located on opposite sides of the central fountain (see fig. 1). The Kennedy Center plans for one building to house an exhibition devoted to the history of performing arts in America, include office space for the Kennedy Center staff, and be used as an education center for the performing arts. The other building will be used as rehearsal space for the Washington National Opera and the Kennedy Center and include some additional office space for Kennedy Center staff. These buildings will be constructed with private donations, and upon completion, the Kennedy Center Board will own, operate, and maintain the buildings and green space established on the plaza. In fiscal year 2003, the Center received a pledge of $100 million to be used toward the construction of the planned plaza buildings. According to Kennedy Center officials, they are in the preliminary stages of designing and estimating the cost of the new buildings. Construction of the plaza project is expected to begin in fiscal year 2010, pending federal funding, and the Kennedy Center is expected to begin occupying the buildings in fiscal year 2013. The 1994 legislation that gave the Kennedy Center responsibility for capital projects also authorized the Board to carry out the day-to-day operations and maintenance activities for the Kennedy Center facility. Operations and maintenance funds are used to cover expenses for utilities, security, daily cleaning, and maintenance, among other things. In fiscal years 2003 and 2004, the Kennedy Center received about $16 million in federal appropriations each year for the operations and maintenance of the current facility. Federal appropriations are not used for performance-related expenses. The Kennedy Center’s total operating expenses in fiscal year 2003 were about $118 million. The Kennedy Center generates the majority of its revenue from programs at the Center, contributions, and investments. The Kennedy Center has received approximately $152 million in federal appropriations for capital projects since it took responsibility for these projects in fiscal year 1995. 
Since fiscal year 1995, the Kennedy Center has generally received the federal appropriations it has requested. For example, in fiscal year 2004, the Kennedy Center requested $16 million in federal appropriations for capital projects and received $15.8 million after rescissions to the budget authority were taken into account (see table 1). As shown in table 1, the Kennedy Center requested approximately $9 million annually for capital projects for fiscal year 1995 through fiscal year 1998. The Center generally received what it requested in each of these years, but the actual funding available was reduced in some of these years due to a rescission of budget authority. In fiscal year 1997, additional appropriations were provided to the Kennedy Center to address antiterrorism requirements. In fiscal year 1999, the Kennedy Center requested and received $20 million in federal appropriations for capital projects, an increase from the $9 million it originally anticipated receiving in its initial 1995 Building Plan. This increase in funding was given to the Kennedy Center to address several critical projects over a 3-year period. Center officials requested the increased funding on the basis of studies conducted by the Center’s architecture and engineering consultants, who concluded that an increase in “up front” funding could lead to overall cost savings on the Kennedy Center renovation in the long term. For fiscal years 2002 through 2004, the Kennedy Center gradually reduced its annual funding request from $20 million; however, the total requested funding for these years was about $17 million more than was anticipated in the initial 1995 Building Plan. The Kennedy Center uses its Building Plan to communicate to Congress its planned capital improvement projects and to provide budget estimates for carrying out these projects. The Kennedy Center further describes its planned capital projects and requests federal appropriations for these projects in its annual budget justifications to Congress. Both the Building Plan and budget justifications present budget estimates for broad categories of projects, such as interior repair, accessibility, and egress, but they do not include budget information for specific projects. Unlike the General Services Administration, the Kennedy Center receives a lump sum appropriation for capital projects, and the appropriations are not dedicated to specific projects. The Kennedy Center has the flexibility to change the projects or sequence of projects it plans to fund on the basis of such factors as the need to minimize disruptions to the operations of the Center and budget constraints. The Kennedy Center has requested about $16 million for capital projects in fiscal year 2005. The 2002 Building Plan anticipates the Kennedy Center will receive another $41 million in appropriations through fiscal year 2008 to carry out its planned capital projects, for a total of $209 million. This is consistent with the funding amounts anticipated in the 1997 Building Plan; however, it is $44 million more than was anticipated in the initial 1995 Building Plan to accomplish the same goals. The Kennedy Center has completed or has ongoing 100 of the projects it identified in its initial Building Plan and its updates, and has decided not to implement or has postponed 15 of the identified projects. Seventeen projects are planned for fiscal years 2005 through 2008 and include several large projects with life safety components, such as the installation of sprinklers. 
We believe it is unlikely that the Center will be able to complete the planned projects by the end of fiscal year 2008 or within the appropriation amounts anticipated in the current Building Plan. Several large projects remain to be done because, in part, the Kennedy Center changed the order of projects to minimize disruption to the operations of the Center. The budget estimates for the capital projects planned through fiscal year 2008 are preliminary and will likely increase as the projects are designed. Furthermore, the Comprehensive Building Plan has not been updated annually as required, and it does not provide specific project status and budget information. This limits the usefulness of the plan and inhibits Congress’s ability to know the impact of funding decisions or judge the performance and progress of the Center’s capital projects. Since the first Building Plan was developed in 1995, 132 capital projects have been identified in the plan and its updates to address the deterioration and backlog of capital repairs, and an additional 12 capital projects that were not in the plan have been completed. (See app. II for a list of the specific projects and their status.) The completed projects that had not been identified in the Building Plan were relatively small projects totaling $1.2 million and included such projects as kitchen repairs and engraving restoration. The Building Plan lacks the individual project information necessary to determine whether projects are being completed within the original budget estimates and on schedule. Table 2 shows the status of all of the Kennedy Center’s projects since fiscal year 1995. Seventy-four capital projects have been completed at a total cost of about $98 million. Examples of major projects completed include the replacement of chillers; renovation and installation of sprinklers in the two largest theaters—the Concert Hall and Opera House; and installation of a new fire alarm system throughout the building. Figure 2 shows the Opera House during and after the renovation. The Kennedy Center has also completed many smaller projects ranging from the installation of safety rails on the roof terrace to new directional signs in the Center. Thirty-eight capital projects are currently ongoing at a total estimated cost of $67.2 million. Many of these projects, originally planned to begin in different fiscal years, have been combined into single projects for implementation. For example, 15 projects reported in the Building Plan, originally planned to begin as early as fiscal year 1998, have been combined under the one current site improvements project, beginning in fiscal year 2003. Kennedy Center officials expect this project to be completed by the end of calendar year 2004. The ongoing projects include elevator modernization, the installation of sprinklers in areas outside of the theaters, and a smoke evacuation system in the Grand Foyer and Halls of State and Nations. According to Center officials, some of these projects are in the design phase, and their actual costs could increase. Seventeen projects are planned for future years (through fiscal year 2008), with initial budget estimates totaling over $38 million for 15 of the projects. The Kennedy Center did not provide an estimate for 2 of these projects—design and restoration of the windows on the roof terrace level—because officials expect the scope of the restoration project to change significantly based on early design work for window restoration on other levels. 
Major projects planned for the future include renovations of the Family Theater, Eisenhower Theater, and the Terrace Theater. Kennedy Center officials have cautioned that initial budget estimates are preliminary and are expected to change as the projects are designed. In addition, project estimates are based on the year the project is expected to start; as projects are postponed, costs are expected to increase. Fifteen projects will not be implemented or have been postponed beyond fiscal year 2008. Eleven of these projects, including the relocation of a theater, will not be implemented because Center officials have determined the projects were not financially viable or were no longer needed. The Kennedy Center spent about $600,000 studying two of the projects it decided not to implement. The other 4 projects, related to office renovations and public space improvements, have been postponed because other projects have higher priority. Although the Kennedy Center decided to postpone or not to implement these 15 projects, the Building Plan did not reflect any corresponding change in the amount reported as necessary to implement the plan. Given the number and size of the renovation projects that remain to be done and the likelihood that project estimates may increase, we believe it is unlikely the Kennedy Center will be able to fully implement its Building Plan with the anticipated future appropriations by the end of fiscal year 2008. Each year, the Kennedy Center receives federal funding for capital projects at the Center that is not tied to specific projects. Although the Building Plan includes a proposed construction order for the projects, the Kennedy Center has the flexibility to change the sequence of projects or change specific projects that will be done in any given year. According to Kennedy Center officials, capital projects are prioritized according to a combination of factors, including (1) life safety issues, (2) risk and impact to patrons and staff, (3) needed upgrades to the building systems, (4) theater accessibility, and (5) the need to minimize disruptions to the Center’s operations. In fiscal year 2003, the Kennedy Center generated about 70 percent of its income from performances and programs held at the Center and from contributions. Center officials stated that the Center must continue operations during renovations to the extent possible to continue generating revenue. The need to minimize disruptions to the Center’s operations appeared to be the key consideration when determining the order of capital projects. To minimize disruptions to the Center’s operations and patrons, the Kennedy Center abandoned its original plan to complete critical life safety projects by the end of fiscal year 1999 in favor of renovating the Center one area at a time. For example, the recent renovation of the Opera House included all necessary projects in the theater, such as the removal of asbestos, the installation of a sprinkler system, and the installation of new wall coverings. This approach is less disruptive to the operations of the Kennedy Center; however, many of the life safety projects that the initial Building Plan anticipated would be completed by the late 1990s, although currently ongoing, will not be completed until fiscal year 2006. In addition, three theaters—Family, Eisenhower, and Terrace—still remain to be renovated, including the installation of sprinklers. 
The renovation of the Family Theater is currently being designed, and the Kennedy Center plans to complete this renovation in fiscal year 2005. The Eisenhower Theater renovation is currently in the preliminary design phase. The renovation of the Eisenhower Theater was originally planned for fiscal year 2006, but according to Center officials, the actual renovation has been postponed until fiscal year 2007 or 2008. Finally, the 2002 Building Plan reports that the complete renovation of the Terrace Theater will not be completed until after fiscal year 2008 but indicates that complete sprinkler coverage and accessible railings would be added to the theater by the end of fiscal year 2008. In addition, we believe the funding anticipated in the Building Plan may not be sufficient to complete all of the planned projects. Since fiscal year 1995, the Kennedy Center has received almost $152 million for capital projects, and the Center anticipates another $57 million in appropriations for capital projects, through fiscal year 2008, for a total of $209 million. As noted earlier, this is $44 million more than was anticipated in the initial 1995 Building Plan to accomplish the same goals. As of February 29, 2004, the Center had spent over $98 million since fiscal year 1995 on the capital projects it has completed so far or studied but did not implement. It estimates the remaining projects will cost almost another $106 million to complete, for a total of about $204 million. Although the current project budget estimates fall within the anticipated appropriations, many of these estimates are based on preliminary or no design work and are expected to change as the project design is refined and construction begins. According to the Construction Industry Institute, actual project costs may vary by as much as 30 percent to 50 percent from project estimates developed in the early stages of design. The Building Plan is of limited use in understanding the Kennedy Center’s progress in implementing its plan to renovate the Center because it does not include the status of projects identified in prior plans or provide budget information for individual projects. Instead, the plan includes a proposed sequence of work that lists the projects expected to be implemented each fiscal year through fiscal year 2008. In addition, budget information is provided only at a summary level for seven broad categories and not for individual projects. For example, the Building Plan shows that in fiscal year 2004 the Center planned to spend $7.4 million on life safety and security but does not show the amounts budgeted for individual projects such as the installation of smoke evacuation systems. Our 1998 Executive Guide on Capital Planning highlights the importance of sound capital planning, noting that clear communication and good data are essential to sound decision making. The Building Plan does not clearly explain how the Center prioritizes and restructures capital projects. For example, the Kennedy Center combined several life safety projects identified in the Building Plan into one project that is currently under way and that the Center expects to be completed in fiscal year 2006. The Building Plan had originally identified some of these projects to be started as early as 1996. Although the Building Plan updates state that projects may be combined, they do not clearly communicate the decision to combine these projects or that this decision would delay the Center’s progress in meeting life safety codes. 
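The arithmetic behind our funding concern is straightforward: the $209 million in total anticipated appropriations leaves only about $5 million of slack over the roughly $204 million total estimate, while early-stage estimates can move by 30 to 50 percent. A minimal sketch using the figures cited above:

# Figures cited above, in millions of dollars.
spent = 98          # spent through February 29, 2004
remaining = 106     # estimated cost of remaining projects
anticipated = 209   # total anticipated appropriations through fiscal year 2008

for variance in (0.0, 0.30, 0.50):  # Construction Industry Institute range
    total = spent + remaining * (1 + variance)
    gap = total - anticipated
    status = "shortfall" if gap > 0 else "slack"
    print(f"+{variance:.0%} on remaining work: total ${total:.0f} million ({status} of ${abs(gap):.0f} million)")

Even the low end of the variance range would exhaust the anticipated appropriations, which is why we believe the planned funding may not be sufficient.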
Center officials have also said that future federal funding below the levels identified in the Building Plan would require some projects to be delayed. Without sufficient information in the Building Plan on the prioritization of projects, congressional decision makers will not be able to gauge the Center’s progress in implementing the Building Plan or the impact of funding decisions on individual capital projects. The Kennedy Center reports monthly to the Office of Management and Budget (OMB) the status of individual projects and budget information for ongoing projects. Specifically, for each project, it includes the estimated budget, expenditures to date, and changes in project schedule, which could be used to determine whether these projects are on budget and on schedule. However, the report does not include information for projects in future years. Including information on planned projects as well as ongoing projects in the Building Plan would ensure that the Kennedy Center is held accountable for the cost and schedule of its capital projects and is achieving the goals of the Building Plan. Such information could also help the Kennedy Center Board support its requests for appropriations and explain the potential effect on the implementation of the Building Plan if lesser amounts are appropriated. In addition to lacking sufficient information on which to gauge the Kennedy Center’s progress in implementing the Building Plan, the Building Plan has not been updated annually as required in the John F. Kennedy Center Act Amendments of 1994. The Kennedy Center recognizes that annually updating and implementing the Building Plan could help guard against a recurrence of severe deterioration of the facility and, over the long term, should reduce the public costs of operating and maintaining the monument. According to a Kennedy Center official, the Center has continued to implement the December 2002 Building Plan but has not updated it because officials did not believe there had been significant changes at the Center and the plan was still applicable. The Kennedy Center has recently hired a new Director of Capital Projects, who expects to issue an update to the Building Plan by the end of 2004. In fiscal year 2004, the Kennedy Center received about $16 million for operations and maintenance costs for the existing Center and will likely need additional federal appropriations for O&M expenses when the new plaza project is complete. We calculate that additional annual O&M costs for the proposed new plaza and two buildings could range from $6 million to $11 million in current dollars. However, this preliminary estimate will likely change before fiscal year 2013, the first year annual O&M funds may be needed. The plaza project is in the early planning phase, and decisions made about the project through the design phase will affect actual O&M costs. For example, the planning phase could result in smaller buildings and a plaza with less square footage, reducing O&M costs. The Kennedy Center currently receives federal appropriations for O&M costs at the existing Center. O&M refers to the activities that keep a facility running on a daily basis and the routine maintenance required by the facility’s use. Specifically, O&M includes costs for such items as utilities, daily cleaning and maintenance for the building and grounds, minor repair and maintenance, security, and salaries for support staff. 
In fiscal year 2004, the Kennedy Center received over $16 million in federal funds for O&M related to the existing Center. On the basis of data from a survey of museum facility management practices and Kennedy Center data, we calculate that the potential O&M costs for the proposed plaza project could range from $6 million to $11 million, in current dollars. Kennedy Center officials said they have not formally estimated O&M costs for the proposed plaza project because it is in the early planning phase and decisions on the design of the buildings, which can affect O&M costs, have not been finalized. They expect to estimate O&M costs after the project is designed. However, for purposes of our report, Kennedy Center officials used current O&M costs and O&M cost data obtained from an existing survey of six museums in Washington, D.C., to estimate that O&M costs for the proposed project could range from $15 to $20 per gross square foot of space, in current dollars. Using a slightly different set of assumptions, we estimated O&M costs for the proposed project could be $28 per gross square foot of space. O&M costs are usually estimated on the basis of cost per square foot; if the size of the project changes, the O&M estimate is easily adjusted. O&M rates also vary by type of space because different types of space have different maintenance needs. For example, cleaning and maintaining private office space is generally less expensive than cleaning and maintaining space open to the public that gets more traffic and would require items such as carpeting to be replaced more often. Building industry data are available for the average O&M cost for office space. However, the current plaza project plan indicates that a combination of office, museum-quality exhibition space, and rehearsal space will be included in the proposed buildings. Given the combination of the different types of space in the proposed buildings, we estimated potential O&M costs per square foot for the proposed project on the basis of (1) O&M costs for the current Kennedy Center building, (2) a 2002 museum benchmarking survey of facility management practices, and (3) Kennedy Center officials’ estimates. Using these sources, estimates for the potential O&M costs of the buildings were developed as follows: Kennedy Center officials determined that the minimum potential rate for O&M would be based on the rate for the current building. The Center currently pays about $15 per gross square foot for O&M, based on the size of the current facility (1.1 million gross square feet). Kennedy Center officials provided O&M data for six Washington, D.C., museums based on a 2002 museum benchmarking survey of facility management practices conducted by Facility Management Services Ltd., a consulting practice specializing in facility management. The survey obtained information on the costs per square foot of space for five categories of O&M services—janitorial, utilities, building maintenance, exterior grounds maintenance, and building security—for each of the six museums. The O&M costs per square foot of space for each of these categories varied widely among the museums. For example, building maintenance costs ranged from $3.38 to $22.88 per square foot of space. Building maintenance costs can vary depending on the type of building materials used to construct the building and the type of equipment inside the building. Similarly, building security costs ranged from $2.66 to $23.43 per square foot of space. 
Factors that could affect building security costs include the value of the museum’s contents (e.g., fine art) and the location of the facility. Because the proposed new plaza buildings are still in the early planning phase and many factors, including the size of the proposed buildings and the types of building materials used, could change before designs are finalized, we estimated the potential O&M rate for the proposed buildings by averaging the aggregate O&M costs for the six museums. This resulted in an O&M rate of $28 per gross square foot. Kennedy Center officials also estimated the potential O&M rate for the proposed buildings using the museum data described above, and adjusted the data based on such factors as the size of the other museums relative to the proposed new buildings and plaza, estimates from the Kennedy Center’s current janitorial provider, and the difference in security levels needed at the two proposed buildings. In addition, Kennedy Center officials discounted the information from the museums with the highest and lowest O&M costs and projected O&M costs on the basis of data from the other four museums. As a result, Kennedy Center officials estimated that the average O&M rate for the two buildings could be $20 per gross square foot. On the basis of the potential O&M rates—$15, $20, and $28 per gross square foot—and the current proposed size of the two new buildings—a total of about 402,000 gross square feet—we calculate that total O&M for the new plaza and buildings could range from $6 million to $11 million annually, in current dollars. The plaza and buildings project is in the early stages of the planning phase, and many factors could affect the actual O&M costs. Some of the factors that will affect O&M costs are within the control of the Kennedy Center, and others are not. Examples of factors within the Kennedy Center’s control that may affect O&M costs are as follows: The current plaza proposal includes a large fountain located above a roadway that connects the Kennedy Center to the National Mall. Fountains are expensive to maintain, and locating the fountain above a roadway could present additional security risks from the traffic below, which may increase security costs. The size of the plaza and buildings has not yet been finalized. A reduction or increase in the size of the plaza and buildings would have a direct effect on O&M costs. The selection of building materials, such as the current plan to use a large amount of glass on the outside of the buildings, will affect O&M costs. Buildings with a large amount of glass on the outside are more expensive to cool due to the heat that is absorbed by the glass. In addition, glass is more expensive to clean than other materials, such as brick. The plaza and buildings project is not expected to be occupied until 2013, and economic factors that are not within the Kennedy Center’s control may affect actual O&M costs. For example, utility and labor rates have generally increased at a higher rate than the rate of inflation. It is difficult to anticipate these rates so far in the future. Furthermore, the actual O&M costs may not be known until the buildings have been in operation for at least one annual cycle of plaza use and of heating and cooling the new buildings. At the end of that cycle, the Kennedy Center’s appropriations request should be based on the actual O&M costs it incurred. As discussed earlier in this report, Congress currently funds Kennedy Center capital improvement projects not related to performances. 
Given the current precedent of providing funding for capital improvement projects at the Center, Congress may also be expected to provide additional funds in the future for capital improvement costs associated with the plaza project. The requirement to develop and annually update a Comprehensive Building Plan was intended to help improve management of the Kennedy Center’s capital projects and, over the long term, help reduce the public costs of operating and maintaining the facility. The current plan anticipated that projects addressing life safety and accessibility issues—needed to meet current codes—would be completed by the end of fiscal year 2008. However, it is unlikely that the Kennedy Center’s Building Plan, including life safety projects in some areas, will be fully implemented by 2008. This is due, in part, to changes in the sequence of its planned projects. Furthermore, the current Building Plan has not been updated since December 2002, and it does not provide individual project budgets or prioritize capital projects; thus, it is unclear which projects might be delayed or not implemented due to budget constraints. It is also not possible to determine from the Building Plan whether individual projects are completed within project budget estimates. Including this information in its annual Building Plan, as well as the progress that has been made in renovating the Center and in meeting life safety and accessibility codes, would make the Kennedy Center’s use of federal funds to carry out its capital renovations more transparent and would make the Kennedy Center more accountable for those funds. This information could also help the Kennedy Center support its request for federal funding and communicate more clearly the potential impact of federal funding decisions on the day-to-day operations of the Kennedy Center facility. To help congressional decision makers oversee the capital projects at the Kennedy Center and make funding decisions, we recommend that the President of the Kennedy Center, in conjunction with the Chairman of the Board of Trustees, annually update the Comprehensive Building Plan, as required, and include (1) the prioritization of projects, (2) project status, and (3) updated budget information for planned and ongoing projects. We provided a draft copy of this report to the President of the Kennedy Center. On August 18, 2004, the Kennedy Center President provided us with written comments on behalf of the trustees and staff (see app. III). The President agreed with our recommendation and stated that Kennedy Center staff plan to implement it immediately. Kennedy Center officials also provided technical comments that have been incorporated throughout the report, as appropriate. The letter also emphasized that the Comprehensive Building Plan is primarily a management tool and that other reports are the vehicles for keeping Congress informed of the Center’s progress in its renovation program. As part of our work, we reviewed the reports to the Operations Committee and a monthly report to OMB. While some of the information we are recommending be included in the Comprehensive Building Plan is provided in these reports, other recommended project information is not. For example, neither report provides information on project prioritization or projects planned for the future. 
The project information that is provided is not presented in a format that allows stakeholders to easily track the overall progress of the Kennedy Center renovations or specific capital projects from the Building Plan. It also appears that this information is not being conveyed to all congressional stakeholders. We believe that having project-specific information available in one document that is provided to stakeholders annually, as described in our recommendation, will help congressional decision makers and other stakeholders oversee the capital projects at the Kennedy Center.

The letter also disagreed with how we counted the projects identified in the Comprehensive Building Plan. It stated, however, that the Comprehensive Building Plan has not been consistent in how it has identified projects and that the most recent plan lists design and implementation as two separate projects, while the earlier plans listed them as one project. As we noted in our report, the Comprehensive Building Plan does not clearly explain how projects are restructured or reported in different updates. Since our objective was to compare the actual projects undertaken with those reported in the Comprehensive Building Plan, we have identified and numbered the projects as listed in the plan and its updates. We understand the Kennedy Center's concern that how the projects are counted can change the reported percentage of projects completed. Thus, we have deleted the reference to the percentage of projects completed as an indication of the Kennedy Center's progress in implementing the Comprehensive Building Plan and report only the actual numbers of projects. We believe that if the Kennedy Center implements our recommendation and provides clearer project information, this type of analysis should be possible in the future.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 5 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Chairman of the Kennedy Center Board of Trustees, and the President of the Kennedy Center. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me on (202) 512-2834 or at [email protected]. See appendix IV for a list of the major contributors to this report.

To determine the amount of federal appropriations the John F. Kennedy Center for the Performing Arts (Center) requested and received for capital projects for fiscal years 1995 to 2004, we reviewed the Kennedy Center's annual budget justifications to Congress, the Comprehensive Building Plan and its updates, federal authorization and appropriation laws, and the Kennedy Center's audited financial statements. The Kennedy Center's budget justifications to Congress contain its requests for federal funding for capital repairs and rehabilitation. We compared the amount the Kennedy Center requested with the actual appropriations it received from fiscal year 1995 through 2004, taking into account rescissions to budget authority. We also compared the amounts of the appropriations identified in the law with the amounts identified in the Kennedy Center's audited financial statements and verified rescission amounts with Kennedy Center officials.
Finally, we relied on the Kennedy Center's 2002 Comprehensive Building Plan to determine the amount of federal appropriations the Center expects to request through fiscal year 2008. We also reviewed the budget justifications and Building Plans to determine why changes were made to the Kennedy Center's funding requests.

To determine the status of the Kennedy Center's Comprehensive Building Plan and its updates, we reviewed the initial 1995 Comprehensive Building Plan and subsequent updates to the plan. We developed a list of capital projects the Kennedy Center reported it was planning to complete. We discussed with Kennedy Center officials the project status and cost information for these projects and for other projects the Center had completed with federal funds appropriated for capital projects since fiscal year 1995. We reviewed the Kennedy Center's annual financial statements and supporting capital project schedules and determined that the information provided for project costs appeared reasonable based on the annual capital expenditures and capital projects in progress listed in the financial statements. We toured the Kennedy Center and observed many of the projects listed in the Building Plan. We also interviewed the Kennedy Center's external auditor to determine what testing was performed on internal controls over federal expenditures. On this basis, we determined that project status and cost data were sufficiently reliable for the purpose of our review. Finally, we compared the information the Kennedy Center provided on the status of the capital projects in the Building Plan and its updates with the capital appropriations received since fiscal year 1995 to evaluate the likelihood that the plan would be fully implemented by the end of fiscal year 2008. We did not evaluate whether individual capital projects were completed within their original budgets or on schedule.

To determine the potential impact of the Center's proposed plaza project on the need for future federal funds, we calculated the potential operations and maintenance (O&M) costs based on data from a survey of museum facility management practices and the Kennedy Center's projected O&M rates. Kennedy Center officials provided data for five categories of O&M expenses for six Washington, D.C., museums based on a 2002 museum benchmarking survey of facility management practices conducted by Facility Management Services Ltd. Since one of the buildings in the proposed plaza project will contain space that Center officials said would be maintained at the same level as a museum, we agreed that it was appropriate to use museums in Washington, D.C., to estimate the potential O&M costs. We estimated the potential O&M rate for the proposed plaza project by averaging the aggregate O&M costs for the six museums. We used the average O&M costs to develop our estimate because the proposed plaza project is still in the early planning phase and many factors, including the size of the buildings, could change before designs are finalized.

The Kennedy Center officials' projections were based on the Center's current O&M costs and the O&M cost data for the six Washington, D.C., museums described above. Kennedy Center officials believe that the new buildings will cost at least as much to maintain and operate per square foot as the current Center; they could not identify any category of O&M expenses they believed would be less expensive than in the current building.
The Center officials said they disregarded the information from the museums with the highest and lowest O&M costs and projected O&M costs on the basis of data from the other four museums. According to the officials, they adjusted the rates for the different categories that make up the O&M costs based on a number of factors, such as the size of the other museums relative to the proposed new plaza project. For example, the Center officials said they increased the expected cost of grounds maintenance over that of the other museums because the proposed plaza project includes a large fountain, which will be expensive to maintain. In addition, because the plaza will be suspended over a roadway, all of the plants will have to be in containers, which will also increase operations and maintenance costs. Since there is a wide variety of types of proposed space in the buildings, ranging from museum space to rehearsal rooms, the Kennedy Center officials said they averaged rates based on the different types of space.

We independently researched building industry groups' rates, including those of the Building Owners and Managers Association and the International Facility Management Association, but did not identify any O&M rates that would have been appropriate for the type of space planned for the proposed plaza buildings. We discussed with Facility Management Services Ltd. the methodology used in conducting the museum survey and the measures incorporated into the survey to maximize the accuracy of the data. We determined that the data on O&M costs for the six Washington, D.C., area museums are reliable for purposes of this report and that the Kennedy Center's estimated rates were reasonable given the currently available information. We conducted our work from January 2004 through July 2004 in accordance with generally accepted government auditing standards.

The following tables show the cost or expected cost of the Kennedy Center's capital projects as identified in its Comprehensive Building Plan and its updates. The tables show those projects that are completed, ongoing, or planned for future years, and those that have not been implemented or have been postponed.

In addition to the individual named above, Omar Beyah, Maria Edelstein, Brandon Haller, Nancy Lueke, Julie Phillips, and Susan Michal-Smith made significant contributions to this report.

Since fiscal year 1995, the John F. Kennedy Center for the Performing Arts (Center) has been responsible for implementing capital improvement projects and operations and maintenance activities and has received federal funding for this work.
The Kennedy Center's Comprehensive Building Plan identifies capital projects needed to renovate the Center and bring it into compliance with current life safety and accessibility codes. The Kennedy Center is currently planning to construct, with private funds, two new buildings to open in 2013 on a new plaza to be built adjacent to the existing facility, and it expects federal funding to operate and maintain these buildings. GAO was asked to examine (1) how much the Center has received in federal appropriations for capital projects, (2) the status of the Comprehensive Building Plan and its updates, and (3) the potential impact of the Center's plaza project on the need for future operations and maintenance funding.

For fiscal years 1995 through 2004, the Kennedy Center received approximately $152 million in federal appropriations for capital projects identified in its Comprehensive Building Plan. According to its fiscal year 2002 Comprehensive Building Plan, the Kennedy Center will need an additional $57 million in federal appropriations from fiscal year 2005 through fiscal year 2008 to complete its planned capital projects.

The Kennedy Center has completed many of the capital projects identified in the Comprehensive Building Plan and has many more ongoing. However, we do not expect the Center to be able to complete all of the capital projects identified in the plan by fiscal year 2008, in part because the Kennedy Center reprioritized the sequence of its planned projects to minimize disruptions to its patrons. Several of the projects that will likely not be completed include components, such as the installation of sprinklers, that are important to meeting the Kennedy Center's goal of bringing the facility into compliance with current life safety codes. However, the Comprehensive Building Plan does not discuss changes to its prioritization of projects or the impact of those changes on completing the planned renovations. In addition, the updates to the plan do not include information on the status of projects identified in earlier plans or provide budget information for individual projects. As a result, the Comprehensive Building Plan is of limited use for understanding the Kennedy Center's progress in completing its planned renovations.

Operations and maintenance costs for the plaza project, including the two new buildings, could range from $6 million to $11 million annually, in current dollars, based on data from a survey of museum facility management practices and Kennedy Center data. The Kennedy Center expects to request additional annual federal appropriations for these costs. However, because the project is currently in the early planning phase, the operations and maintenance estimate could change as designs are finalized. In fiscal year 2004, the Center received about $16 million for the operations and maintenance costs of the existing facility.
Since the creation of the SSN, the number of federal agencies and others that rely on it has grown well beyond the original purpose, in part because a number of federal laws authorize or require SSN use. Additionally, the advent of computerized records has further increased reliance on SSNs. This growth in the use and availability of SSNs is important because SSNs are often the "identifier" of choice among thieves who steal other individuals' identities. Although no single federal law regulates overall use and disclosure of SSNs by governments, several federal laws limit federal agencies' use and disclosure of the number in certain circumstances. Also, state laws may vary in terms of the restrictions imposed on SSN use and disclosure. Moreover, some records that contain SSNs are considered part of the public record and, as such, are routinely made available to the public for review.

SSA is the federal agency responsible for issuing SSNs, which are used to track workers' earnings and eligibility for Social Security benefits. Legislation enacted in 1935 created SSA and made the agency responsible for implementing a social insurance program designed to pay benefits to retired workers to ensure a continuing portion of income after retirement. Because the amount of these benefits was based, in part, on the amount of a worker's earnings, SSA needed a system for employers to report, and for the agency to track, earnings by individual worker. In 1936, SSA created a numbering system designed to provide a unique identifier, the SSN, to each individual. Workers are now required by law to provide SSA their number when they apply for benefits. As of December 1998, SSA had issued 391 million SSNs.

Since the creation of the SSN, other entities in both the private and public sectors have begun using SSNs, in part because of federal requirements. Widespread SSN use in government began with a 1943 Executive Order issued by President Franklin D. Roosevelt requiring that, whenever a federal agency needed an identification system for individuals, it use the SSN exclusively rather than set up a new system. In later years, the number of federal agencies and others relying on the SSN as a primary identifier escalated dramatically, in part because a number of federal laws were passed that authorized or required its use for specific activities, as shown in table 1. In many instances, the laws required that SSNs be used to determine individuals' eligibility for certain federally funded program services or benefits, or the SSN served as a unique identifier for such government-related activities as paying taxes or reporting wages earned. In some cases these statutes require that state and local governmental entities collect SSNs.

Private businesses, such as financial institutions and health care service providers, also frequently ask individuals for their SSNs. In some cases, they require the SSN to comply with federal laws, but at other times, these businesses routinely choose to use SSNs to conduct business. SSNs are a key piece of identification in building credit bureau databases, extracting or retrieving data from consumers' credit histories, and preventing fraud. Businesses routinely report consumers' financial transactions, such as charges, loans, and credit repayments, to credit bureaus. A representative for the credit bureaus estimated that 80 percent of these transactions include SSNs.
Although the representative reported that credit bureaus use other identifiers, such as names and addresses, to build and maintain individuals' credit histories, credit bureaus view the SSN as one of the most important identifiers for ensuring that correct information is associated with the right individual, because the SSN does not change as a name or address would. The credit bureaus' representative told us that without the SSN, or a similarly stable identifier such as a biometric identifier, credit bureaus could still conduct business, but the accuracy of individuals' credit records would be greatly reduced. The fundamental goal of credit bureaus is ensuring that the credit information provided to those who grant consumers credit is accurate; the less accurate the information, the less value it has to those who grant credit. The representative told us that until other stable identifiers like biometrics gain widespread use, credit bureaus view the SSN as the key tool for ensuring the accuracy of consumer credit histories.

The advent of computerized record keeping has implications for the availability of SSNs and other sensitive data. Government entities are beginning to make their records electronically available over the Internet. Moreover, the Government Paperwork Elimination Act of 1998 requires that, where practicable, federal agencies provide by 2003 for the option of the electronic maintenance, submission, or disclosure of information. State government agencies have also initiated Web sites to address electronic government initiatives. Moreover, continuing advances in computer technology and the ready availability of computerized data have spurred the growth of new business activities that involve compiling, and selling, vast amounts of personal information about members of the public, including SSNs.

This growth in the use of SSNs is important to individual SSN holders because these numbers, along with names and birth certificates, are among the three personal identifiers most often sought by identity thieves. Identity theft is a crime that can affect all Americans. It occurs when an individual steals another individual's personal identifying information and uses it fraudulently. For example, SSNs and other personal information are used to fraudulently obtain credit cards, open utility accounts, access existing financial accounts, commit bank fraud, file false tax returns, and falsely obtain employment and government benefits. SSNs play an important role in identity theft because they are used as breeder information to create additional false identification documents, such as drivers' licenses. Most often, identity thieves use SSNs belonging to real people rather than making numbers up; however, on the basis of a review of identity theft reports, victims usually (75 percent of the time) did not know where or how the thieves got their personal information. In the 25 percent of cases when the source was known, the personal information, including SSNs, usually was obtained illegally. In these cases, identity thieves most often gained access to this personal information by taking advantage of an existing relationship with the victim. The next most common means of gaining access were stealing information from purses, wallets, or the mail. In addition, individuals can obtain SSNs from their workplace and use them or sell them to others.
Finally, SSNs and other identifying information can be obtained legally through Internet sites maintained by both the public and private sectors and from records routinely made available to the public by government entities and courts. Because the sources of identity theft cannot be more accurately pinpointed, it is not possible at this time to determine whether SSNs that are used improperly are obtained most frequently from the private sector or the government.

Recent statistics collected by federal and consumer reporting agencies indicate that the incidence of identity theft appears to be growing. The Federal Trade Commission (FTC), the agency responsible for tracking identity theft, reports that complaint calls from possible victims of identity theft grew from about 445 calls per week in November 1999, when it began collecting this information, to about 3,000 calls per week by December 2001. However, FTC noted that this increase in calls might also, in part, reflect enhanced consumer awareness. In addition, SSA's Office of the Inspector General, which operates a fraud hotline, reports that allegations of SSN misuse increased from about 11,000 in fiscal year 1998 to more than 65,200 in fiscal year 2001. Additionally, SSA reported that almost 39,000 other allegations of program fraud during fiscal year 2001 also included an element of SSN misuse. Most of these allegations relate to identity theft. However, some of the reported increase may be a result of growth in the number of staff SSA assigned to field calls to the fraud hotline during this period; staffing increased from 11 to over 50, which allowed personnel to answer more calls. Also, officials from two of the three national consumer reporting agencies report an increase in the number of 7-year fraud alerts placed on consumer credit files, which they consider to be reliable indicators of the incidence of identity theft. Finally, it is difficult to determine how many individuals are prosecuted for identity theft because law enforcement entities report that identity theft is almost always a component of other crimes, such as bank fraud or credit card fraud, and may be prosecuted under the statutes covering those crimes.

No single federal law regulates the overall use or restricts the disclosure of SSNs by governments; however, a number of laws limit SSN use in specific circumstances. Generally, the federal government's overall use and disclosure of SSNs are restricted under the Freedom of Information Act (FOIA) and the Privacy Act. Broadly speaking, the purpose of the Privacy Act is to balance the government's need to maintain information about individuals with the rights of individuals to be protected against unwarranted invasions of their privacy by federal agencies. The Social Security Act Amendments of 1990 also provide some limits on disclosure, and these limits apply to state and local governments as well. In addition, a number of federal statutes impose certain restrictions on SSN use and disclosure for specific programs or activities. At the state and county levels, each state may have its own statutes addressing the public's access to government records and privacy matters; therefore, states may vary in terms of the restrictions they impose on SSN use and disclosure. Table 2 shows key laws that may affect SSN disclosure at the federal, state, and county levels.
For more information on the specific provisions in the federal laws, including a summary of the privacy principles that underlie the Privacy Act, see appendix II.

In addition, a number of laws provide protection for sensitive information, such as SSNs, when it is maintained in computer systems and other government records. Most recently, the Government Information Security Reform provisions of the Fiscal Year 2001 Defense Authorization Act require that federal agencies take specific measures to safeguard computer systems that may contain SSNs. For example, federal agencies must develop agency-wide information security management programs, establish security plans for computer systems, and conduct information security awareness training for employees. These laws do not apply to state and local governments; however, in some cases state and local governments have developed their own statutes or put requirements in place to similarly safeguard sensitive information, including SSNs, kept in their computer systems.

In some cases, government entities, particularly at the state and county levels, maintain public records that are routinely made available to the public for inspection. For state and county executive branch agencies, state law generally governs whether and under what circumstances these records are made available to the public, and these laws vary from state to state. Records may be made available for a number of reasons. These include the presumption that citizens need government information to assist in oversight and to ensure that government is accountable to the people. In addition, some government agencies, such as county clerks or recorders, exist primarily to create or maintain records that assist the public and private sectors in the conduct of business, legal, or personal affairs. These records may contain SSNs.

Certain records maintained by the federal, state, and county courts are also made available to the public. In principle, these records are open to aid in preserving the integrity of the judicial process and to enhance public trust and confidence in it. Courts are generally not subject to FOIA or other open records laws. At the federal level, access to court documents generally has its grounding in common law and constitutional principles. In some cases, public access is also required by statute, as is the case for papers filed in a bankruptcy proceeding. As with federal courts, requirements regarding access to state and local court records may have a state common law or constitutional basis or may be based on state laws. Although states' laws may vary, custodians of court records generally must identify a statute, court rule, case law, or common law basis to preclude public access to a particular record; otherwise, the record is presumed to be accessible and must be disclosed to the public upon request.

SSNs are widely used by federal, state, and county government agencies when they provide services and benefits to the public. These agencies use SSNs both to manage their records and to facilitate data sharing with others. They share SSNs and other personal information to verify eligibility for benefits, collect debts owed the government, and conduct or support research and evaluation. In addition to using SSNs for program purposes, many of these agencies also reported using their employees' SSNs for activities such as payroll, wage reporting, and providing employee benefits.
As a result of this widespread SSN usage, these agencies occasionally display SSNs on documents that may be viewed by others who do not have a need for this personal information.

Most of the agencies we surveyed at all levels of government reported using SSNs extensively to administer their programs. As shown in figure 1, more agencies reported using SSNs for internal administrative purposes (that is, to identify, retrieve, and update their records) than for any other purpose. SSNs are so widely used for this purpose, in part, because each number is unique to an individual and does not change, unlike other personal identifying information, such as names and addresses. For this reason, SSNs can provide a convenient and efficient means of managing records, particularly electronic records, that catalog the services or benefits government agencies provide to individuals or families.

Many agencies also use SSNs to share information with other entities to bolster the integrity of the programs they administer. For example, individuals are often asked to report their income, citizenship status, and household composition to determine their eligibility for government benefits or services. To avoid paying benefits or providing services or loans to individuals who are not eligible for them, agencies use applicants' SSNs to match the information the applicants provide with information in other databases, such as those of other federal benefit-paying agencies, state unemployment agencies, the Internal Revenue Service (IRS), or employers. As unique identifiers, SSNs help ensure that the agency is obtaining or matching information on the correct person. As shown in figure 1, the majority of agencies at all three levels of government reported sharing information containing SSNs for the purpose of verifying an applicant's eligibility for services or benefits. These data-sharing activities can help save the government and taxpayers hundreds of millions of dollars, and in some cases the Congress has recognized the benefits of this data sharing for federally funded programs and has either explicitly permitted or required agencies to share data for these purposes. Examples of SSN use for verifying and monitoring eligibility include the following (a simplified sketch of this kind of SSN-based matching appears after the examples):

- Individuals confined to a correctional facility for at least 1 full month are ineligible to continue receiving federal Supplemental Security Income (SSI) program benefits. SSA, the federal agency that administers this program, uses SSNs to match records with state and local correctional facilities to identify individuals for whom the agency should terminate benefit payments. We reported that between January and August 1996, the sharing of prisoner data between SSA and state and local correctional facilities helped SSA identify about $151 million in overpayments already made and prevented about $173 million in additional overpayments to ineligible prisoners.
- When individuals apply for Temporary Assistance for Needy Families (TANF), a program designed to help low-income families, the law requires them to provide program administrators their SSNs and information about their income and resources. Some agencies that administer this program use SSNs to share data to determine applicants' and current recipients' eligibility and to verify self-reported information. The state of New York alone estimated that by checking state wage data records, it saved about $72 million in unpaid benefits between January and September 1999.
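To illustrate why a unique, stable identifier makes this kind of verification practical, the sketch below checks self-reported income against wage records keyed by SSN. It is a minimal illustration, not any agency's actual system: the records, field names, dollar amounts, and tolerance are all hypothetical.

```python
# Hypothetical illustration of SSN-keyed eligibility verification.
# All records and thresholds are invented for this sketch.

applicants = [
    {"ssn": "123-45-6789", "name": "A. Smith", "reported_income": 9_500},
    {"ssn": "987-65-4321", "name": "B. Jones", "reported_income": 12_000},
]

# Wage data from another agency, indexed by SSN for direct lookup.
wage_records = {
    "123-45-6789": 9_500,
    "987-65-4321": 31_000,  # does not match the self-reported figure
}

def flag_discrepancies(applicants, wage_records, tolerance=1_000):
    """Return applicants whose self-reported income differs from the
    wage record for their SSN by more than the tolerance."""
    flagged = []
    for applicant in applicants:
        wages = wage_records.get(applicant["ssn"])
        if wages is not None and abs(wages - applicant["reported_income"]) > tolerance:
            flagged.append(applicant)
    return flagged

for applicant in flag_discrepancies(applicants, wage_records):
    print(f"Review case: {applicant['name']}")  # prints: Review case: B. Jones
```

Because an SSN, unlike a name or address, does not change, the lookup ties each wage record to the right applicant; matching on names alone could mislink records when applicants share names or move.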
SSNs can also help ensure program integrity when they are used to collect delinquent debts, and some agencies at each level of government reported sharing data containing SSNs for this purpose. Individuals may owe such debts to government agencies when they fall behind in loan repayments, have underpaid taxes, or are found to have fraudulently received benefits. For example:

- The Department of Education uses SSNs to match data on defaulted education loans with the National Directory of New Hires. This database, implemented in October 1997, contains the names and SSNs, among other information, of individuals whom employers reported hiring after that date. As a result of this matching, which began in fiscal year 2001, the department reported collecting $130 million from defaulted student loan borrowers in 2001.
- The Department of the Treasury, as the federal government's lead agency for debt collection, also uses the SSN. For example, when an individual falls behind in payments owed the federal government, the agency owed the debt provides Treasury with the debtor's SSN and debt information. Treasury then uses the SSN to determine whether the individual owes the federal government money before making certain payments, such as tax refunds. If Treasury finds the individual is delinquent in paying a debt to the government, it will offset certain payments due the individual to satisfy the debt. Using this approach, Treasury used tax refund offsets to collect over $1 billion in federal nontax debt in 2001.

Certain statistical agencies, which are responsible for collecting and maintaining data for statistical programs that are required by statute, also make use of SSNs. In some cases, these data are compiled using information provided for another purpose. For example, the Bureau of the Census prepares annual population estimates for states and counties using individual income tax return data linked over time by SSN to determine migration rates between localities. For its Survey of Income and Program Participation, the bureau asks survey participants for various demographic characteristics and the types of income they receive. The bureau also asks participants to provide their SSNs, informing them that the SSNs will be used to obtain information from other government agencies to avoid asking for information already reported to the government. As is the case for all government information collections, OMB must approve the collection of data for such statistical and research purposes.

In addition, SSNs, along with other program data, are sometimes used for research and evaluation. SSNs provide government agencies and others with an effective mechanism for linking data on program participation with data from other sources to help evaluate the outcomes or effectiveness of government programs. This information can prove invaluable to program administrators as well as policymakers. As shown in table 3, more than one-third of federal, state, and county agencies combined reported using SSNs to conduct internal research or program evaluation, and almost one-fifth of state agencies provide data containing SSNs to outside researchers. Examples of SSN use for evaluation and research include the following:

- As one of its many uses, Census may match Survey of Income and Program Participation responses with data contained in records for programs such as TANF, Supplemental Security Income, and food stamps. Linking various data by SSN helps policymakers assess the extent to which these federal programs together assist low-income individuals.
- Health departments may provide SSN information to outside researchers, including universities or foundations, or to other organizations, such as the National Center for Health Statistics, which compile national data on subjects such as infant births and mortality.

In addition to the above reasons for sharing data, which focus primarily on program integrity and research, some agencies use SSNs as a means of sharing data to improve services. For example, in light of major changes to the nation's welfare program in 1996, welfare agencies are focusing on moving needy families toward economic independence and are drawing on numerous federal and state programs to provide a wide array of services, such as child care, food stamps, and employment and training. Sharing data can help them identify what services beneficiaries have received and what additional services are available or needed.

All government agencies that administer programs and share records containing individuals' SSNs with other entities reported sharing SSNs with at least one other government agency. Aside from sharing with other government agencies, the largest percentage of federal and state program agencies report sharing SSNs with contractors, and a relatively large percentage of county program agencies report sharing with contractors as well, as shown in table 3. Agencies across all levels of government use contractors to help them fulfill their program responsibilities. Contractors most frequently determine eligibility for services, provide services, conduct data processing activities, and perform research and evaluation. In addition to sharing SSNs with contractors, government agencies also share SSNs with private businesses, such as credit bureaus and insurance companies, as well as with debt collection agencies, researchers, and, to a lesser extent, private investigators.

All government personnel departments we surveyed reported using their employees' SSNs to fulfill at least some of their responsibilities as employers. As with many of the program-related SSN uses described earlier, these employer uses involve data sharing among governments and other agencies. Personnel departments responding to our questionnaire said they use SSNs to help them maintain internal records and provide employee benefits. To provide these benefits, employers often share data on employees with other entities, such as health care providers or pension plan administrators. As an example, employers submit employees' SSNs, along with certain information about the employees, to health insurers and retirement plan administrators. Health insurers may use the SSNs to identify enrollment in health plans and verify eligibility for payments for health services. Retirement plan administrators use the SSN to record contributions in the correct employee accounts, and when they make payments to individuals, they are required to report the payments to the IRS using the individuals' SSNs. In addition, employers are required by law to use employees' SSNs when reporting wages. Wages are reported to SSA, and the agency uses this information to update the earnings records it maintains for each individual. These earnings ultimately determine eligibility for and the amount of Social Security benefits.
After processing these reported wages, SSA provides the information to the IRS, which uses it to monitor individuals' compliance with federal personal income tax rules. The IRS uses SSNs to match these employer wage reports with the amounts individuals report on personal income tax returns. Finally, federal law requires that states maintain employers' reports of newly hired employees, identified by SSNs. States must forward this information to a national database that is used by state child support agencies to locate parents who are delinquent in child support payments.

In the course of delivering their services or benefits, many government agencies occasionally display SSNs on documents that may be viewed by others, some of whom may not have a need for this personal information. Figure 2 shows a variety of ways SSNs are displayed, as reported in our survey by federal, state, and county personnel departments. When SSNs appear on payroll checks, rather than on the more easily safeguarded pay stub, any number of individuals can view the employee's SSN, depending on where the check is cashed. To receive services at government rates, government employees may be required to provide hotel employees and others with documents, such as travel orders or tax exemption forms, that display their SSNs. Some federal agencies and a few state and county personnel departments reported displaying employees' SSNs on their employee badges. Notably, the Department of Defense (DOD), which has over 2.7 million active and reserve military personnel, displays SSNs on its identification cards for these personnel. According to DOD officials, the Geneva Convention suggests that military personnel have an identification number displayed on their identification cards, and DOD has chosen to use the SSN for this purpose. On the state level, the Department of Criminal Justice in one state, which has about 40,000 employees, displays SSNs on all employee identification cards. According to that state's Department of Criminal Justice officials, some of their employees have taken actions such as taping over their SSNs so that prison inmates and others cannot view this personal information.

SSNs are also displayed on documents that are not employee related. For example, some benefit programs display the SSN on benefit checks and eligibility cards, and over one-third of federal respondents reported including the SSN on official letters mailed to participants. Further, some state institutions of higher education display students' SSNs on identification cards. Finally, SSNs are sometimes displayed on business permits that must be posted in public view at an individual's place of business.

Agencies that use SSNs to administer programs are taking some steps to safeguard the numbers, but certain measures that could provide greater assurance that SSNs are secure are not universally in place at any level of government. First, when federal, state, and county agencies request SSNs, they are not consistently informing SSN holders whether they must provide the SSN to receive benefits or services and how the SSN will be used. In addition, although some agencies are using identifiers other than SSNs in their records, most report it would be difficult to stop using SSNs. When agencies do use the SSN, we found weaknesses in their information systems security at all levels of government, which indicates SSNs may be at risk of improper disclosure.
Finally, although some agencies are taking action to limit the display of SSNs on documents that are not intended to be public but may be viewed by others, these actions are sometimes taking place in a piecemeal manner rather than as the result of a systematic effort.

When a government agency requests an individual's SSN, the individual needs certain information to make an informed decision about whether or not to provide it. Accordingly, section 7 of the Privacy Act requires that any federal, state, or local government agency, when requesting an SSN from an individual, provide that individual with three key pieces of information: the agency must (1) tell the individual whether disclosing the SSN is mandatory or voluntary, (2) cite the statutory or other authority under which the request is being made, and (3) state what uses the government will make of the individual's SSN. This information, which helps the individual make an informed decision, is the first line of defense against improper use.

Although nearly all government entities we surveyed collect and use SSNs for a variety of reasons, many of these entities reported that they do not provide individuals the information required under section 7 of the Privacy Act when requesting their SSNs. As shown in table 4, federal agencies were more likely than state or local government agencies to report that they provided the required information to individuals when requesting their SSNs. Even so, federal agencies did not consistently provide this required information: 32 percent reported that they did not inform individuals of the statutory authority for requesting the SSN, and 21 percent reported that they did not inform individuals of how their SSNs would be used.

For federal agencies, OMB is responsible for assisting with and overseeing the implementation of the Privacy Act. Although OMB has issued guidance for federal agencies to follow in implementing the act overall, OMB's guidance does not address section 7. However, another provision of the act contains requirements similar to those of section 7, and OMB guidance does address that provision. It requires agencies to inform individuals from whom they request information of (1) the legal authority that authorizes the collection and whether disclosure is voluntary or mandatory, (2) the purposes for which the information is intended to be used, (3) the routine uses to be made of the information, and (4) the effects on the individual of not providing all or any part of the information. Agencies must provide this information on the forms they use to collect the information or on a separate form that can be retained by the individual. However, this provision differs from section 7 in important ways: it applies only to federal agencies that maintain a system of records, as defined under the act, whereas section 7 applies to all agencies at the federal, state, and local levels and contains no provision limiting its coverage to agencies maintaining a system of records.

Regarding how OMB oversees implementation of the Privacy Act, OMB officials told us that they review certain federal agency actions related to the act, such as notices placed in the Federal Register to inform the public of changes to agency systems of records; however, it is not their role to monitor day-to-day federal agency compliance with the many provisions of the act.
For this ongoing compliance monitoring, OMB officials said that they rely on agency privacy officers, general counsels, and inspectors general. In addition, under the act, individuals can bring a civil action against a federal agency requesting the SSN if they believe that the agency has not complied with the section 7 requirements and this failure to comply results in an adverse effect on the individual.

At the state and county levels of government, it is not clear who has responsibility for overseeing the section 7 requirements placed on state and local governments; in fact, some state and local officials we spoke with were unaware of the requirements. Moreover, OMB officials told us that they have not issued any implementing regulations or guidance on section 7 for state and county government agencies, and no federal agency has assumed overall responsibility for monitoring these agencies and informing them of their obligations under section 7 of the Privacy Act. According to OMB officials, their role with respect to state and local governments is limited to advising state and county officials who raise questions about the act. In addition, OMB officials work with the National Association of State Chief Information Officers and other organizations to discuss and share ideas on information management issues.

Further, unlike for the federal government, courts have disagreed on whether individuals have a right of civil action against state and county governments when these individuals believe state or county agencies are not complying with section 7 of the Privacy Act. For example, a Ninth Circuit Court of Appeals decision held that individuals do not have a right of action against state and local governments for violating the Privacy Act. Conversely, other courts have recognized implied remedies against state governments for violations of the act. For example, in Louisiana, a district court ordered that the state stop asking for SSNs as a prerequisite to voter registration, based partially on the court's determination that the Louisiana commissioner of elections was violating section 7 of the act. Similarly, a district court found that Virginia violated the act when collecting SSNs for voter registration because it did not provide the required notice when requesting individuals' SSNs.

When government agencies collect SSNs that are not part of public records, they have a number of options available to limit the risk of improper disclosure. These agencies can use numbers other than SSNs for some program activities; implement controls to ensure that when they use SSNs, the numbers are properly safeguarded; and limit the use of SSNs on documents that may be viewed by others who do not have a need to access this personal information.

Despite the widespread use of SSNs at all levels of government, not all agencies use the SSN. Some respondents (19 from state departments and 33 from county departments) reported that they do not obtain, receive, or use the SSNs of program participants, service recipients, or individual members of the public. Moreover, of those who do use the SSN, not all use it as their primary identification number for record-keeping purposes. Sixty-five percent of federal respondents use the SSN as their primary identifier, while 50 percent of state and 38 percent of county agencies reported doing so.
In addition, when agencies do use the SSN as their primary identification number, some also maintain an alternative number that is used in addition to or in lieu of the SSN for certain activities. In fact, at least one-fourth of the respondents across all levels of government said they used SSNs as the primary identifier and also assigned alternative identifiers (38 percent of federal, 30 percent of state, and 25 percent of county agencies).

There are a number of reasons why agencies use identification numbers other than SSNs. Officials from two county health departments told us that they do not require applicants for the Women, Infants, and Children program to provide their SSNs because eligibility is determined on the basis of client-provided information. Under these circumstances, program administrators do not need to use SSNs to match data to verify program eligibility. Two officials said that their county health departments use numbers the departments assign as the primary identifier. In such cases, however, health care providers may use SSNs to track patients' medical care across multiple providers or to coordinate benefit payments. Finally, the law enforcement agencies we met with are less likely to consider SSNs their primary identification number because criminals often have multiple or stolen identities and SSNs.

We asked those agencies that used SSNs as their primary identifier and did not use alternate identification numbers how difficult it would be to change their procedures to permit using different identification numbers in place of SSNs. More than 85 percent of agencies in this category at all levels of government reported that it would be somewhat or very difficult to make this change (93 percent of federal agencies, 93 percent of state agencies, and 87 percent of county agencies). The top four reported reasons why programs might have difficulty making these changes were that (1) it would prevent interfacing with the computer systems of other departments or programs that use SSNs, (2) it would be too costly, (3) the program's current software would not support the change, and (4) it would require a change in law.

When government agencies collect and use SSNs as an essential component of their operations, they need to take steps to mitigate the risk of individuals gaining unauthorized access to SSNs or making improper disclosure or use of them. As discussed earlier in this report, agencies at all levels of government use SSNs extensively for a wide range of purposes. Further, they store and use SSNs in varied formats: over 90 percent of our survey respondents reported using both hard copy and electronic records containing SSNs when conducting their program activities. When using electronic media, many employ personal computers linked to computer networks to store and process the information they collect. This extensive use of SSNs, as well as the various ways in which SSNs are stored, accessed, and shared, increases the risks to individuals' privacy and makes it both important and challenging for agencies to take steps to safeguard SSNs.

Uniform guidelines specifying what actions governments at all levels should take to safeguard personal information that includes SSNs do not exist. However, certain federal laws lay out a framework for federal agencies to follow when establishing information security programs to protect sensitive personal information, such as SSNs.
The federal framework is consistent with strategies used by those private and public organizations that we previously reported have strong information security programs. The federal framework includes four principles that are important to an overall information security program: periodically assess risk, implement policies and controls to mitigate risks, promote awareness of information security risks, and continually monitor and evaluate information security practices.

To gain a better understanding of whether agencies had in place measures to safeguard SSNs that are consistent with the federal framework, we selected eight commonly used practices found in information security programs—two for each principle. Use of these eight practices could give an indication that an agency has an information security program that follows the federal framework. We surveyed the federal, state, and county programs and agencies on their use of the following eight practices:

Periodically assess risk
- Conduct risk assessments for computer systems that contain SSNs
- Develop written security plans for computer systems that contain SSNs

Implement policies and controls to mitigate risks
- Develop written policies for handling records with SSNs
- Control access to computerized records that contain SSNs, such as assigning different levels of access and using methods to identify employees (e.g., ID cards, PINs, or passwords)

Promote awareness of information security risks
- Provide employees training or written materials on their responsibilities for safeguarding SSNs
- Take disciplinary actions against employees for noncompliance with policies, such as placing employees on probation, terminating employment, or referring them to law enforcement

Continually monitor and evaluate information security practices
- Monitor employees' access to computerized records with SSNs, such as by tracking browsing and unusual transactions
- Have computer systems independently audited

Responses to our survey indicate that agencies that administer programs at all levels of government are taking some steps to safeguard SSNs; however, potential weaknesses exist at all levels. Many survey respondents reported adopting some of the practices; however, none of the eight practices was uniformly adopted at any level of government. Of the eight practices, the largest percentage of agencies at all three levels of government combined reported controlling access to computerized records that contain SSNs and taking disciplinary actions against employees for noncompliance with policies. The smallest percentage reported developing written policies for handling records with SSNs and having their information systems security independently audited. Overall, opportunities exist at all levels of government to increase protections against improper access, disclosure, or use of personal information, including SSNs.

In general, a higher percentage of federal agencies than state and county agencies reported using most of the eight practices. It is important to note that since 1996 we have consistently identified significant information security weaknesses across the federal government. In early 2002, based on a review of 24 of the largest federal agencies, we reported that federal agencies had not established information security programs consistent with legislative requirements. We found that significant information security weaknesses continued to exist in all major areas of information security programs.
For example, (1) risk assessments had not been conducted for all computer systems, (2) policies may have been inadequate or excessive because risks had not been adequately assessed, (3) employees may have been unaware of their security responsibilities because agencies provided little or no training, and (4) the effectiveness of security practices was unknown because of inadequate testing and evaluation of security controls. Further, in its February 2001 report to the Congress, OMB noted that many federal agencies have significant deficiencies in every important area of security. Although information security weaknesses may have been reported for certain states and counties, we are not aware of a comparable, comprehensive assessment of information security for either state or county governments.

Further, when SSNs are passed from a government agency to another entity, agencies need to take additional steps to continue protecting this sensitive personal information, such as imposing restrictions on the receiving entities to help ensure that the SSNs are safeguarded. OMB guidance specifies a number of requirements federal agencies must follow for certain sharing of personal information. For example, the guidance specifies that federal agencies should prohibit recipient agencies from redisclosing data, except as allowed by law; employ effective security controls; and include mechanisms to hold recipients of data accountable for compliance. The guidance does not prescribe specific steps agencies should take when sharing information containing SSNs and other personal information. Moreover, although state and county governments may establish their own requirements, these would apply only to their respective jurisdictions.

In the absence of uniform prescribed steps agencies should take when sharing data, we surveyed agencies on whether they implemented selected requirements when sharing information containing SSNs with outside entities. As shown in table 5, agency responses indicate that, although most include security requirements in contracts or data-sharing agreements, many did not have a process in place to ensure compliance. Most agencies reported requiring those receiving personal data to restrict access to and disclosure of records containing SSNs to authorized persons and to keep records in secured locations. However, fewer agencies reported having provisions in place to oversee or enforce compliance. For example, only about half of the agencies at all levels of government combined reported using audits to monitor recipients' compliance with requirements. As a result, there is little assurance that entities receiving SSNs from government agencies have upheld their obligation to protect the confidentiality and security of the numbers.

Efforts are underway at the federal level to more closely review individual federal agencies' security practices. At the direction of the President's Council on Integrity and Efficiency, officials from 15 federal agencies' offices of inspector general are reviewing their respective agencies' practices in using and safeguarding SSNs. At the state and county levels, opportunities exist for associations that represent these jurisdictions nationwide to conduct educational programs highlighting the importance of safeguarding SSNs, encourage agencies to strengthen how they safeguard SSNs, and develop recommended policies and practices for doing so.
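One safeguard that recurs in the display-reduction efforts described below is truncating or masking the SSN so that a document shows only part of the number. The sketch below illustrates one common convention, masking all but the last four digits; the exact format is our assumption, since the report does not specify how any particular agency truncates the number.

```python
import re

def mask_ssn(text: str) -> str:
    """Mask all but the last four digits of any SSN-formatted
    number (ddd-dd-dddd) appearing in the text."""
    return re.sub(r"\b\d{3}-\d{2}-(\d{4})\b", r"XXX-XX-\1", text)

print(mask_ssn("Beneficiary SSN: 123-45-6789"))
# Output: Beneficiary SSN: XXX-XX-6789
```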
We identified a number of instances where the Congress or governmental entities have taken or are considering action to reduce the presence of SSNs on documents that may be viewed by others who may not have a need to view this personal information. Examples of recent efforts to reduce display follow. Treasury relocated the placement of SSNs on Treasury checks to a location that cannot be viewed through the envelope window. The Defense Commissary Agency stopped requiring SSNs on checks written by members because of concerns about improper use of the SSNs and identity theft. SSA has truncated individuals' SSNs that appear on the approximately 120 million benefits statements it mails each year. At the top of this statement, SSA has included a notice warning individuals to protect their SSNs. A state comptroller's office changed its procedures so that it now offers vendors the option of not displaying SSNs on their business permits. One state has a statute that prohibits display of SSNs on licenses issued by the state's health department. Some states have passed laws prohibiting the use of SSNs as a student identification number. Almost all states have modified their policies on placing SSNs on state drivers' licenses. Although it was common practice to find SSNs on licenses only a few years ago, today only ten states routinely display SSNs as a recognizable nine-digit number. It is important to note that these steps to limit the display of SSNs do not mean the agency has stopped collecting SSNs. In fact, in some cases, the agency may be required by law to collect the SSN, but the number need not always be placed on a document or record that is seen by the public. Agencies are taking these actions even though it is not clear that the SSN displays we identified are, in fact, prohibited. Limitations on disclosing the SSN vary from use to use and among governmental entities. For example, on the federal level, the Privacy Act permits the disclosure of information in a record covered by the act if the agency can show that the use is compatible with the purpose for which it was collected. At the state level, depending on the state and applicable state laws, information about public employees may be considered public information and available upon request. Nonetheless, the efforts to reduce display suggest a growing awareness that SSNs are private information and that the risk to the individual of placing an SSN on a document that others can see may be greater than the benefit to the agency of using the SSN in this manner. However, despite this growing awareness and the actions cited above, many government agencies continue to display SSNs on a variety of documents that can be seen by others. In addition to the above actions taken by agencies at different levels of government, several bills have been introduced in the Congress that propose to more broadly limit or restrict the display of SSNs by all government entities. For example, some specifically prohibit SSN display on benefit checks or employee identity badges. Many of the respondents to our survey reported maintaining public records that contain SSNs. Many of these records are maintained by county clerks or recorders and certain state agencies. In addition, courts at all three levels of government maintain records that contain SSNs and are available to the public.
Some of the documents in these records that contain SSNs are created by the governmental entity itself, while others are submitted by members of the public, attorneys, or financial institutions. The public has traditionally gained access to these public records by visiting the offices where they are maintained and requesting certain documents or by browsing among hard copies or microfilm to find the desired information. This has served, at least in part, as a practical deterrent to the widespread collection and use of others' SSNs from public records. However, the growth of electronic record keeping has enabled a few agencies to provide or even sell their data in bulk. Moreover, although few entities report making SSNs available on the Internet, several officials told us they are considering expanding the volume and type of public records available on their Web sites. As shown in table 6, all of the federal courts and over two-thirds of the state and county courts, county recorders, and state licensing agencies that reported maintaining public records indicated that these records contained SSNs. In addition, some program agencies also reported maintaining public records that contain SSNs. (For more information on the types of federal programs and state and county agencies that reported maintaining public records, see app. III.) County clerks or recorders (hereinafter referred to as recorders) and certain state agencies often maintain records that contain SSNs because these offices have traditionally been the repository for key information that, among other things, chronicles various life events and other activities of individuals as they interact with government. For example, they often maintain records on an individual's birth, marriage, and death. They maintain documentation that an individual has been licensed to work in certain professions, such as medicine, law, and public accounting. In addition, they may maintain documentation on certain transactions, such as property ownership and title transfer. According to recorders we met with, this documentation makes ownership known and allows prospective buyers to detect any liens on a parcel of land before making a purchase. SSNs appear in these public records for a number of reasons. They may already be a part of a document that is submitted to a recorder for official preservation. For example, military veterans are encouraged to file their discharge papers with their local recorder's office to establish a readily available record of their military service, and these documents contain the SSN because that number is the individual's military identification number. Also, documents that record financial transactions, such as tax liens and property settlements, contain SSNs to help identify the correct individual. In other cases, government officials are required by law to collect SSNs. For example, to aid in locating noncustodial parents who are delinquent in their child support payments, the federal Personal Responsibility and Work Opportunity Reconciliation Act of 1996 requires that states have laws in effect to collect SSNs on applications for marriage, professional, and occupational licenses. Moreover, some state laws allow government entities to collect SSNs on voter registries to help avoid duplicate registrations. Again, although the law requires public entities to collect the SSN as part of these activities, this does not necessarily mean that the SSNs must always be placed on the document that becomes part of the public record.
Figure 3 shows the percentage of state and county entities that display SSNs on each of the types of public records listed. Courts at all three levels of government also collect and maintain records that are routinely made available to the public. Court records overall are presumed to be public; however, each court may have its own rules or practices governing the release of information. The rationale for making these records public is that keeping court activities open helps ensure that justice is administered fairly. In addition, the legal requirement that bankruptcy court documents remain open for public inspection is to ensure that bankruptcy proceedings take place in a public forum to best serve the rights of both creditors and debtors. As with recorders, SSNs appear in court documents for a variety of reasons. In many cases, SSNs are already a part of documents that are submitted by attorneys or individuals. These documents could be submitted as part of the evidence for a proceeding or could be included as part of a petition for an action, such as a judgment or a divorce. In other cases, courts include SSNs on documents they and other government officials create, such as criminal summonses, arrest warrants, and judgments, to increase the likelihood that the correct individual is affected (i.e., to avoid arresting the wrong John Smith). In some cases, federal law requires that SSNs be placed in certain records that courts maintain. For example, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 requires that SSNs be placed in records that pertain to child support orders, divorce decrees, and paternity determinations. Again, this assists child support enforcement agencies in efforts to help parents collect money that is owed to them. These documents may also be maintained at county clerks' or recorders' offices. Figure 4 shows the percentage of state and county entities that display SSNs on each of the types of public records listed. When federal, state, or county entities, including courts, maintain public records, they are generally prohibited from altering the formal documents. Officials told us that their primary responsibility is to preserve the integrity of the record rather than to protect the privacy of the individual named in the record. Officials told us they believe they have no choice but to accept the documents with the SSNs and fulfill the responsibility of their office by making them available to the general public. Traditionally, the public has been able to gain access to SSNs contained in public records by visiting the recorder's office, state office, or courthouse; however, the requirement to visit a physical location and request or search for information on a case-by-case basis offers some measure of protection against the widespread collection and use of others' SSNs from public records. Depending on the local practice, a member of the public may request specific documents from a clerk or may be able to browse through thousands of hard copies of documents, often dating back many decades or more. In addition, some counties make available documents that have been microfilmed or microfiched. Under these circumstances, it may be somewhat easier to find information on individuals; however, the information available would be limited to the type of record that is microfilmed (e.g., property settlement documents).
In other words, the effort involved in obtaining documents by visiting local offices in effect helps insulate individuals from possible harm that could result from SSN misuse because of the time and effort required. A county recorder told us that the individuals willing to expend the time and effort to visit local offices to review public records generally have a business need to do so. However, this limited access to information in public records is not always the case. We found examples where members of the public can obtain easy access to larger volumes of documents containing SSNs. Some offices that maintain public records have computer terminals set up where individuals can look up electronic files from a site-specific database. In one of the offices we visited, documents containing SSNs that are otherwise accessible to the public are also made available in bulk to certain groups. In one county we visited, title companies have an arrangement to scan court documents to add to their own databases before the documents are filed in the county recorder's office. When the sharing practices of courts, state licensing agencies, and county recorders are compared with those of program agencies that collect and use SSNs, a higher percentage of county recorders reported sharing information containing SSNs with credit bureaus, researchers, debt collection agencies, private investigators, and marketing companies. When courts, state licensing agencies, or county recorders share public records containing SSNs, they do not restrict receivers' use or disclosure of the data. Government offices may charge fees when providing copies of records in various formats that may contain SSNs and other personal information. More than 20 percent of county agencies and 25 percent of state agencies reported charging fees when providing SSNs to a contractor, researcher, individual, or other entity during the last 12 months. In most cases, the fees only covered the costs of providing the information. However, 13 percent of the state respondents and 44 percent of the county respondents that charged fees reported making a profit from charging a fee. At the state level, the smallest profit reported from the sale of records over the last 12 months was $5,000, and the largest was $2,068,400. On the county level, the smallest profit reported over the same period was $200, and the largest was more than $2 million. The range in revenue may be partially explained by the fact that officials from these agencies may sell these records to individuals requesting one or a small number of documents, or they may sell these records in bulk. For example, one state sells its unclaimed property database, which often contains SSNs. Finally, few agencies reported that they place SSNs on their Internet sites; however, this practice may be growing. Of those agencies that reported having public records containing SSNs, only 3 percent of the state respondents and 9 percent of the county respondents reported that the public can access these documents on their Web sites. In some cases, such as the federal courts, documents containing SSNs are available on the Internet only to paid subscribers. In other cases, large numbers of SSNs may be available to the general public. For example, one state's Office of the Comptroller of Public Accounts displays the SSNs of business owners on its public Web site, embedded in vendor/taxpayer identification numbers. Moreover, increasing numbers of departments are moving toward placing more information on the Internet.
We spoke with several officials who described their goals for having records available electronically within the next few years. Providing such easy access to records could increase the opportunity to obtain records that contain SSNs that otherwise would not have been obtained through a visit to the government agency. When SSNs are found in public records, some government entities are trying to strike a new balance between their responsibility to allow the general public access to documents that have traditionally been made available for public review and an increased interest in protecting the privacy of individuals. This is possible primarily for those records the agency or court creates. In these cases, the government entity may still collect SSNs, which may be required by law or important for record-keeping purposes, but the number itself need not be displayed. For those records and documents submitted by others, it is more difficult to exclude the SSN unless the individual or business preparing the document omits it before submission. When government agencies create public documents or records, such as marriage licenses, some are trying innovative approaches that protect SSNs from public display. Some agencies have developed alternative types of forms to keep SSNs and other personal information separate from the portion of a document that is accessible to the general public. In these cases, even if the government agency is required by law to record the SSN, the number does not always need to be displayed on the copy of the document that is made available to the public. Changing how the information is captured on the form can help solve the dilemma of many county recorders who, because they are the official record keepers of the county, are usually not allowed to alter an original document after it is officially filed in their office. For example, a county recorder told us that Virginia recently changed its three-part marriage application and license form. Currently, only one copy of the form is routinely made available to the general public; that copy does not contain the SSN, while the other two copies do. However, a county recorder told us that even this seemingly simple change in the format of a document can be challenging because, in some cases, the forms used for certain transactions are prescribed by the state. In addition to these efforts at recorders' offices, courts at all three levels of government have made efforts to protect SSNs in documents that the general public can access through court clerk offices. For example, one state court offers the option of filing a separate form containing the SSN that is then kept separate from the part of the record that is available for public inspection. These solutions, however, are most effective when the recorder's office, state agencies, and courts prepare the documents themselves. In the many instances where others file the documents, such as individuals, attorneys, or financial institutions, the receiving agency has less control over what is contained in the document and, in many cases, must accept it as submitted. Officials told us that, in these cases, educating the individuals who submit the documents for the record may be the most effective way to reduce the appearance of SSNs. Such educational efforts could begin with informing individuals who submit documents to these offices that, once submitted, anything in that document is open to the public for review.
For example, one individual who submitted his military discharge papers to his county recorder's office expressed concern about having done so after he found out that his document was available for anyone to review. Several officials suggested placing signs in offices where public records are maintained. Others suggested finding additional ways to notify the public of the nature of public records and the consequences of submitting documents with SSNs on them. In addition, financial institutions, title companies, and attorneys submit a large portion of the documents that become part of the public record in recorders' offices and the courts. These entities could begin to consider whether SSNs are required on the documents they submit. It may be possible to limit the display of SSNs on some of these documents or, where SSNs are deemed necessary to help identify the subject of the documents, it may be possible to truncate the SSN to the last four digits. While the above options are available for public records created after an office institutes changes, fewer options exist to limit the availability of SSNs in records that have already been officially filed or created. One option is redacting or removing SSNs from documents before they are made available to the general public. In our fieldwork, we found instances where departments redact SSNs from copies of documents that are made available to the general public, but these tended to be situations where the volume of records and number of requests were minimal, such as in a small county. Most other officials told us redaction was not a practical alternative for the public records their offices maintain. Although redaction would reduce the likelihood of SSNs being released to the general public, we were told it is time-consuming, labor-intensive, and difficult and in some cases would require a change in law. In documents filed by others outside of the office, SSNs do not appear in a uniform place and could appear many times throughout a document. In these cases, finding and redacting SSNs is a particularly labor-intensive and lengthy process. In addition, especially in large offices that receive hundreds of requests for general public documents per day, we were told that redacting SSNs from each document before giving it to a member of the general public would require significant staff resources. In one large urban county, the district clerk's office sells about 930,000 certified pages a year from family law cases. The district clerk estimates that it would cost his office an additional $1 million per year in staff time and related expenses to redact SSNs from all of those documents before they are made available to the general public. Moreover, redaction would be less effective in those offices where members of the general public can inspect and copy large numbers of documents without supervision from office staff. In these situations, officials told us that they could change their procedures for documents that they collect in the future, but it would be extremely difficult and expensive to redact SSNs on documents that have already been collected and filed. In several of the offices we visited, documents are available in hard copy, on microfilm, on microfiche, or in electronic format. Copies of thousands of documents, often dating back many decades or more, are kept in large rooms where anyone can browse through them. In addition, some counties have computer terminals set up where individuals can look up electronic files on their own.
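The district clerk's figures imply a unit cost of roughly $1.08 per page ($1,000,000 divided by 930,000 certified pages). For records that exist as electronic text, automated pattern matching offers one way to locate candidate SSNs even when they appear in varying places throughout a document. The following is a minimal sketch, not any office's actual system; it assumes SSNs appear as nine digits with optional hyphens or spaces, will miss unusual formats, and can flag other nine-digit numbers, so human review would still be needed.

```python
import re

# Matches common SSN formats: 123-45-6789, 123 45 6789, or 123456789.
# Illustrative only; real documents vary.
SSN_PATTERN = re.compile(r"\b(\d{3})[- ]?(\d{2})[- ]?(\d{4})\b")

def truncate_ssns(text: str) -> str:
    """Replace each candidate SSN with a form showing only the last four
    digits, the truncation approach discussed in this report."""
    return SSN_PATTERN.sub(lambda match: "XXX-XX-" + match.group(3), text)

sample = "Judgment entered against John Smith, SSN 123-45-6789."
print(truncate_ssns(sample))
# Judgment entered against John Smith, SSN XXX-XX-6789.
```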
For records that have already been filed and are open to such browsing, the only way to prevent disclosure of SSNs would be to redact them from all of the past records, which officials told us would be extraordinarily costly and in some cases (e.g., on microfiche and electronically scanned documents) would be extremely difficult. Some of the bills currently before the Congress call for redacting SSNs from public records or otherwise ensuring that the public does not have access to the numbers. In some cases, the proposals would apply to all SSN displays originally occurring after 3 years from the date of enactment. In other cases, the proposal calls for redacting all SSNs that are routinely placed in a consistent and predictable manner on a public record by the government entity, but it would not require redacting SSNs that are found in varying places throughout the record. To protect SSNs that the general public can access on the Internet, some courts and government agencies are examining their policies to decide whether SSNs should be made available on documents on their Web sites. In our fieldwork, we heard many discussions of this issue, which is particularly problematic for courts and recorders, who have a responsibility to make large volumes of documents accessible to the general public. On the one hand, officials told us that placing their records on the Internet would simply facilitate the general public's ability to access the information. On the other hand, officials expressed concern that placing documents on the Internet would remove the natural deterrent of having to travel to the courthouse or recorder's office to obtain personal information on individuals. Again, we found examples where government entities are searching for ways to strike a balance. For example, the Judicial Conference of the United States recently released a statement on electronic case file availability and Internet use in federal courts. The conference recommended that documents in civil cases and bankruptcy cases be made available electronically but that SSNs contained in the documents be truncated to the last four digits. Also, we spoke to one county recorder's office that had recently put many of its documents on its Web site but had decided not to include categories of documents that were known to contain SSNs. In addition, some states are taking action to limit the display of SSNs on the Internet. Laws in Arizona and Rhode Island prohibit the display of students' SSNs on the Internet. Even though the incidence of SSNs on government Web sites is currently minimal, some officials told us they were considering or were in the process of making more documents available on the Internet. Without some forethought about the inherent risk posed by making SSNs and other personal information available on the Internet, it is possible that SSNs will become increasingly available to the general public via the Internet. The examples of efforts to limit the disclosure of SSNs cited above stem from initiatives taken by certain offices within states or from state laws that restrict specific types of SSN uses. By their nature, these efforts are limited only to the specific offices or types of use. However, efforts to protect individuals' privacy can be more far-reaching when the initiatives are statewide.
For example, in April 2000, the governor of Washington signed an executive order intended to strengthen privacy protections for personal information held by state agencies on citizens, as well as to ensure that state agencies comply fully with state public disclosure and open government laws. Under Washington's executive order, state agencies are required to protect personal information to the maximum extent possible by (1) minimizing the collection, retention, and release of personal information by the state, (2) prohibiting the unauthorized sale of citizens' personal information by state government, and (3) making certain that businesses that contract with the state use personal information only for the contract purposes and cannot keep or sell the information for other purposes. A number of actions to limit SSN use and display resulted from this order. In response to the executive order, state agencies across Washington reviewed their forms and documents on which SSNs appeared and identified displays that were unnecessary, that is, displays where the appearance of the SSN on the document was not vital to the business of the agency. In these cases, agency officials removed the SSNs from the forms or documents. For example, the state Department of Natural Resources removed SSNs from employee performance evaluation notices and worklists, individual employee training profiles, and employee exit questionnaire forms. Officials told us that they have also discontinued requiring SSNs on leave requests, travel reimbursements, and training forms. The Washington Office of the Attorney General deleted SSNs from training and attendance forms, personnel questionnaires, employee separation forms, flexiplace work schedule forms, and others. In addition, the Washington Department of Labor and Industries separated information in personnel files that may be reviewed by supervisors from payroll documents. Private information, such as SSNs, is also being redacted from employee documents that can be viewed by others, and applicants for jobs in a county we visited are not required to provide their SSNs until they are offered a job. Washington agencies also changed the format of certain public records to limit the disclosure of SSNs. For example, the SSN and other personal information are included only on the back of the marriage certificate form, which is not supposed to be copied or given to the general public. In certain Washington courts, SSNs and other personal information required in family law cases must be written on a separate form from the rest of the court document, and this form is then kept in a restricted access file. This means that the public does not have access to the information, and internal access is limited to judges, commissioners, other court personnel, and certain state administrative agencies that administer family law programs. Anyone else requesting access to these case records must petition the court and make a showing of good cause as to why access should be granted. Washington state agencies also reviewed and certified all contracts involving data sharing as having appropriate requirements to prevent and detect contractors' unauthorized SSN use. In fact, we were told of one case where the Washington state Department of Licensing monitored a contractor's compliance with maintaining the privacy of personal information by, in part, providing the contractor with certain easily identifiable information that other entities did not have.
By tracing the flow of this information, officials discovered that the contractor had improperly disclosed personal information, and the department terminated the contract. Minnesota is another example of a state where action at the state level, in this case in the form of a law, has made a difference in how SSNs are treated in public records. The Minnesota Government Data Practices Act, which predates the federal Privacy Act, regulates the handling of all government data that are created, collected, received, or released by a state entity, political subdivision, or statewide system, no matter what form the data are in or how they are stored or used. Referred to as the nation's first privacy act, Minnesota's statute regulates what information can be collected and who can see or have copies of the information and establishes civil penalties for violations of the act. Minnesota uses a detailed approach to classifying data as not public. One statutory provision specifically classifies SSNs collected by state and local government agencies as not public. As a result of this law, individuals must be informed either orally or in writing of their privacy rights whenever the state collects sensitive information about them. In addition, individuals filing a civil court document can either put their personal information on a separate form or submit two copies of the document, only one of which contains SSNs. The information containing SSNs is then filed separately from the rest of the court document and is not open to the general public. Neither state tracked the costs of making changes to better protect personal information, such as SSNs. Generally, state officials reported that the costs of implementing the initiative in Washington and carrying out the state statute in Minnesota are absorbed in the cost of the states' overall operations. SSNs are widely used at all levels of government and play a central role in how government entities conduct their business. As unique identifiers, SSNs help make record keeping more efficient and are most useful when government entities share information about individuals with others outside their organization. The various benefits from sharing data help ensure that government agencies fulfill their missions and meet their obligation to the taxpayer by, for example, making sure that programs serve only those eligible for services. However, as governments enjoy the benefits of using SSNs, they are not consistently safeguarding this personal information. They are not consistently providing individuals with required information about how their numbers will be used, thus depriving SSN holders of the basis to make a fully informed decision about whether to provide their SSNs. Nor do governments have in place uniform information systems security measures. This suggests that these numbers and other sensitive information are at risk for improper disclosure and that more can be done to implement practices to help protect them. Further, when government agencies display the SSN on documents that are viewed by others who may not have a need for this personal information, such as employee identification badges and benefit eligibility cards, they increase the risk that the number may be improperly obtained and misused. In some cases, the risk of misuse may outweigh any benefit of display.
Safeguarding SSNs in public records offers an even greater challenge because of the inherent tension between the nature of public records, that is, the need for transparency in government activities, and the need to protect individuals' privacy. Plans to bring public records on-line and make them available over the Internet add urgency to this issue. Although on-line access to such records will greatly increase convenience for those members of the public who use them, personal information like SSNs that is contained in some of these records will also be made readily available to the public. Addressing the issue of whether the traditional rules of public access should apply to electronic records, particularly those found on the Internet, is both urgent and vital. Without policies specifying ways to safeguard SSNs on the Internet, the potential for compromising individuals' privacy and the potential for SSN misuse will increase significantly. Further, although improving safeguards for government use of SSNs and other personal information is important, even the most successful efforts by government agencies cannot eliminate the risk to individuals that their SSNs will be misused, because SSNs are so widely used in the private sector as well. Any effort to significantly reduce the risk of improper disclosure and misuse of SSNs would require added safeguards and limits on private sector use and display of the SSN as well. Nonetheless, measures to protect privacy by public sector entities could at least help minimize the risk of misuse. Under current law, weaknesses in the safeguards applied to SSNs can be more readily addressed in the federal government than in state and local governments. Federal laws lay out a framework for information systems security programs to help protect sensitive information overall. More specific to the SSN, the Privacy Act places broad restrictions on federal government use and disclosure of personal information such as the SSN. Improved federal implementation of these requirements can be accomplished within current law. At the state and local level, the Privacy Act does have a provision that applies to state and local governments, albeit one more limited than the requirements placed on the federal government. This requirement—that all levels of government provide certain information to SSN holders, such as how their SSNs will be used—is not consistently applied. Strengthening enforcement of this provision of the act, while important, will not address the more basic protection issues related to information security and public display. Addressing those issues by mandating stronger state and local government safeguards for such personal information as the SSN, however, confronts questions of jurisdiction and policy that are beyond the scope of this report. Nonetheless, such questions should be addressed quickly, before public sector information is compromised and before public records become fully electronic. Accordingly, we are making recommendations to OMB to help strengthen safeguards in federal agencies, and we are presenting a matter for congressional consideration to facilitate intergovernmental collaboration in strengthening safeguards at the state and local levels. The Privacy Act and other federal laws prescribe actions federal departments and agencies must take to ensure the security of SSNs and other personal information.
Because these requirements may not be uniformly observed, we recommend that the administrator, Office of Information and Regulatory Affairs, OMB, direct federal agencies to review their practices for securing SSNs and providing required information. As part of this effort, agencies should also review their practices for displaying SSNs. To better inform state and local governments of their responsibilities under section 7 of the Privacy Act, we recommend that the administrator, Office of Information and Regulatory Affairs, OMB, direct his staff to augment the Privacy Act guidance by specifically noting that section 7 applies to all federal, state, and local government agencies that request SSNs, or to take other appropriate steps. To address SSN security and display issues in state and local government and in public records, including those maintained by the judicial branch of government at all levels, the Congress may wish to convene, in consultation with the president, a representative group of federal, state, and local officials, including, for example, state attorneys general, county recorders, state and local chief information officers, selected members of the Congress, and state or local elected officials, to develop a unified approach to safeguarding SSNs used at all levels of government and particularly those displayed in public records. This approach could include recommendations for congressional consideration. GAO could assist in identifying representative participants and in convening the group. We requested comments on a draft of this report from the director of OMB and the commissioner of SSA or their designees. We also requested that other officials review the technical accuracy of their respective agency or entity activities discussed in the draft, and we incorporated their changes where appropriate. SSA officials informed us that they would not provide written comments on the draft because the report does not make recommendations to the agency and comments were not required. However, we were told that the deputy commissioner shares the concerns expressed in the report and agrees with the conclusions. We did not receive written comments from the OMB director; however, other OMB officials provided us oral comments on the draft. They generally agreed with our recommendation that OMB direct federal agencies to review their practices for securing SSNs and providing the required information. With regard to our recommendation that OMB augment Privacy Act guidance or take other appropriate steps to better inform state and local governments of their responsibilities under section 7 of the act, OMB officials told us that they are unsure of the need for additional OMB guidance in this area. They indicated that guidance on section 7 already exists in a publicly available format on the Justice Department's Web site. In addition, they believe the section 7 provision is quite short and appears to be fairly self-explanatory. As the guidance on the Justice Web site indicates, some interpretive issues have arisen in litigation; however, OMB officials said the Justice guidance readily explains those issues. In addition, they said, the report does not indicate substantive areas where additional interpretive guidance is needed. However, they noted that the report does suggest that state and local officials may not be aware of section 7 provisions. In that case, they said, increasing awareness of these legal requirements may warrant further consideration.
Accordingly, OMB plans to consider, in consultation with other federal agencies, options for increasing state and local officials' awareness of this subject. Although OMB correctly points out that the overview of the Privacy Act on the Department of Justice Web site refers to the requirements of section 7, we believe our finding that a significant percentage of state and local agencies reported they do not routinely provide individuals with the information required under section 7 supports the need for additional action. We agree that state and local officials may not be aware of section 7 requirements, and we believe there is a need to increase the awareness both of state and local officials administering the programs and of those monitoring compliance at the state and local levels. Because OMB is the federal agency responsible for assisting with and overseeing the implementation of the Privacy Act, we believe it should take the lead on increasing state and local awareness of section 7. However, we recognize that OMB's role with respect to state and local governments is limited, and we support the agency's idea to act in consultation with other federal agencies to take other steps it deems appropriate to accomplish this increased awareness. We are sending copies of this report to the Honorable Jo Anne B. Barnhart, commissioner of SSA; Mr. Mitchell E. Daniels Jr., director of OMB; and others who are interested. Copies will also be made available to others upon request. If you or your staff have any questions concerning this report, please call me at (202) 512-7215. The major contributors to this report are listed in appendix IV. To complete the objectives for this assignment, we used a combination of in-depth interviews, site visits, and mail surveys. To gain a preliminary understanding of how governments use and protect SSNs and to help design our survey and site-visit questions, we met with a number of government agencies, associations, and privacy experts. At the federal level, we interviewed officials from OMB, the Office of Personnel Management, SSA, and the FTC. At the state level, we interviewed officials from the National Governors Association; the National Association of State Auditors, Comptrollers, and Treasurers; the American Association of Motor Vehicle Administrators; the National Conference of State Legislatures; the National Association of State Chief Information Officers; and the state of Maryland. At the county level, we interviewed officials from the National Association of County Election Officials, Clerks, and Recorders; the National Association of Counties; and Fairfax and Fauquier Counties, Virginia. We also met with or contacted officials and organizations regarded as experts in the privacy area, including a privacy consultant and an official from the Privacy Journal. In addition, we reviewed published reports and studies on SSN use and privacy issues. To gain an understanding of the requirements for both using and protecting SSNs, we reviewed pertinent federal legislation, federal guidance and directives regarding the use and handling of SSNs and other personal information, GAO reports, and various studies of state SSN use and privacy laws.
To develop our criteria for assessing the actions government agencies take to protect SSNs, we drew from applicable federal laws, primarily the Government Information Security Reform provisions of the Fiscal Year 2001 Defense Authorization Act, OMB Circular A-130 and other guidance, and the Federal Information System Controls Audit Manual, which specifies guidelines for federal agencies to safeguard sensitive information stored in computer systems. We also drew from our work on best practices used by private companies and public sector organizations identified in our Executive Guide: Information Security Management, Learning From Leading Organizations. Finally, we held a 1-day seminar on innovative practices used by the private sector to protect sensitive information. Attendees included officials from the Private Sector Council and member firms, including Kaiser Permanente, a health care provider; State Street Bank, a large commercial bank; and Allstate, an insurance company. Our surveys, site visits, and in-depth interviews with officials of targeted federal, state, and county programs focused on the following areas: how SSNs are used (for both programmatic and personnel-related purposes), how and why SSNs are shared with other entities (including contractors), what information programs provide individuals when agencies collect and use their SSNs, how agencies maintain and safeguard SSNs and other personal data, and the cost of minimizing use or implementing alternatives to using SSNs. At the federal level, we surveyed all 14 cabinet-level agencies plus the Environmental Protection Agency, the Small Business Administration, SSA, and the federal court system. The latter three agencies and the federal court system were added for breadth of coverage to ensure that we covered regulatory agencies, independent agencies, and courts. We asked that each agency identify the five programs that maintain documents containing the SSNs of the largest number of individuals and then asked representatives of those programs to complete a questionnaire. To the extent that an agency had a program whose primary purpose was to conduct research using records with individuals' SSNs, we asked that it be substituted for one of the five programs. Finally, we distributed a different survey to agency personnel offices to determine how agencies used and protected the SSNs of their employees. The federal agency and the federal personnel questionnaires were each pretested at least twice. Because we do not know how many programs within the federal agencies we surveyed maintain records containing individuals' SSNs, we cannot calculate a response rate for the federal agency questionnaire. In total, 58 federal programs, agencies, or courts returned a completed questionnaire. Of the 18 federal agencies to which we sent a questionnaire, 15 returned a completed questionnaire for at least one program. We now know that one of the 18 agencies that received a questionnaire did not have any programs that maintained records containing SSNs. In addition, 18 federal personnel offices received our personnel questionnaire, and of those 15 returned completed questionnaires, for a response rate of 83 percent. At the state level, our work covered all 50 states and the District of Columbia. In each state, we distributed the surveys to seven preselected programs or functions that were identified by others as likely to be ones that maintained documents containing the SSNs of the largest number of individuals.
These included the departments of (1) human services, (2) health services and vital statistics, (3) education, (4) labor and licensing, (5) judiciary, (6) public safety and corrections, and (7) law enforcement. Finally, we also surveyed each state's personnel office. The state department and personnel questionnaires were each pretested twice. In total, 424 state programs or functions were mailed a questionnaire, and of those 307 returned completed questionnaires, for a response rate of 72 percent. In addition, of the 51 state personnel offices that were mailed our state personnel questionnaire, 42 completed and returned it, for a response rate of 82 percent. At the local level, we selected the 90 counties with the largest populations in the nation as our focus. Our goal was to choose areas with large numbers of persons who would be affected by the way local government agencies handled SSNs. We again focused on those preselected programs or functions that county officials reported as ones that maintained documents containing the SSNs of the largest number of individuals. These are, in general, the same programs or functions that we focused on in the states; we also surveyed the county clerk or recorder, which was identified as an office that maintained a large number of records containing individuals' SSNs. Finally, we surveyed each county's personnel office. The county department and personnel questionnaires were each pretested twice. In total, 488 county programs or functions were mailed a questionnaire, and of those 344 returned completed questionnaires, for a response rate of 70 percent. In addition, 90 county personnel offices were mailed our county personnel questionnaire, and of those 64 completed and returned it, for a response rate of 71 percent. In-depth interviews and site visits to federal agencies, states, and counties were used to supplement the survey data by providing more detailed information on the uses of SSNs, reasons for their use, and challenges encountered in protecting them. Interviews and site visits for federal programs were selected based on breadth of coverage, novel or innovative steps to protect SSNs, and special interest by the requestors. We conducted in-depth interviews with officials from the (1) Federal Court System - Administrative Office of the U.S. Courts; (2) Centers for Medicare and Medicaid Services; (3) Department of Education's Student Financial Assistance; (4) Department of Housing and Urban Development's Low Income Housing Programs; (5) DOD Commissaries; and (6) the U.S. Marshals Service. At the state level, we conducted site visits to the states of Texas, Washington, and Minnesota. We selected these states because their legal frameworks and practices regarding the openness of government records and the privacy of individuals varied. Texas has a strong open records tradition; Washington state has an executive order in place that serves to limit the availability of certain personal information; and Minnesota has a privacy law that also serves to limit the availability of certain types of information. At the county level, we conducted site visits to Harris County, Texas; King County, Washington; and Aitkin County, Minnesota. We visited counties located in states we selected for site visits to help us understand how state policy affects local practices. Also, we selected Aitkin County, Minnesota, to gain the perspective of a smaller rural county.
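As a quick arithmetic check of the response rates reported above, the following minimal sketch recomputes them from the counts in the text, rounding as the report does:

```python
# Survey counts from the text: (questionnaires returned, questionnaires mailed).
surveys = {
    "state programs or functions": (307, 424),   # reported as 72 percent
    "state personnel offices": (42, 51),         # reported as 82 percent
    "county programs or functions": (344, 488),  # reported as 70 percent
    "county personnel offices": (64, 90),        # reported as 71 percent
}
for name, (returned, mailed) in surveys.items():
    print(f"{name}: {returned}/{mailed} = {100 * returned / mailed:.0f} percent")
```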
During our site visits, we met with officials from the departments or agencies that were considered heavy users of SSNs. We also met on two occasions with a group of county clerks and recorders from urban and smaller rural counties. To provide information on the role of government use of SSNs in identity theft, we incorporated information provided by GAO's Tax Administration and Justice group, which was obtained as part of a broader effort to describe the prevalence and cost of identity theft. The information we used from that effort is based on interviews with and documentation provided by the FTC, SSA's Office of Inspector General, IRS, the Federal Bureau of Investigation, the U.S. Secret Service, and credit bureaus, among others. We performed our work at SSA headquarters in Baltimore, Maryland; at Maryland state offices in Annapolis, Maryland; in Washington, D.C.; and at selected locations including Austin, Texas; Harris County, Texas; Olympia, Washington; King County, Washington; St. Paul, Minnesota; and Aitkin County, Minnesota. We conducted our work between February 2001 and March 2002 in accordance with generally accepted government auditing standards. The following federal laws establish a framework for restricting SSN disclosure: The Freedom of Information Act (FOIA) (5 U.S.C. 552) – This act establishes a presumption that records in the possession of agencies and departments of the executive branch of the federal government are accessible to the people. FOIA, as amended, provides that the public has a right of access to federal agency records, except for those records that are protected from disclosure by nine stated exemptions. One of these exemptions allows the federal government to withhold information about individuals in personnel and medical files and similar files when the disclosure would constitute a clearly unwarranted invasion of personal privacy. According to Department of Justice guidance, agencies should withhold SSNs under this FOIA exemption. This statute does not apply to state and local governments. The Privacy Act of 1974 (5 U.S.C. 552a) – This act regulates federal government agencies' collection, maintenance, use, and disclosure of personal information maintained by agencies in a system of records. The act prohibits the disclosure of any record contained in a system of records unless the disclosure is made on the basis of a written request or prior written consent of the person to whom the record pertains, or is otherwise authorized by law. The act authorizes 12 exceptions under which an agency may disclose information in its records. However, these provisions do not apply to state and local governments, and state law varies widely regarding disclosure of personal information in state government agencies' control. There is one section of the Privacy Act, section 7, that does apply to state and local governments. Section 7 makes it unlawful for federal, state, and local agencies to deny an individual a right or benefit provided by law because of the individual's refusal to disclose his or her SSN. This provision does not apply (1) where federal law mandates disclosure of individuals' SSNs or (2) where a law existed prior to January 1, 1975, requiring disclosure of SSNs, for purposes of verifying the identity of individuals, to federal, state, or local agencies maintaining a system of records existing and operating before that date.
Section 7 also requires federal, state, and local agencies, when requesting SSNs, to inform the individual (1) whether disclosure is voluntary or mandatory, (2) by what legal authority the SSN is solicited, and (3) what uses will be made of the SSN. The act contains a number of additional provisions that restrict federal agencies' use of personal information. For example, an agency must maintain in its records only such information about an individual as is relevant and necessary to accomplish a purpose required by statute or executive order of the president, and the agency must collect information to the greatest extent practicable directly from the individual when the information may result in an adverse determination about an individual's rights, benefits, and privileges under federal programs. The Social Security Act Amendments of 1990 (42 U.S.C. 405(c)(2)(C)(viii)) – A provision of the Social Security Act bars disclosure by federal, state, and local governments of SSNs collected pursuant to laws enacted on or after October 1, 1990. This provision of the act also contains criminal penalties for "unauthorized willful disclosures" of SSNs; the Department of Justice would determine whether to prosecute a willful disclosure violation. Because the act specifically cites willful disclosures, careless behavior or inadequate safeguards may not be subject to criminal prosecution. Moreover, applicability of the provision is further limited in many instances because it only applies to disclosure of SSNs collected in accordance with laws enacted on or after October 1, 1990. For SSNs collected by government entities pursuant to laws enacted before October 1, 1990, this provision does not apply and, therefore, would not restrict disclosing the SSN. Finally, because the provision applies to disclosure of SSNs collected pursuant to laws requiring SSNs, it is not clear whether the provision also applies to disclosure of SSNs collected without a statutory requirement to do so. This provision applies to federal, state, and local governmental agencies; however, the applicability to courts is not clearly spelled out in the law. The following tables provide additional information on the types of departments or agencies that reported maintaining records that are routinely made available to the public and, of those, the ones that reported that their public records contained SSNs. The following team members contributed to all aspects of this report throughout the review: Lindsay Bach, Jeff Bernstein, Jacqueline Harpp, Daniel Hoy, Raun Lazier, James Rebbe, Vernette Shaw, and Anne Welch. In addition, Richard Burkard, Patrick Dibattista, Joel Grossman, Debra Johnson, Carol Langelier, Minette Richardson, Robert Rivas, Ron Salo, Rich Stana, and William Thompson also made contributions to this report.
The Office of Personnel Management (OPM) is tasked with providing human resources leadership and support to federal agencies as they manage their human capital functions. For OPM to effectively perform this role, executive branch agencies are required to report information on their civilian employees to OPM and ensure that workforce data meet certain standards developed by OPM. OPM has developed these data to carry out its strategic goal of serving as a thought leader in data-driven human resource management and policy decision-making. The Enterprise Human Resources Integration (EHRI) system is OPM's primary repository for human capital data to support these efforts. OPM developed EHRI to (1) provide for comprehensive knowledge management and workforce analysis, forecasting, and reporting to further strategic management of human capital across the executive branch; (2) facilitate the electronic exchange of standardized human resources data within and across agencies and systems and the associated benefits and cost savings; and (3) provide unification and consistency in human capital data across the executive branch. In addition, OPM's updated system and integrated data were expected to accrue savings to the federal government, reduce redundancy among agency systems, streamline the various processes involved in tracking and managing federal employment, and facilitate human capital management activities by providing storage, access, and exchange of standard electronic information through a data repository of standardized core human capital data for most executive branch employees. While the personnel database predated the EHRI Data Warehouse, the payroll database was newly developed for OPM's e-payroll initiative to consolidate agency payroll processes. The payroll database contains individual payroll records for approximately 2.0 million federal employees and is the primary governmentwide source for payroll information on federal employees. The records consist of data elements such as an EHRI ID for linking files, agency time charge categories, and pay rates. The consolidation of agency payroll processes—known as the e-payroll project—provided the opportunity for OPM to begin collecting standardized governmentwide payroll data. As part of the e-payroll initiative, OPM consolidated the operations of 22 federal payroll system providers for the 116 executive branch agencies into four primary providers: the General Services Administration's (GSA) National Payroll Center (NPC), the Department of Defense's Defense Finance and Accounting Service (DFAS), the Department of the Interior's Interior Business Center (IBC), and the Department of Agriculture's National Finance Center (NFC). Consolidation was undertaken to simplify and standardize federal payroll policies and procedures and better integrate payroll with other human capital and finance functions. Most federal agencies rely on one of the four payroll service centers (DFAS, IBC, NPC, or NFC) to process employee pay. Payroll service centers receive employees' biweekly time sheets, which come from a variety of time and attendance (TA) systems at the agencies they service. Generally, TA systems allow employees to specify time spent on different work and leave categories, such as the number of regular or overtime hours worked or the number of annual leave or sick leave hours taken in a given pay period (PP).
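Entries at this category level are what the service centers later collapse into OPM's core reporting categories, described below. The following is a minimal sketch of that collapsing step; the agency charge codes and the mapping here are hypothetical, not OPM's actual standards.

```python
# Illustrative collapsing of agency-specific time-and-attendance charge
# codes into core reporting categories. Codes and mapping are hypothetical.

CORE_CATEGORY = {
    "AL-WEATHER": "administrative leave",  # excused absence, inclement weather
    "AL-BLOOD": "administrative leave",    # excused absence, blood donation
    "ANNUAL": "annual leave",
    "SICK": "sick leave",
    "REG": "regular hours",
}

def collapse(timesheet_lines):
    """timesheet_lines: iterable of (agency_code, hours) tuples.
    Returns total hours per core category for one pay period."""
    totals = {}
    for code, hours in timesheet_lines:
        category = CORE_CATEGORY.get(code, "other")
        totals[category] = totals.get(category, 0.0) + hours
    return totals

pay_period = [("REG", 72.0), ("AL-WEATHER", 4.0), ("SICK", 4.0)]
print(collapse(pay_period))
# {'regular hours': 72.0, 'administrative leave': 4.0, 'sick leave': 4.0}
```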
However, the level of detail regarding the exact nature of the work or leave time in these records varies depending on agency policies and systems for recording employee work time. While the service centers have consolidated payroll reporting, there are still variations among the centers and the TA systems agencies use to submit employee time sheets. Some systems are maintained by the service center, and employees from various agencies access those systems to record their hours. For example, GSA has only one TA system that agencies use to record work and leave hours. Other systems are maintained by the agency and may reflect specific TA accounting needs of the agency. For example, NFC processes time sheets from several different TA systems, some of which are agency specific, and DFAS processes payroll for the Department of Defense, Veterans Affairs, and others through systems including the Business Management Redesign (e-Biz) TA system and the Automated Time, Attendance, and Production System (ATAAPS) (see figure 1). In light of the significant variation in TA systems and service centers involved in processing TA information into payroll records, OPM established core standards for consistency in reporting and recording certain types of work and leave hours. These standards apply at the agency level as well as the service center level. Some of these standards are based on official leave authorized in statute. For example, federal employees are authorized to be absent from duty without a loss in pay or charge to leave for legal holidays and for activities such as jury duty, attendance at a military funeral, bone-marrow or organ donation, and certain union activities. Other standards are based on OPM guidance for excused absences due to inclement weather or blood donation, among others, charged as administrative leave. Agencies follow common recording practices for annual leave and sick leave. These core standards at the agency level are an important part of the process for reporting to OPM because they allow the service centers to collapse certain fields in a consistent way. While agencies may have specific time codes and timekeeping practices to meet their needs, core standards for service center reporting dictate how these codes should be collapsed for reporting to OPM, as illustrated in the sketch below. For example, agencies may have detailed categories for various types of administrative leave, but segments of those charge codes apply generally to the category of administrative leave, enabling service centers to aggregate these data from TA systems. OPM relies on agencies and service centers to ensure that the data they submit are timely, accurate, complete, and compiled in accordance with OPM standards. However, federal internal control standards specify that even when external parties, such as the service centers in this case, perform operational processes for an agency, management—in this case, OPM—retains responsibility for the performance of responsibilities assigned to those organizations. Consequently, OPM management is responsible for understanding the controls each service center has designed, implemented, and operates for payroll processing and how the service centers' internal control systems affect OPM's internal controls for the payroll data. The standards for internal control, which apply to all executive branch functions, underlie the requirements for data standards and data quality efforts.
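The collapsing of detailed agency charge codes into standard reporting categories can be sketched as follows. The charge codes and category names are hypothetical, since the actual charge code segments are not enumerated here.

```python
# Hypothetical agency-specific time charge codes mapped to standard
# reporting categories (both sides of the mapping are illustrative).
CODE_TO_CATEGORY = {
    "LV-ADM-WX": "administrative_leave",  # weather-related excused absence
    "LV-ADM-BD": "administrative_leave",  # blood donation
    "LV-ANN":    "annual_leave",
    "LV-SCK":    "sick_leave",
    "LV-JURY":   "court_leave",
}

def collapse_hours(time_entries):
    """Aggregate detailed agency charge codes into standard categories."""
    totals = {}
    for code, hours in time_entries:
        category = CODE_TO_CATEGORY.get(code, "other")
        totals[category] = totals.get(category, 0.0) + hours
    return totals

# An employee's detailed time sheet entries for one pay period.
entries = [("LV-ADM-WX", 4.0), ("LV-ADM-BD", 2.0), ("LV-ANN", 8.0)]
print(collapse_hours(entries))
# {'administrative_leave': 6.0, 'annual_leave': 8.0}
```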
According to Standards for Internal Control in the Federal Government, effective internal control systems have certain attributes, including reliable internal and external sources that provide data that are reasonably free from error and bias and faithfully represent what they purport to represent. Another attribute involves management evaluating both internal and external sources of data for reliability, and obtaining data on a timely basis so that they can be used for effective monitoring. Use of the EHRI payroll database has been limited because OPM has not made it widely available. This is in contrast to other, related OPM datasets, such as the EHRI personnel database, which OPM has prepared for use and made publicly available through multiple mechanisms, including FedScope, an online tool for data analytics. Because the EHRI payroll database has potential to be used for accountability, research, and data-driven human resource management and policy decision making, making it available would support OPM's strategic and open data goals. The EHRI payroll data have rarely been used since the database became operational in 2009. We identified four instances where the data have been used, primarily by OPM or by GAO to respond to congressional requests for information. Specifically, (1) OPM used EHRI payroll data to calculate rough estimates of official time—paid time that employees spend on union-related activities—for its 2009 to 2012 reports to Congress; (2) we made similar use of EHRI payroll data to estimate use of official time in selected federal agencies in a 2014 report, which revealed limitations in OPM's method of estimating the governmentwide costs of official time; (3) we used EHRI payroll data in a 2014 review that found inconsistencies in how agencies recorded and reported the use of administrative leave; and (4) we used EHRI payroll data in 2016 to report on the use of administrative leave at the Department of Homeland Security (DHS). In this final case, our use of EHRI data enabled a more detailed examination of DHS's use of administrative leave and helped verify the reliability of the information obtained from DHS. Aside from these four instances, the data remain largely unused because OPM does not make the data available to the larger research community or to federal agencies. This is in contrast to other OPM data, such as the EHRI personnel data, which have been widely used since OPM made them available. OPM has taken specific steps to make these other human resources-related data available, but has not taken any of these steps for the EHRI payroll database. For example, OPM has made the EHRI personnel database available by (1) cleaning it and preparing it for statistical analysis; (2) integrating it with other data in a repository known as the EHRI Statistical Data Mart; (3) making deidentified data—that is, data without personally identifiable information—accessible through FedScope, an online data analytics tool that draws on data in the Statistical Data Mart; (4) listing data that are available (either by download or by request) on the OPM website and on Data.gov; and (5) sharing requested data with other parties, such as think tanks and academic researchers. Data in the EHRI Statistical Data Mart are also processed and repackaged to make them more available and usable. Specifically, data received by OPM from agencies and stored in the EHRI Data Warehouse are further processed and cleaned and placed into a format better suited for analysis.
This process involves additional corrections and generates additional data elements likely to be useful for statistical analysis. These processed and prepared data are then submitted to the EHRI Statistical Data Mart, which forms the basis for FedScope. Both the FedScope analytics tool and the downloadable datasets are accompanied by documentation that clarifies the meanings of the data elements and limitations associated with the data. OPM also makes other EHRI data available through its website and Data.gov. Established in 2009, Data.gov is administered by the General Services Administration (GSA) as a public portal for government data in accordance with the government-wide open data policy. It includes information about and links to datasets from executive branch agencies. As of September 2016, users are able to find references to the EHRI personnel and retirement data on Data.gov. From Data.gov, users can follow links to the personnel data in FedScope and request access to the retirement data. OPM also offers a suite of analytic tools for agencies to perform workforce analyses and forecasting on the data in the EHRI Data Warehouse. Unlike the EHRI personnel and retirement data, the EHRI payroll data have not been made available in any of these ways. The documentation that accompanies the EHRI Statistical Data Mart specifically notes the absence of payroll data as a limitation, warning users that the data elements related to pay reflect only annualized rates of pay, and that employees’ actual pay may be lower or higher due to such factors as overtime or leave without pay, which would be addressed if the payroll data were integrated into the Statistical Data Mart. The four databases within the EHRI Data Warehouse were designed with linking identifiers to enable such integration. Even though this capability exists, for the past seven years, the payroll database has not been linked with other EHRI databases, is not incorporated into the Statistical Data Mart, and has been left largely unchecked and unused. Until the payroll data are made available, such as by incorporating them into the Statistical Data Mart and linking them to the other EHRI databases as designed, OPM will not be able to crosscheck data across the databases for accuracy, and the data will not benefit from the processing and preparation for statistical analysis that is performed for data in the Statistical Data Mart. Because reliability issues are often identified during use of the data, greater use of the EHRI payroll data by other parties would also have the benefit of helping to improve and establish the data’s reliability. As noted in GAO’s Assessing the Reliability of Computer-Processed Data, past users can be valuable sources of information about the completeness, accuracy, limitations, and usability of a dataset. For example, our prior reports that utilized the EHRI payroll data uncovered reliability issues, and OPM itself discovered a reliability issue when it attempted to use the data to analyze the use of sick leave (an issue we describe later in this report). The EHRI payroll data have potential to be used for research and analysis on topics related to OPM’s strategic and open data goals. 
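The kind of integrity crosscheck that this database linkage would enable can be sketched as follows, assuming hypothetical, simplified extracts keyed on the shared EHRI ID; the field names are illustrative only.

```python
# Hypothetical, simplified extracts from two EHRI databases.
personnel = {  # ehri_id -> record from the personnel database
    "0001": {"agency": "AG00", "status": "active"},
    "0002": {"agency": "AG01", "status": "active"},
}
payroll = [  # biweekly payroll records
    {"ehri_id": "0001", "agency": "AG00", "gross_pay": 3125.50},
    {"ehri_id": "0003", "agency": "AG00", "gross_pay": 2890.00},  # no match
]

def crosscheck(payroll_rows, personnel_index):
    """Flag payroll rows whose linking ID or agency disagrees with
    the corresponding personnel record."""
    findings = []
    for row in payroll_rows:
        match = personnel_index.get(row["ehri_id"])
        if match is None:
            findings.append((row["ehri_id"], "no personnel record"))
        elif match["agency"] != row["agency"]:
            findings.append((row["ehri_id"], "agency mismatch"))
    return findings

print(crosscheck(payroll, personnel))
# [('0003', 'no personnel record')]
```

Beyond integrity crosschecks of this kind, the payroll data could also support substantive research.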
In particular, EHRI payroll data include detailed information on pay, incentives, leave, work activities, telework, and other aspects of the federal workforce that could support OPM's strategic and open data goals for data-driven research in areas such as audits and human resource analytics and decision making, according to our review of literature and interviews with OPM officials. OPM's strategic goals call for the agency to develop and provide access to data systems that support human resources–related research and analytics both within and outside of OPM. As part of its strategic goal to serve as the thought leader in research and data-driven human resource management and policy decision making, OPM's strategies include (1) developing data systems to support such analysis and (2) fostering partnerships with work groups, agencies, universities, and industry to access and analyze data. As part of its strategic goal to manage information technology systems efficiently and effectively, OPM's strategies include providing greater access to human resources data and enabling data analytics to inform policy and decisions. In addition, OPM has open data goals that involve making data available and usable, in part to help ensure governmentwide accountability. In particular, OPM's flagship enterprise information management initiative, a part of its most recent Open Government Plan, includes (1) ensuring that data are easily retrievable and highly usable for analytics and decision making, (2) promoting a culture of collaboration and partnerships with external stakeholders, and (3) releasing data to foster a broader conversation with the public by allowing third parties to conduct their own analyses and even create their own applications using OPM data. The utility of the EHRI payroll data for governmentwide accountability is demonstrated in audits that have been conducted using agency-specific data. For example, the Department of Defense Inspector General (DOD IG) used agency payroll data to conduct an audit of the Defense Finance and Accounting Service (DFAS), DOD's payroll provider. Specifically, by using agency payroll data, the DOD IG was able to identify improper payments to federal civilian employees. Improper payments occur when funds go to the wrong recipient, the recipient receives an incorrect amount of funds, or the recipient uses the funds in an improper manner. DFAS determined that payments were being made to accounts with invalid social security numbers and to employees under the legal employment age, and that pay for multiple employees was being deposited into the same bank account. These improper payments amounted to more than $15 million over a six-year period. We identified similar improper payment audits conducted by the Small Business Administration (SBA) and the Social Security Administration Office of Inspector General (SSA IG). Agency-specific payroll data have also supported audits of the use of retention incentives. For example, in 2011, the Department of Veterans Affairs (VA) Office of Inspector General examined retention incentives paid to VA employees in fiscal year 2010. Using agency payroll data, it found that officials responsible for reviewing and approving retention incentives did not adequately justify and document awards in accordance with VA policy. Also, VA officials did not always terminate retention incentives at the end of set payment periods. As a result of its review, the VA Inspector General questioned the appropriateness of nearly 80 percent of the incentives it reviewed.
These incentives totaled about $1.06 million in fiscal year 2010. Agency-specific payroll data have also been used to audit agencies' reports on the amounts they have withheld or deducted from employees' pay for retirement, health benefits, and life insurance. For example, the DOD IG, in collaboration with OPM, has checked payroll data against Official Personnel Files to assess whether withholdings from pay appear reasonable for employees in multiple departments whose pay is processed through DFAS. The Department of Agriculture IG has done a similar audit for employees in multiple departments whose pay is processed through the National Finance Center (NFC). Identifying payroll fraud, monitoring retention incentives, and assuring accuracy of withholdings are issues that can affect all agencies. The EHRI payroll database contains data elements necessary for conducting such audits, with the advantage that it includes data for all executive branch agencies, thus enabling governmentwide reviews. Our review of the literature and interviews with OPM officials suggest that key elements in the EHRI payroll data have the potential to be used to understand a variety of human capital outcomes in the federal government. We identified studies that could have benefitted from the availability of the EHRI payroll data to examine (1) the relationship between demographic characteristics and pay disparities and costs of compensation; (2) the effects of flexibilities, such as telework, on employee retention and motivation; and (3) the use of various types of leave. Because the EHRI payroll data were unavailable, the studies we identified tended to make use of data that were less precise, less directly relevant, or less comprehensive than the EHRI payroll data. These studies illustrate some of the types of research that could be done more precisely or more comprehensively if EHRI payroll data were made available. According to our review of the literature, EHRI payroll data could also inform studies of disparities in pay among demographic groups in the federal workforce and enable more precise analysis of the costs of federal compensation. For example, a 2015 article in the Journal of Public Administration Research and Theory compared long-term professional mobility between federal employees who received veterans' preferences and those who did not. To measure mobility, the study relied on employees' General Schedule (GS) grade levels from OPM's personnel data system. A 2012 study in the International Review on Public and Non-Profit Marketing examined the relationship between performance-based pay initiatives and discrimination complaints in selected federal agencies using agency data related to equal employment opportunity records. A 2009 study in the American Review of Public Administration examined potential drivers of the narrowing pay gap between men and women in the federal government, including changes in seniority, differences in fields of study, and women's migration into traditionally male fields. To measure the pay gap, these researchers relied on data on employees' self-reported annual salary. Although these studies of disparities provide insight into the impact of human capital decisions concerning hiring and pay, they had to rely upon GS grade levels, administrative data, or average annual salary levels to assess outcomes. Each of these measures has limitations that could have been addressed if EHRI payroll data had been available.
This is because the EHRI payroll data were designed to provide standardized, governmentwide information by pay period regarding actual pay, incentive pay, telework and leave hours, and numerous other data elements related to federal work activities and compensation. The data used in the three studies, however, did not provide precise measures of compensation because pay ranges overlap grade levels in the GS system and individuals with different grades can receive similar pay rates. In addition, approximately 30 percent of federal employees are not covered by the GS system and would be excluded from such assessments by default. Available measures of annualized salary used in these studies are also imprecise. An employee's actual earnings may include other forms of pay (for example, overtime or shift differentials) not included in adjusted basic pay, or may be less than the annualized rate because of the employee's work schedule (for example, a less than full-time schedule) or individual circumstances (for example, leave without pay). Also, incorporating administrative records of pay into analyses can be challenging because methods of collecting and reporting otherwise similar payroll information vary significantly across federal agencies. In contrast, if EHRI payroll data had been available for these studies, they could have addressed some of these limitations because the data are designed to reflect actual pay, including any special pay, overtime pay, or other incentives and awards an individual might receive each pay period. They are also centralized and designed to be consistent across agencies. Studies of the impact of workforce flexibilities, such as telework, on employee retention and motivation also demonstrate potential uses of the EHRI payroll database. A 2010 article in Public Manager described how agencies, such as the U.S. Nuclear Regulatory Commission, have used telework to improve retention of employees with critical skills. Similarly, a 2013 study published in the American Review of Public Administration found that federal employees who engaged in frequent or infrequent telework were no more likely than their counterparts who do not telework to express an intention to leave, while a 2012 study in the same journal concluded that employees at the Department of Health and Human Services who telework were not significantly more motivated than those who choose not to telework. Instead of using time spent teleworking drawn from time and attendance records, both of these studies relied on data from the Federal Employee Viewpoint Survey, in which federal employees self-report whether they engage in telework "frequently" or "infrequently." These studies would have benefitted from EHRI payroll data, which include telework fields that, if reliable, could yield more precise findings by allowing researchers to use the actual number of hours or days employees teleworked per pay period, rather than employees' generalized descriptions of their telework frequency. The telework fields in the EHRI payroll data could also help OPM to meet statutory requirements to monitor and report on governmentwide use of telework, and OPM has recently issued a memo to agencies indicating that it will start using the payroll data to do so. In our 2012 report on OPM's ability to meet this requirement, we found that estimates of telework among federal employees were limited to data calls to agencies because some agencies did not track telework in their time and attendance systems.
In that report, we concluded that the accuracy of telework participation and frequency data for some agencies was questionable. The EHRI payroll reporting requirements now include data elements for continuous and episodic telework. The availability and use of these data elements for analysis would allow for more accurate and efficient assessments to meet statutory reporting requirements, and would be of use to policymakers. Our review of literature also indicates that the EHRI payroll data are potentially useful for analyzing the use of leave. OPM officials told us that they would use the data to analyze use of sick leave and annual leave across the federal government if they had sufficient resources. In the past, officials conducted such analyses by requesting data on an ad hoc basis from agencies, but, according to OPM officials, that process was too resource-intensive to continue. Our review of recent studies suggests that researchers share OPM's interest in these topics. For example, a 2015 study in the Journal of Occupational and Environmental Medicine used time and attendance records from an unidentified federal agency to examine sick leave use among different demographic groups. Using survey data, another study examined the impact of different office designs on sick leave use among Swedish workers, finding that open office plans were associated with significantly higher reported rates of sick leave use. However, research on trends in leave use and impacts of certain policies on leave use across the federal government has been limited in the past by a lack of comprehensive and standardized leave use data, which could be addressed if EHRI payroll data were made available to agencies and researchers. Collectively, these studies demonstrate the value of fields within the EHRI payroll database—such as pay, telework, leave, and other compensation data—in assessing human capital outcomes. However, in all of the studies we identified, researchers relied on proxy, annualized, or self-reported measures, as opposed to actual measures of pay, telework, leave, and other key variables. Further, researchers typically relied upon data covering a limited number of federal agencies or employees. Compared to the EHRI payroll data, these sources do not provide governmentwide data or the same level of precision or detail for assessing policy outcomes among federal employees, which can affect the results of analysis. Studies relying on annualized salary may over- or understate actual compensation given the timing of personnel actions, such as hires, separations, promotions, and leave, which can affect the actual amount of pay employees receive in a year. Studies relying on information about a small subset of the federal workforce may not provide reliable insights about overall federal human capital trends or policy effects. We and others have noted the importance of appropriate methods and data in comparing benefits and wages among federal employees and their private-sector counterparts. Other data sources that have been used instead of the EHRI payroll data—such as OPM's EHRI personnel database and the Federal Employee Viewpoint Survey (FEVS)—also have limitations that could be addressed if the payroll data were made available. For example, the EHRI personnel database does not contain information on the amounts of time spent on sick leave, annual leave, administrative leave, official time activities, and telework, among other variables relevant to compensation and time use studies.
OPM's FEVS—a governmentwide database of federal employee perceptions on their agency's policies and practices—contains data elements related to pay, but limitations on the reliability of these data elements have been identified. In addition to the specific studies we reviewed in detail, we identified hundreds of articles on topics that correspond to EHRI payroll data fields. For example, we identified 276 articles with the phrases "administrative leave," "annual leave," "court leave," "family leave," "medical leave," "military leave," or "unpaid leave" in their titles that have been published in peer-reviewed journals since 2009, when OPM launched EHRI. In addition, we identified 37 peer-reviewed studies with the term "telework" in their titles and six with the phrase "performance-based pay" in their titles. Although an in-depth assessment would be necessary to determine the reliability of individual fields for any of the specific purposes noted above, our basic reliability testing suggests that several key fields in the EHRI payroll data are reasonably complete and contain data within expected ranges—and therefore would have potential to support research on these topics if the EHRI payroll data were made available. (See appendix I for more detailed results of our electronic tests of EHRI payroll data reliability.) As long as the EHRI payroll data remain unavailable, federal pay and work-related research will be limited and OPM will continue to miss opportunities to support its strategic and open data goals. OPM has designed and implemented some control activities to ensure the reliability of EHRI payroll data, but weaknesses in these controls limit OPM's ability to fully leverage these data in support of its mission. As described earlier, we assessed OPM's internal controls on the payroll data against two of the five elements of the Standards for Internal Control in the Federal Government: control activities and monitoring. Control activities are the actions management establishes through policies and procedures to achieve its objectives, including appropriate documentation of internal controls. Control activities help agencies ensure the reliability of data within information systems, such as the EHRI payroll system. Monitoring is necessary to promptly resolve the findings of audits and other reviews so that corrective actions necessary to achieve objectives are taken in a timely manner. A deficiency exists when the design, implementation, or operation of a control does not allow management or personnel to achieve control objectives or address related risks. While OPM internal controls provide some assurance of the reliability of EHRI payroll data, weaknesses in the design or implementation of certain control activities and monitoring controls for the EHRI payroll database increase the risk of reliability issues that may limit OPM's ability to fully leverage the data in support of its mission. Specifically, (1) weaknesses in control activities have resulted in limited quality checks and acceptance of unreliable data into the EHRI payroll database; and (2) weaknesses in monitoring activities have resulted in failure to address these reliability issues and increased risk that these issues may compound over time. Table 1 lists these control components and related activities, along with an assessment of whether they provide reasonable assurance of OPM's ability to achieve its objectives in these areas.
OPM guidance includes requirements for automated controls in support of data quality, such as defining data parameters and tolerances, identifying data errors, checking for completeness, and taking corrective actions when necessary. According to EHRI documentation and OPM officials, automated edit checks are performed by the data system software to check the validity of individual data elements, the proper relationship of values among associated data elements, and data format specifications. The rules check the value and format of fields, including record-identifying fields, such as birthdate and agency, as well as non-record-identifying fields, such as hours of leave. Specifically, they check to make sure that fields are formatted as numbers, dates, or text, depending on the designed content of the field, and that all values in a field fall within a defined range of possible values. Further, OPM applies three relational edits to ensure (1) that actions taken to add an employee to the system are not associated with an employee already in the system, (2) that actions taken to correct a record are associated with an existing record, and (3) that actions taken to delete a record are associated with an existing record. See table 2 for a description of the fields that are checked, the rule that is applied, and the action taken if the rule identifies an error. Data that fail OPM's automated checks are considered errors, and the edit rules for the EHRI payroll system specify that data with error rates greater than 3 percent will not be accepted. However, according to OPM officials, the payroll data enter the EHRI Data Warehouse with very few other edits. OPM officials noted that they intend to define additional edits that could be applied to the payroll data during the data loading process. Federal standards for internal control state that management should design control activities to achieve objectives and respond to risks. However, due to the limited nature of these edits, EHRI payroll data have a higher risk of data reliability issues that may limit OPM's ability to fully leverage the data in support of its mission. For example, our electronic testing of data from 2010 through 2015 found fields with missing data, logical errors, and out-of-range values. (For selected results of electronic testing of the data, see appendix I.) OPM's automated rules also require the system to check the number of records for each agency every pay period, and are designed to reject submissions and generate an automated report for the service centers when the number of records has changed by more than 5 percent. While the reports are generated, OPM officials told us that resource constraints preclude them from having the same level of controls in place for the payroll data as they do for other EHRI data, including lack of capacity to follow up on missing payroll submissions. Federal standards for internal control state that management should evaluate information processing objectives to meet the defined information requirements, such as for completeness and accuracy. However, the EHRI payroll system accepted multiple submissions of data that should have been rejected by this rule. Specifically, our testing of the data found that, for nine separate pay periods in fiscal year 2014, payroll data records for some agencies contained less than 1 percent of the affected agency's total civilian workforce. In all, 17 of the 24 CFO Act agencies were affected by this problem at least once in fiscal year 2014 (see table 3).
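A minimal sketch of edit checks patterned on those described above appears below, combining format and range rules with the 3 percent error rate tolerance and the 5 percent record count tolerance. The specific field names, formats, and per-field ranges are assumptions for illustration, not OPM's actual edit rules.

```python
import re

# Illustrative edit rules: each returns True when a value passes.
RULES = {
    "birthdate": lambda v: re.fullmatch(r"\d{4}-\d{2}-\d{2}", v) is not None,
    "agency_code": lambda v: re.fullmatch(r"[A-Z0-9]{4}", v) is not None,
    "leave_hours": lambda v: isinstance(v, (int, float)) and 0 <= v <= 80,
}
ERROR_RATE_LIMIT = 0.03    # reject files with more than 3 percent errors
COUNT_CHANGE_LIMIT = 0.05  # flag >5 percent change in record counts

def validate_submission(records, prior_record_count):
    """Apply per-field edit rules and the two file-level tolerances."""
    errors = sum(
        1 for rec in records
        for field, passes in RULES.items()
        if field in rec and not passes(rec[field])
    )
    checks_run = sum(1 for rec in records for f in RULES if f in rec)
    error_rate = errors / checks_run if checks_run else 0.0
    count_change = (abs(len(records) - prior_record_count)
                    / max(prior_record_count, 1))
    return {
        "accept": error_rate <= ERROR_RATE_LIMIT,
        "flag_count_change": count_change > COUNT_CHANGE_LIMIT,
        "error_rate": error_rate,
    }

records = [{"birthdate": "1975-06-30", "agency_code": "AG00", "leave_hours": 8}]
print(validate_submission(records, prior_record_count=1))
```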
Because these agency submissions are missing, governmentwide analytics that cover the affected dates will be similarly limited and incomplete. Although these submissions should have been flagged and rejected by OPM's edit check for having a greater than 5 percent change in the number of records from one pay period to the next, OPM was unaware of the missing data until we identified the problem. This was due, in part, to inadequate monitoring controls, which are described in more detail below. In addition, while OPM designed these control activities to meet requirements for completeness and accuracy, the controls have not always met their objective and therefore do not provide sufficient assurance that completeness requirements will be achieved. Failing to evaluate information processing objectives to see if they meet the defined information requirements for completeness increases the risk that some EHRI payroll data will be unreliable. OPM has established roles and responsibilities for users and other controls safeguarding accountability for data quality and security, and maintains information on data access and use activity. As designed, these user controls are intended to provide reasonable assurance that control objectives will be achieved if OPM monitors them. According to OPM officials, EHRI payroll database users must complete an application to gain access, and service provider points of contact are given access credentials once access forms are submitted and approved. Applications of users requiring administrative privileges on information system accounts receive additional scrutiny. In addition, OPM documentation establishes processes for updating the list of authorized users when accounts are created, deactivated, or deleted—for example, specifying that account passwords are to expire after 60 days and that accounts that are inactive for 60 days are to be deactivated. OPM has also designed controls to capture and save some metadata tied to data loading, data provider submissions, and user access. Automated processes capture metadata on user access, and those logs are stored in system audit tables, which are archived on a monthly basis and retained indefinitely, according to OPM officials. Logs detailing access for the three most recent months are available online. Within the data warehouse program where payroll data reside, applications have auditing functionality for user activity, which captures what the user did as well as what was accessed. Web application activity is also tracked, and logs are retained indefinitely. Reports issued to providers can be reconstituted in real time, and this information can be used for investigation. However, OPM officials told us they have not used this information for such investigations or reviews. As a result of this incomplete implementation of access controls, OPM does not know whether these controls are working appropriately. The two primary sources of documentation that guide submissions of EHRI payroll data into the system—the Guide to Human Resources Reporting (GHRR) and the Guide to Data Standards, Part B (GDS)—are not up-to-date and do not provide sufficient assurance that control objectives will be achieved. OPM relies on the service centers and agencies to assure the accuracy of payroll data submissions, as outlined in these documents. For example, the GHRR outlines each data element, its required format, and whether it must be included in the database.
The GDS outlines the required format for submissions to EHRI, including the file content, notification of transmission to OPM, file naming conventions, and transmission frequency. The GDS also acknowledges that the edits outlined for service centers and agencies in that document constitute the minimum required level of quality control and encourages agencies to supplement them based on the specifics of their internal programs and operations. The GHRR was last updated in July 2013 to reflect the inclusion of telework variables, but the GDS has not been updated since March 2012. Given the changes to the system and the control weaknesses noted above, OPM officials noted that all payroll data standards will have to be reworked to ensure they are robust for data collection and programming by the payroll providers. OPM officials did note their intention to update these guides to align with system, regulatory, and other changes, but did not have a detailed plan or time frames for doing so. Federal standards for internal control state that management should document internal controls to ensure that all transactions, documentation, and records are properly managed and maintained. OPM cannot ensure that the data quality control changes made to the system are fully implemented without updating its guidance documents. Out-of-date documentation does not provide sufficient assurance that control objectives will be achieved. Until OPM updates this documentation, it faces increased risk that data submissions will not be consistent with current requirements and recent changes to the system, which could affect the reliability of data submissions. As described above, the EHRI payroll system is designed to reject data and produce data quality reports when data error rates exceed 3 percent or when the number of records at an agency changes by more than 5 percent from one pay period to the next. According to OPM officials, biweekly EHRI data quality control reports and error files are made available to payroll providers on the EHRI portal. This quality control reporting is kept in the EHRI Data Warehouse indefinitely, and the quality control reports issued to providers can be reconstructed. The GHRR directs payroll providers to monitor these reports for deviations from previous norms and analyze them to identify potential issues in systems that gather and send EHRI data from the agency to OPM. OPM officials told us that they do not monitor these reports to identify and resolve problems, and resource constraints prevent the agency from following up with the payroll providers. While inconsistent implementation of control activities allowed incomplete data to be accepted by the EHRI system, this limitation in monitoring controls allowed these incomplete submissions to remain undetected and unaddressed by OPM. In addition, without timely review and correction of problems identified in these reports, OPM risks errors compounding with each biweekly data submission, as the error tolerance checks involve comparison of each new submission to the most recent submission, which itself may have been incomplete. Further, without timely identification and correction of such problems, missing data may not be recoverable. For example, in response to the missing data issue noted above, OPM contacted the relevant service centers to locate the missing files. However, service centers retain data for only 18 months from the original date of submission.
If the controls were working as designed, the service center would have been required to provide a corrected submission before the end of this retention period, and OPM would have reasonable assurance that the data for these pay periods were complete. Because of the delay in identifying this error, when OPM finally did request the data from the service centers, corrected data submissions were expected to require a significant amount of work because the retention period had passed. OPM was unable to provide the data within the time frames of this engagement, and it is unclear whether OPM will be able to retrieve the missing data from the relevant service centers. Federal standards for internal control state that management should establish monitoring activities for the internal control system and evaluate the results, and should remediate identified internal control deficiencies on a timely basis. Without appropriate efforts to review and respond to system-generated reports, OPM does not have sufficient assurance that the control objective will be achieved, and the risk of submissions of inaccurate or incomplete data is increased. While OPM's internal controls provide some assurance of the reliability of some of the EHRI payroll data, the weaknesses in control activities (controls for completeness, accuracy, and validity of information processing and appropriate documentation) and monitoring controls (ongoing during normal operations) may increase the risk for data reliability issues to arise and persist in the EHRI payroll data. We have also identified several data reliability issues through electronic testing of EHRI payroll data, in past GAO work, and through interviews with OPM officials. Collectively, these issues present challenges for fully leveraging the EHRI data, and may limit OPM's ability to utilize the data for some analyses in support of its mission and strategic goals. We found a variety of potential data reliability issues from our electronic testing of EHRI payroll data, as illustrated in appendix I. In some cases, these issues indicate the potential for reliability problems that may limit OPM's ability to fully leverage the data in support of its mission. For example, we found that the EHRI payroll data include records for six entities that should not be in the system due to exemptions from OPM reporting requirements, as shown in table 4. When using EHRI payroll data, the unintentional inclusion of these entities could affect some analyses and limit OPM's ability to draw valid conclusions from the data. We also found a small number of instances of social security numbers being assigned to multiple EHRI records, as shown in table 5 below. When using EHRI payroll data, this could indicate that some individuals appear in the data more than once, potentially affecting some analyses and limiting OPM's ability to draw valid conclusions from the data. In a 2014 report, we found that weaknesses in OPM's documentation for transactions and internal controls led to inconsistent reporting of administrative leave data and inclusion of some excepted agencies' data in payroll feeds. Specifically, in our report of agencies' use of administrative leave, we found differences between agencies' leave recording practices and what OPM officials consider paid administrative leave. In response, OPM issued guidance to agencies to review how they record administrative leave and clarify that administrative leave should not be routinely used for an extended time.
This guidance can help agencies and payroll providers to provide more consistent data on administrative leave, and improve the usefulness of EHRI payroll data for related analyses. OPM officials also told us about data reliability issues beyond those identified in this review. For example, OPM officials told us that, in 2015, they discovered a problem related to data on sick leave. Specifically, due to a programming error, the data element received from payroll providers that sums the number of sick leave hours an employee used in a year was populating an unrelated field in the EHRI payroll database. As a result, according to OPM officials, the amount of sick leave an employee used in any given year was not accurate. One of these officials told us that this problem may also apply to other variables. This suggests that OPM's edit checks, which were designed to maintain a minimum level of quality control, may not sufficiently reduce the risk of these types of errors. OPM officials told us they had plans to update EHRI security protocols, payroll documentation, testing for reliability issues, and data standards, but OPM has not documented these plans or created a schedule to implement them. For example, OPM officials told us that they planned to update their documentation beginning in fiscal year 2016, working collaboratively with OPM's program policy office, federal agencies, shared service centers, and other stakeholders. Although the EHRI payroll data were not part of the 2015 OPM data breach, agency officials told us that the agency is evaluating its current security posture and making necessary changes to protect the privacy and integrity of all the data it manages. According to OPM officials, these plans include preparing to deploy a new secure portal to applications and tools; improving use of encryption; masking and redaction when appropriate and prudent; consolidating data from multiple data sets into more secure databases; utilizing better and more secure user management tools and audit trail logging; and providing new forms of user authentication, among other potential security and access measures. OPM officials also said they are planning to correct the issue they had identified with the sick leave variable and that they were in the process of testing other variables to see if they had the same problem. However, OPM officials told us that these actions would require resources and reprioritization of the existing workload, and that a project plan and time frames had not yet been developed. Further, OPM officials noted that the agency has a critical leadership role in addressing the complete data life cycle, and that agencies and service centers also play a critical role in assuring data quality. Accordingly, OPM officials said they were seeking a comprehensive solution that includes agency and service center actions to ensure accurate data are submitted to EHRI. As yet, OPM has not linked EHRI payroll correction activities back to specific agency objectives or created a schedule for implementing these changes. GAO's schedule assessment guide notes that a well-planned schedule is a fundamental management tool that can help government agencies gauge progress, identify and resolve potential problems, and determine the amount and timing of resource needs. Without a well-planned schedule—developed in consideration of how it will contribute to OPM's objectives and risks—OPM may not be able to appropriately prioritize and execute the necessary changes.
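One generic safeguard against the kind of field-mapping error OPM described is a cross-field consistency test that recomputes a year-to-date total from the per-period values and flags any disagreement. The sketch below illustrates the idea; the field names and values are hypothetical, not the actual EHRI elements involved.

```python
# Biweekly records for one employee; field names are hypothetical.
periods = [
    {"pp": 1, "sick_leave_used": 8.0, "sick_leave_ytd": 8.0},
    {"pp": 2, "sick_leave_used": 0.0, "sick_leave_ytd": 8.0},
    {"pp": 3, "sick_leave_used": 4.0, "sick_leave_ytd": 40.0},  # inconsistent
]

def check_ytd_consistency(rows, used_field, ytd_field, tol=0.01):
    """Flag pay periods where the year-to-date total does not equal the
    running sum of per-period values, which is the symptom of a field
    being populated from the wrong source."""
    running, flagged = 0.0, []
    for row in rows:
        running += row[used_field]
        if abs(row[ytd_field] - running) > tol:
            flagged.append(row["pp"])
    return flagged

print(check_ytd_consistency(periods, "sick_leave_used", "sick_leave_ytd"))
# [3]
```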
Although use of EHRI payroll data has been limited to date, the data carry significant potential to support governmentwide accountability and human resource analytics and decision making. In our review of peer-reviewed journals, we identified hundreds of articles on topics that correspond to EHRI payroll data fields. Unfortunately, however, EHRI payroll data will continue to be underutilized until—consistent with its own strategic and open data goals—OPM makes the data available to potential users, as it does other databases within the EHRI system. While data collection and storage are not without cost, EHRI's centralized, standardized, and comprehensive features offer the promise of efficient, cost-effective, and more precise analytics. In preparing the data to make them available, OPM will need to take steps to process and clean them as it does for the EHRI personnel data. This is the first step toward improved reliability. Our basic reliability testing suggests that several key fields in the EHRI payroll data are reasonably complete and contain data within expected ranges—and therefore could have potential to support research on these topics. However, while some fields in the current EHRI payroll data may be sufficiently reliable for certain types of audits and workforce analytics, other fields suffer from reliability issues that limit the range of purposes for which the data can be used. This is because OPM has not designed sufficient control activities to assure data quality, has not evaluated or consistently implemented the control activities it has designed, and has not updated key documentation to support quality submissions of data. Compounding these problems is OPM's failure to monitor ongoing operations, for example, by reviewing system-generated reports. Without timely identification and correction, data quality problems will continue undetected and remain uncorrected. While OPM officials noted their intention to address these shortcomings, they do not have plans with specific actions and time frames for doing so. Without a schedule specifying when these planned changes will be made, OPM officials will be unable to gauge progress, identify and resolve potential problems, or determine the amount and timing of resource needs related to the desired changes. As a result, OPM faces an increased risk of implementing ineffective or contradictory changes, and of facing delays in completing these activities. Until relevant changes are made, existing problems can continue to compound as data for 2 million federal civilian employees are received biweekly. Without available and reliable payroll data, OPM and others must continue to rely on data that are more costly, imprecise, or limited in scope—missing opportunities to leverage centralized, standardized data that are essential for accountability and well-informed management and policy decisions. GAO is making five recommendations to the Director of OPM. GAO recommends that the Director of OPM take the following action to support its strategic and open data goals: Improve the availability of the EHRI payroll data—for example, by preparing the data for analytics, making them available through online tools such as FedScope, and including them among the EHRI data sources on the OPM website and Data.gov.
GAO recommends that the Director of OPM take the following two actions to improve internal controls for data quality: Update EHRI payroll database documentation to be consistent with current field definitions and requirements, including the Guide to Human Resources Reporting and the Guide to Data Standards, Part B; and Consistently monitor system-generated error and edit check reports and ensure that timely action is taken to address identified issues. GAO recommends that the Director of OPM take the following two actions to integrate the payroll data into the larger suite of EHRI databases: Develop a schedule for executing these plans; and Evaluate existing internal control activities and develop new control activities for EHRI payroll data, such as implementing transactional edit checks that leverage the information in the other EHRI datasets. We provided a draft of this report for review and comment to the Director of OPM. OPM agreed with our recommendations. In its comments (reproduced in appendix III), OPM noted that a lasting and effective solution for enhancing the quality of payroll data requires consistent data quality not just in the "last mile" after delivery to the EHRI system, but also at the origination of the data. OPM also noted that implementation of these recommendations will require collaboration between various stakeholders and appropriate resources. We agree. As we note in the report, while payroll processing is more consolidated than in the past, agencies still use a variety of time and attendance (TA) systems, which can vary in the level of detail with which work or leave time is recorded, depending on agency policies and systems. In addition, there are variations in the systems and processes of the payroll providers. These variations across agencies and across payroll providers underscore the importance of updated documentation for reporting and consistent monitoring of error reports. In addition, through its leadership role in the OPM-managed Human Resources Line of Business, OPM can consider action for ensuring data quality—for example, by including data quality indicators among its performance measures for the payroll providers. OPM also provided technical comments, which we incorporated as appropriate. We will send copies of this report to the appropriate congressional addressees and the Director of the U.S. Office of Personnel Management, as well as to other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2700 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The names of GAO staff who made key contributions to this report are listed in appendix IV. This appendix presents selected results from our electronic testing of the EHRI payroll data. The results are grouped into three categories: (1) tests for missing data, (2) tests for logical errors, and (3) tests for potentially invalid values.

1. Missing Data Tests

Incomplete data can limit both the ability to conduct desired analyses and the usefulness of any analysis conducted. For example, if a large amount of data is missing, as was the case with data missing for entire agencies for some pay periods, it may not be possible to complete analysis of that agency for the missing periods. If a smaller proportion of data is missing, analysis may still be possible.
However, any analysis completed using these data will be limited in its accuracy and validity, which may increase the risk of drawing inappropriate or invalid conclusions. Because the EHRI payroll data have been identified as potentially useful for governmentwide studies of telework behavior, we tested for missing data in telework-related fields. As shown in table 6, we found data in these fields to be missing entirely in 2011 and largely incomplete in 2012, years for which reporting on this variable was not required by OPM. The percentage missing is based on the number of records without values out of all records within a fiscal year. As a result, estimates of governmentwide telework participation, as shown in table 7, are likely to be inaccurate for these years.

2. Logical Error Tests

Logical testing can reveal data reliability issues among and within individual records. For example, logical testing can assess whether there are duplicates among records in the data system. Our electronic testing assessed whether there were duplicate records in the EHRI payroll data. We also looked for records with multiple payments, either from the same agency or from different agencies, in a single pay period—a form of duplication that could also indicate problems with data reliability. As shown in tables 8, 9, 10, and 11, we found no instances of complete duplicate records, few instances of EHRI IDs associated with more than one social security number, and that generally less than 1 percent of records were associated with multiple payments in a single pay period. Logical testing can also uncover data reliability issues within individual records. As shown in table 12, we found a number of variables with questionable values, given the values of other variables for the same record. For example, we found instances where the amount of annual leave used was greater than the amount available, and other instances where the data indicate an agency contribution to Federal Employee Group Life Insurance (FEGLI) for an employee who has not made a contribution. In both cases, this should not be possible under typical circumstances, and may indicate a data reliability issue.

3. Outlier and Out-of-Range Tests

Tests for invalid formats and values can reveal obvious errors in data. For example, as shown in table 13, we tested the format of the social security numbers (SSN) in EHRI, which should all be nine-digit numbers, and found some cases where these numbers were not properly formatted, indicating a potential data reliability issue that could prevent analysis of individuals when attempting to match SSNs. We also tested the values of fields in the EHRI payroll data to determine whether any records were outside of the expected ranges. As shown in table 14, we found several variables where the minimum value was below the expected possible floor or the maximum value was above the expected ceiling. For example, we found a maximum value for Student Loan Repayments of $100,000, which is well above the expected ceiling for such payments. We also found that the lowest minimum value for total salary in a pay period was negative $99,140, while all salary values should generally be positive for normal records (non-"correction" records). Data reliability assessments—a process consistent with internal control standards—gather and evaluate the information needed to determine whether data can be used to answer specific research questions.
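The three categories of electronic tests described in this appendix can be sketched in a few lines of code. The records, field names, and values below are illustrative assumptions, not the actual EHRI test suite or results.

```python
import re
from collections import Counter

rows = [  # simplified payroll records; values are illustrative
    {"ssn": "123456789", "ehri_id": "A1", "salary": 3125.50, "telework_hours": 8.0},
    {"ssn": "12345678",  "ehri_id": "A2", "salary": -99140.0, "telework_hours": None},
    {"ssn": "123456789", "ehri_id": "A3", "salary": 2890.00, "telework_hours": 0.0},
]

# Missing data test: share of records with no value in a field.
missing_rate = sum(1 for r in rows if r["telework_hours"] is None) / len(rows)

# Logical test: the same SSN appearing under multiple EHRI records.
ssn_counts = Counter(r["ssn"] for r in rows)
duplicate_ssns = [s for s, n in ssn_counts.items() if n > 1]

# Format test: SSNs should be nine-digit numbers.
bad_ssns = [r["ssn"] for r in rows if not re.fullmatch(r"\d{9}", r["ssn"])]

# Range test: per-period salary should not be negative on normal records.
out_of_range = [r["ehri_id"] for r in rows if r["salary"] < 0]

print(missing_rate, duplicate_ssns, bad_ssns, out_of_range)
# 0.3333... ['123456789'] ['12345678'] ['A2']
```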
In the context of such assessments, reliability means that data are reasonably complete and accurate to answer the intended questions that OPM, agencies, policy organizations, and academics might have about the federal workforce. Reliability assessments are specific to the context of the particular characteristics of the research project and the risk associated with the possibility of using insufficiently reliable data. Errors are considered acceptable if the associated risk has been assessed and a conclusion reached that the errors are not substantial enough to cause a reasonable person, aware of the errors, to doubt a finding, conclusion, or recommendation based on the data. To determine whether data are sufficiently reliable for a specific research purpose, one must consider the expected importance of the data in the final report; corroborating evidence; the level of risk of using the data; and the results of assessment work conducted to date. Completeness, accuracy, and validity are all components of reliability. Completeness refers to the extent to which relevant records are present and the fields in each record are populated appropriately. For example, are the payroll records for all on-board employees at an agency recorded for every pay period within the calendar year? Accuracy refers to the extent to which recorded data reflect the actual underlying information. For example, do the recorded hours of annual leave in an employee's payroll record accurately reflect the number of annual leave hours they reported on their time and attendance form? Validity, for the purposes of this report, refers to whether the data actually represent what is being measured. For example, if we are measuring the extent of overtime in the federal government and we use a field that records a certain type of administratively uncontrollable overtime, does that represent the extent of overtime use, or might there be other ways overtime is recorded? Data reliability assessments as a process include (1) reviewing existing information about the data and conducting interviews with officials from the entity or entities that collect the data, (2) reviewing selected system controls, and (3) performing tests on the data, such as advanced electronic analysis and tracing to and from source documents. In addition to the contact named above, the following individuals made important contributions to this report: Sidney Schwartz, Director; Rebecca Shea, Assistant Director; Russ Burnett; Steven Putansu; David Blanding; Hiwotte Amare; Joanna Berry; Amy Bowser; Tim Carr; Melinda Cordero; Sara Daleski; Lorraine Ettaro; Dani Greene; Donna Miller; Laura Pacheco; and Jeffrey Schmerling.

OPM is tasked with supporting federal agencies' human capital management activities, which include ensuring that agencies have the data needed to make staffing and resource decisions to support their missions. The EHRI system is OPM's primary data warehouse to support these efforts. The payroll database—one of the four databases in the EHRI system—became operational in 2009. Payroll data provide information on federal employees' pay and benefits and how they allocate their time, as reflected in hours charged to work activities and use of leave. EHRI data are essential to governmentwide human resource management and evaluation of federal employment policies, practices, and costs. The ability to capitalize on this information is dependent, in part, on the reliability of the collected data.
GAO undertook this review to examine the extent to which (1) EHRI payroll data have supported OPM's strategic and open data goals and (2) internal controls are in place to assure the reliability of the data. GAO reviewed the literature, interviewed officials and reviewed documents from OPM and the payroll Service Centers, compared OPM's data quality processes to GAO's Standards for Internal Control, and performed electronic tests of the payroll data. The Enterprise Human Resources Integration (EHRI) payroll data are not fully supporting the Office of Personnel Management's (OPM) strategic and open data goals. This is because OPM has not taken the steps necessary to make the data widely available for use by other agencies and researchers. EHRI payroll data are intended to provide a centralized, standardized, and comprehensive source of pay- and leave-related data across the federal government. In this capacity, these data have the potential to provide a more efficient, cost-effective, and precise data source for federal agencies and researchers who wish to assess human resources and policy decision making across the federal government. Because these data are not widely available, federal agencies and researchers must rely on other proxy sources for payroll data, which are more limited in the scope of analysis they can provide or the level of detail needed for data-driven human capital studies. Although some elements of the data are sufficiently reliable for general use, weaknesses in OPM's internal controls for the EHRI payroll data will need to be addressed to enhance the reliability of other data elements. As shown in the table below, GAO's assessment of key internal control activities that are critical to ensuring the reliability of the EHRI payroll data found a number of areas where there is insufficient assurance that the control objective will be achieved. These weaknesses increase the risk of data errors, incomplete data fields, and ineffective monitoring of the EHRI payroll data. Unless OPM takes steps to correct these internal control weaknesses, it will be unable to fully leverage these data to meet its mission and allow others to make full use of these data for their research needs. GAO is making five recommendations, including that OPM improve the availability of its payroll data and implement additional internal control activities to better ensure data reliability. OPM agreed with all of GAO's recommendations.
The four-seat EA-6B Prowler aircraft conducts missions for all services. The airborne electronic attack (AEA) mission is focused on protecting U.S. aircraft and ground forces by disabling enemy electronic capabilities. The EA-6B performs this mission with a complement of electronic receivers and jammers, referred to as its electronic suite, which are located on the aircraft structure and in external pods attached to its wings. A development effort is currently under way to replace the EA-6B with a two-seat electronic attack variant of the F/A-18F, designated the EA-18G Growler. The EA-6B joined the Navy’s fleet in January 1971. The EA-6B’s initial deployment was in 1972 over the skies of Southeast Asia. Since the early 1990s, use of the EA-6B has steadily increased. In 1991 the aircraft was used in Operation Desert Storm and in support of Iraqi “no-fly” zones instituted after that war. In 1995, the EA-6B was selected to become the sole tactical radar support jammer for all services after the Air Force decided to retire its fleet of EF-111 aircraft. This decision resulted in increased use of the EA-6B. Since 1995 the Prowler force has provided AEA capability during numerous joint and allied operations against both traditional and nontraditional threats. It was used to provide support for Operation Allied Force in Kosovo and for peacekeeping operations over Bosnia-Herzegovina and Yugoslavia, and is currently being used against traditional and nontraditional target sets in support of ground forces. These capabilities continue to be demonstrated in the Global War on Terrorism, in which EA-6B operations in Afghanistan and Iraq protect coalition forces and disrupt critical communications links. There have been several upgrades to the EA-6B’s electronic suite since it was initially fielded to address increased threats faced by U.S. forces. The standard version, fielded in 1971, was quickly replaced in 1973 with the expanded capability EA-6B, which augmented the electronic countermeasure coverage of the aircraft. In 1977, the Improved Capability (ICAP) version entered service, and was followed by a more sophisticated ICAP II version, first deployed in 1984. The EA-6B/ICAP II featured updated receivers, displays, and software to cover a wider range of known surveillance and surface-to-air missile radars. As a result of heavy use and the limited inventory of the EA-6B, the Joint Chiefs of Staff directed that the inventory of EA-6Bs be managed as low-density/high-demand (LD/HD) assets. Low-density/high-demand assets are force elements consisting of major platforms, weapon systems, or personnel that possess unique mission capabilities and are in continual high demand to support worldwide joint military operations. In 1998 an ICAP III upgrade was initiated to address capability gaps against threats from mobile surface-to-air missile systems. In addition, concerns surfaced about an anticipated decline in the EA-6B inventory because of structural fatigue issues. As a result, an AEA analysis of alternatives (AoA) was started in 1999 to find a replacement for the EA-6B. At that time it was anticipated that the EA-6B would remain in the inventory until at least 2015. Plans, as recently as December 2001, were to upgrade all 123 EA-6B aircraft in the inventory to the ICAP III configuration. The ICAP III provides rapid emitter detection, identification, geolocation, selective reactive jamming, and full azimuth coverage.
Also, ICAP III-equipped EA-6Bs will have the ability to integrate multiple EA-6Bs to match any threat density, and to control other manned or unmanned assets. The upgrade is needed to address capability gaps in the ICAP II electronic suite presently installed in EA-6B aircraft. The EA-6B ICAP III production line is currently scheduled to shut down after the fiscal year 2006 buy. The AoA report, published in 2002, concluded that an EA-6B replacement would be needed in 2009 to meet the services’ needs. The AoA further concluded that two components are needed to provide a complete AEA solution that is able to meet DOD’s collective needs. These two components are a recoverable “core” component and an expendable “stand-in” component. The AEA AoA report identified 27 platform combinations that were capable of delivering jamming support. The study concluded that the final AEA solution must address both anticipated short-term platform shortfalls and how best to implement the follow-on capability based on the menu of alternatives developed by the AoA. In addition, the study concluded that before a service can begin a formal acquisition program, the discussion should consider, among other things, whether one service will provide all DOD core component capability, and whether the AEA core component will reside on a single platform. Subsequent to the AoA report, the Navy and the Air Force each decided to develop their own unique aircraft from the 27 platform combinations identified in the AoA to perform the core component of AEA, as shown in figure 2. The Navy opted to develop the EA-18G Growler, a derivative of the F/A-18F, as its core component. The Air Force decided to develop an electronic attack variant of the B-52, designated the EB-52 SOJ (Standoff Jammer), to function as its core component of the AoA solution and an unmanned combat air vehicle and an unmanned decoy as the expendable stand-in components of its AEA AoA solution. The Marine Corps opted to continue using the EA-6B with the ICAP III electronic suite in anticipation of an electronic variant of the Joint Strike Fighter (F-35) being developed as a replacement for its EA-6Bs. The combination of these service AEA solutions makes up the DOD AEA system of systems, depicted in figure 2 with elements such as the F/A-18 E/F, F-22, and JSF AESA and the kinematic range of known SAMs. As a result of these changes, the services have updated a memorandum of agreement that would allow Navy expeditionary EA-6B squadrons to be decommissioned between fiscal years 2009 and 2012, to be replaced by U.S. Air Force electronic attack capability. The Navy’s aircraft would be dedicated to providing carrier-based AEA support to the Navy. The Navy determined that an inventory of 90 aircraft would be needed to support the Navy’s core component requirement. In 2001 it was projected that an inventory of 108 EA-6Bs would be needed if the Navy were to continue to provide AEA mission support to all the services. In February 2006, DOD proposed to terminate two major components of the system of systems: the B-52 Standoff Jammer system and the Joint Unmanned Combat Air System (J-UCAS). The goal of the B-52 SOJ program was to provide long-range jamming of sophisticated enemy air defense radars and communications networks, using high-powered jamming equipment. The Air Force believes that a standoff jamming capability is still required, and it is investigating the solution options, platform numbers, and mix to deliver this capability.
As part of the cancellation of the B-52 SOJ, the Air Force is investigating other solution options and platforms to provide the standoff capability, including examining how the B-52 SOJ cancellation affects Navy plans to retire the expeditionary squadrons of EA-6Bs. The goal of the J-UCAS program is to demonstrate the technical feasibility and operational value of a networked system of high-performance and weaponized unmanned air vehicles. The conclusion of the May 2002 AoA report that the EA-6B inventory would be insufficient past 2009 was not based on the Navy’s requirement for 90 aircraft, but on an inventory requirement of 108 aircraft that would meet the needs of all services. The decision to move to a system of systems using multiple aircraft types means the Navy will no longer be required to support all of DOD’s electronic attack requirements. As a result, EA-6B aircraft will be able to meet the Navy’s suppression of enemy air defense needs through at least 2017 and the needs of the Marine Corps through 2025 as long as sufficient numbers of the aircraft are outfitted with ICAP III electronic suites. If the Navy is required to support all services, given the recent Air Force proposal to terminate the EB-52 standoff jammer program, additional EA-6Bs may require the ICAP III upgrade. According to program officials, the EA-6B ICAP III electronic suite upgrade was determined to be operationally effective and suitable in 2005 and has proven to be significantly better than the ICAP II electronic suite that is currently in use on all but a few EA-6Bs. However, while the EA-6B inventory decline has been postponed, the planned number of aircraft that would receive the ICAP III electronic suite upgrade has been significantly reduced, leaving most EA-6Bs with a shortfall in electronic attack capability against some current and future threats. Production of the EA-6B ICAP III upgrade is scheduled to end after the 2006 buy. Program officials said that DOD’s 2002 decision to move to a system of systems concept has reduced the inventory requirement for the Navy from 108 aircraft to 90 aircraft. The Navy determined that an inventory of 90 aircraft would be needed to support the Navy’s core component requirement. An inventory of 108 EA-6Bs would be needed if the Navy were to continue to provide electronic attack mission support to all the services. The memorandum of agreement between the services, under which the EA-6B has been the sole provider of electronic attack since 1996, allows the Navy expeditionary squadrons to be decommissioned between fiscal years 2009 and 2012 and replaced by the U.S. Air Force’s EB-52 standoff jammer. However, the Air Force has recently canceled the EB-52 jammer. As shown in figure 3, the EA-6B inventory levels are now expected to be sufficient to meet the Navy’s requirement for 90 aircraft through at least 2017 and the Marine Corps requirement for 31 aircraft through 2025. Procurement of 114 replacement wing center sections for the EA-6B began in 1998; replacements have been made on 94 aircraft and are ongoing. A few aircraft have received more than one wing center replacement because of heavy use. As a result, program officials identified the fatigue life of the fuselage as the determining factor in projected inventory levels. The official estimated life analysis of the EA-6B was conducted between 1984 and 1988.
The aircraft used in that analysis had 1,873 actual flight hours when the test began, and program management believes that factor was not considered in determining the current fuselage life limit. Program management has asked that updated fatigue life charts be developed based on this information. Program management predicts that this will result in an increase in fuselage life to 14,000 hours, as shown by the solid line in figure 3. In addition, according to program officials, extended inventory life can be obtained by procuring 32 additional EA-6B wing center sections at an estimated cost of $170 million. This would result in an inventory of over 90 EA-6Bs through 2019. This projected inventory is represented by the dashed line in figure 3. However, according to program officials, Northrop Grumman Corporation will wrap up wing center section production late this summer, and any new wing center section production would have to be placed on order this year to avoid additional startup and production break costs. While the inventory of EA-6Bs is now projected to meet the Navy’s inventory needs through 2017, most of that inventory will be less able to address some current and future threats than recently anticipated. According to program documents, the ICAP II tactical jamming system, currently installed on most EA-6B aircraft, is limited in its ability to conduct numerous critical functions. Its receivers and integrated connectivity are limiting factors in the ICAP II’s ability to detect, locate, and react to threat systems. Threat systems have become more sophisticated and incorporate advanced technology, severely limiting the ability of the receivers on current ICAP II-equipped EA-6Bs to detect and identify threats. The ICAP III upgrade, at an estimated cost of $11.7 million per aircraft for the last four upgrades, provides selective reactive jamming capability; accurate emitter geolocation; full azimuth coverage; and a flexible command and control warfare core system that can integrate and coordinate multiple EA-6Bs to match any threat density, as well as the ability to integrate and control other manned or unmanned command and control warfare assets. Program officials project that a lower unit cost could be achieved if higher quantities are procured. Recent operational test and evaluation (OPEVAL) results for the EA-6B equipped with the ICAP III electronic suite have determined it to be operationally effective and suitable. Since these results, Navy operations and training units have flown and observed two EA-6B squadrons upgraded with ICAP III and found the upgrade to be significantly more capable than EA-6B aircraft equipped with the ICAP II electronic suite. According to Navy users who flew the EA-6B with ICAP III during a recent training detachment, the ICAP III system demonstrated a 30 percent increase in jamming effectiveness over the ICAP II. More data on the performance of ICAP III relative to the ICAP II system will become available as results develop from its first deployment, which recently occurred. Although the ICAP III-equipped EA-6Bs have been found to be significantly more capable, the number of aircraft funded to receive the ICAP III upgrade has been reduced compared with earlier DOD intentions to upgrade all EA-6Bs. Currently, only 14 EA-6B aircraft have been funded to receive the ICAP III upgrade because of funding reductions, development test results, and the decision in 2003 to replace the EA-6B with the EA-18G.
According to Navy and Marine Corps requirements officials, fitting only 14 EA-6Bs with ICAP III is not sufficient to allow for the transition to the EA-18G without leaving the services with an airborne electronic attack capability shortfall against some current and future threats. They believe that between 21 (to meet the Navy requirement) and 31 (to meet the Marine Corps requirement) EA-6Bs should be fitted with ICAP III to address this shortfall. However, an analysis provided by the EA-6B program office concluded that 44 ICAP III aircraft would be needed to meet both Navy and Marine Corps requirements. We have not validated the number of aircraft Navy and Marine Corps officials identified as needed. Because of recent decisions affecting Air Force electronic attack near-term capabilities, additional EA-6Bs may be needed if the Navy is tasked to support the electronic attack requirements of all services beyond 2010. However, increasing the number of EA-6Bs with ICAP III will not be an option if ICAP III production ends in 2006 as currently planned. The EA-18G development schedule is aggressive, according to program officials and the DOD Director of Operational Test and Evaluation’s 2005 annual report. While the program is currently on cost and schedule according to program officials, our analysis shows that the program is not fully following the knowledge-based approach inherent in best practices and DOD’s acquisition guidance, thus increasing the risk of cost growth and schedule delays. In addition, we have found that most research and development cost growth is reported after a program has passed the critical design review--the acquisition phase the EA-18G recently entered. Over the last several years, we have undertaken a body of work examining weapon system acquisition in terms of lessons learned from best system development practices. Successful programs attain high levels of knowledge in three aspects of a new product or weapon: technology, design, and production. If a program is not attaining high levels of knowledge, it incurs increased risk of problems, with attendant cost growth and schedule delays. The EA-18G airborne electronic attack program entered system development with immature technologies, and some of these technologies are still not mature. Also, while most of the design drawings are complete, it is possible that redesign may be needed in the future as the technologies mature. In addition, the Navy plans to procure a large percentage of the total EA-18G aircraft during low-rate initial production based on limited knowledge of the aircraft’s ability to perform the electronic attack mission. This could result in the need to retrofit already produced EA-18G aircraft, shown in mock-up form in figure 4, a possibility that the Navy is already anticipating. According to program officials, the EA-18G program is currently on cost and schedule. While it held its critical design review in April 2005, it is now in the phase where most research and development cost growth is recognized and reported. We recently reviewed the development cost experience of 29 programs that have completed their product development cycle--the time between the start of development and the start of production. We found that a significant portion of the recognized total development cost increases of these programs took place after the programs were approximately halfway into their product development cycle. These increases typically occurred after the programs’ design reviews.
The programs experienced a cumulative increase in development costs of 28.3 percent throughout their product development. Approximately 8.5 percent of the total development cost growth occurred up until the time of the average critical design review. The remaining 19.7 percent occurred after the average critical design review. Our work shows that the demonstration of technology maturity by the start of the system development phase is a key indicator of achieving a match between program resources (knowledge, time, and money) and customer requirements. We recently reported that the cost effect of proceeding into product development without mature technologies can be dramatic. Research, development, and test and evaluation costs for programs that started development with mature technologies increased by an average of 4.8 percent, while those that began with immature technologies increased by an average of 34.9 percent. In December 2003, after a truncated concept exploration phase, the EA-18G was approved to enter system development, in order to achieve a 2009 initial operational capability date directed by the Chief of Naval Operations. Prior to entering system development, the program office assessed the readiness of the EA-18G’s technologies and concluded that the system was not developing or advancing any new technologies and that only proven systems with minor modifications using mature technologies would be utilized. In addition, program officials stated that the EA-18G development benefited from the maturity of the F/A-18F platform and the airborne electronic attack suite currently flown on the EA-6B. Our assessment of the technology maturity of the EA-18G, however, differs from that offered by program officials. Over the last few years, we have reported on the system’s progress in our annual assessment of selected major defense acquisition programs. We have reported that at the start of system development none of the program’s five critical technologies were fully mature, and as recently as our March 2005 report this had not changed. While they are similar to the mature technologies found on the EA-6B and the F/A-18F, integrating those technologies on the EA-18G involves form and fit challenges. Three of the critical technologies--the ALQ-99 jamming pods, the F/A-18F aircraft, and the tactical terminal system--are approaching full maturity; two other technologies--the communications countermeasure set and the ALQ-218 receiver--are less mature. The Communications Countermeasures Set (CCS) provides communications detection and processing to the EA-18G. Among other things, it is used to degrade the effectiveness of the communications components that make up enemy integrated air defense systems. The existing set used on legacy EA-6Bs is out of production, and a replacement system is needed for use in the EA-18G. The new one is to be composed of new components, and it will function in a new environment. We believe that putting the CCS into the space constraints of the EA-18G platform may be a challenge and thus should be considered a technology risk to the program. The EA-6Bs fitted with ICAP III have a new technologically mature receiver, the ALQ-218, which is housed in the large space on the aircraft’s vertical tail. The ALQ-218 receiver for the EA-18G, however, is being split and redesigned so it can be integrated into the aircraft’s smaller wingtip pods.
The wingtip environment is also known to be harsh, with particularly severe noise and vibration that can degrade the reliability of receiver components. Isolators will be used in an attempt to lower the vibration levels. Since the ALQ-218 antenna elements will be subject to flexing of the wing that could reduce system performance, accelerometers will be placed in the wingtip pods to measure relative movement between the wingtips so that accurate threat locations can be determined. In addition, many subcomponents also include new and modified parts, so the receiver’s performance and delivery schedule are being tracked as risks to the program. Furthermore, the unique ALQ-218 wingtip covers, or radomes, have recently surfaced as potentially problematic. There are technical risks with the radome’s electrical characteristics and environmental specifications--especially its ability to meet hail strike requirements. The radome is being tracked as a high risk to the program because it may not meet a performance requirement. Flight tests on the EA-18G to measure the impact of noise and vibration on completed components will not start until February 2007. The performance of the ALQ-218 radome will not be known until flight tests that demonstrate its capability are conducted later this year. The maturity of the full ALQ-218 will not be fully known until the EA-18G aircraft completes flight tests with these components during developmental testing scheduled to start in April 2008. The design of the EA-18G appears stable because almost all of its design drawings are complete. However, the order in which knowledge is built throughout product development is important to delivering products on time and within cost. Our past work has shown that knowledge gaps have a cumulative effect. For example, design stability cannot be attained if key technologies are not mature. Until all the EA-18G critical technologies demonstrate maturity, the potential for design changes remains. While the program held its system-level critical design review in April 2005, flight tests will be needed to verify the loads and environment used for some of these designs and determine the maturity of the critical technologies. The EA-18G production decision scheduled for April 2007 will be based on limited demonstrated functionality. The initial capability demonstrated in support of the production decision will be less than that of the ICAP III on the EA-6B. Four EA-18G aircraft will be built to conduct operational tests during the system development and demonstration test phase. The Navy plans to procure an additional one-third, or 30, of the EA-18G aircraft during low-rate initial production (LRIP), at an estimated cost of $2,297.1 million for the two low-rate initial production lots in fiscal years 2007 and 2008. This low-rate initial production quantity is significantly higher than the recommended DOD acquisition target of 10 percent. The program does not plan to demonstrate through flight tests a fully functional, production-representative prototype until testing in April and May of 2008. In addition, program plans call for procuring 56 EA-18G full-rate production (FRP) aircraft to achieve the procurement objective of 90 aircraft. As a result, full funding for 56 of the 90 EA-18G aircraft and 34 of the 56 airborne electronic attack suites will be committed prior to the completion of operational testing and evaluation.
This creates a risk, acknowledged by the program office, that redesign and retrofitting may be needed, since it will not be known how effective and suitable the EA-18G will be or what changes are required until after those tests are completed. The EA-18G requirements are to meet, and in some cases exceed, those of the EA-6B ICAP III, adding an air-to-air intercept capability and the ability to communicate while jamming. However, according to program documents, the first operational test, scheduled to be completed in February 2007, 2 months before the low-rate initial production decision, will demonstrate a much more limited capability, primarily the ability to radiate a simple, single-source jamming assignment and the ability to receive, identify, and display limited simple emitters. Test results demonstrating full ICAP III-equivalent capabilities will not be available until the operational evaluation scheduled to be completed in January 2009, 3 months before the projected full-rate production decision, when the third and final software release will be available for testing. The test plan is driven by software development, and the EA-18G software will be available for testing in three releases, or builds. Software is on the critical path to program completion and will provide the functionality that is available for testing before each production decision. While the program officials responsible for managing the software appear to be tracking all major cost, schedule, and quality markers, software development is still considered a moderate risk. Problems or delays in the initial software releases could affect the start of the operational evaluation. Even before that, the current software development schedule will not allow the program to demonstrate that the EA-18G system can fully function until after the program office has committed to producing all 30 of the low-rate initial production aircraft. Under the current schedule, operational testing of the final software release needed to demonstrate the desired functionality of EA-18G aircraft will not be completed until January 2009, 3 months before the projected full-rate production decision. Should the Air Force decisions to terminate its EB-52 jammer and Joint Unmanned Combat Air System programs stand, the airborne electronic attack framework that arose after the 2002 analysis of alternatives will not materialize as planned. These decisions and the emergence of irregular threats place an added burden on the Navy’s EA-6B and EA-18G airborne electronic attack assets and may result in an even larger gap in DOD’s capability. A reduction in plans to upgrade Navy EA-6Bs with ICAP III electronic suites creates a transition shortfall in capability until the EA-18G becomes operational. Potential delays in the EA-18G development and testing effort would only aggravate this shortfall. The EA-18G development schedule is based on a premise--that the EA-6B inventory will not be sufficient beyond 2009--that is no longer valid for assessing the Navy’s future needs. The inventory of EA-6B aircraft is now projected to be sufficient to meet Navy and Marine Corps needs for another decade or longer. In addition, the compressed and aggressive schedule, a direction given to the program office, does not allow decision makers to benefit from the demonstration of knowledge at critical junctures, a proven mitigator of risk. The availability of EA-6B aircraft allows DOD to consider an alternative to its current strategy.
After determining how it will fulfill the warfighter’s needs and address capability shortfalls, DOD could outfit additional EA-6B aircraft with upgraded ICAP III electronic suites. This option is made possible by the successful integration of the ICAP III electronic suite with the EA-6B aircraft and structural improvements. However, this would necessitate not closing production of these electronic suites in 2006, as presently planned. To mitigate the effects of the shortfall in upgraded EA-6B aircraft, the risk of delay in the development of the EA-18G, and the proposed cancellations of the EB-52 jammer and the Joint Unmanned Combat Air System, we recommend that the Secretary of Defense take the following two actions: (1) determine the number of EA-6Bs equipped with ICAP III electronic suites necessary to deal with the existing and near-term capability gaps, and (2) consider procuring this necessary number of ICAP III upgrades. If DOD implements the option, we recommend that the department continue the EA-6B ICAP III production line after the fiscal year 2006 buy, and restructure its EA-18G low-rate initial production plans so that procurement of the aircraft occurs after the aircraft has demonstrated full functionality. DOD provided us with written comments on a draft of this report. The comments appear in appendix II. DOD partially concurred with our recommendation that the Secretary of Defense determine the necessary number of EA-6Bs equipped with ICAP III electronic suites to deal with the existing and near-term capability gap. DOD agreed that the Navy’s airborne electronic attack inventory needs review and has directed a study of department-wide airborne electronic attack forces to be issued on September 15, 2006. However, it is unclear from DOD’s response if the department's review will specifically identify, as we recommended, the necessary number of ICAP III-equipped EA-6Bs needed to address the existing and near-term capability gap. In light of the end of planned ICAP III production this year, DOD needs to identify this specific number, as it is a necessary prerequisite to our second recommendation. DOD also partially concurred with our recommendation that the Secretary of Defense consider procuring the determined number of ICAP III upgrades and that if DOD takes this option, the department (1) continue ICAP III production and (2) restructure the EA-18G low-rate initial production plans so that the procurement of the aircraft occurs after the aircraft has demonstrated full functionality. Regarding the first part of our recommendation, DOD agreed that it should consider procuring the required ICAP III upgrades, as determined by the ongoing airborne electronic attack review, but stated that it is premature to make a decision until the ICAP III inventory levels are determined. We agree that such a determination is a prerequisite and have so stated in our first recommendation. However, that determination needs to be completed before the ICAP III production line ends in fiscal year 2006. With regard to the second part of our recommendation, DOD stated that the current EA-18G low-rate initial production plan provides the best balance of risk and cost to expeditiously meet warfighters’ needs. We remain concerned that producing EA-18G aircraft before testing demonstrates that the design is mature unnecessarily increases the likelihood of design changes that will lead to cost growth, schedule delays, and performance problems.
In the past, Congress has raised concerns about the costly outcomes of highly concurrent development and production efforts that are not "flying before buying." Starting production before flight tests demonstrate that the full ICAP III-equivalent capability works as intended places the $2,297.1 million low-rate initial production investment at significant risk. The procurement of additional ICAP III-equipped EA-6Bs would allow the time to properly test the EA-18G before making a production decision and reduce the risk of costly retrofitting of the initially produced EA-18Gs. Therefore, we continue to believe that our recommendation should be implemented. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Air Force and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or any of your staff have any questions on matters discussed in this report, please contact me on (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were David Best, Assistant Director; Jerry Clark; Robert Ackley; Michael Aiken; Judy Lasley; Chris Miller; and Robert Swierczek. To determine if the key conclusion reached in the Department of Defense’s (DOD) May 2002 airborne electronic attack (AEA) analysis of alternatives (AoA)--that the projected inventory of EA-6Bs would be insufficient beyond 2009--is still valid, we interviewed officials in the Office of the Secretary of Defense; the Strategic Command (Offutt, Nebraska); the Commander Electronic Attack, Pacific Fleet (Whidbey Island); and officials responsible for Air Force, Navy, and Marine Corps AEA requirements. We interviewed personnel responsible for Improved Capability (ICAP) III electronic warfare testing at the Office of the Director, Operational Test and Evaluation (Washington, D.C.); Commander of Operational Test and Evaluation Navy (Norfolk, Virginia); and VX-9 personnel responsible for ICAP III testing at China Lake, California. In addition to reviewing the 2002 AEA AoA, we reviewed pertinent DOD, service, and contractor documents addressing the status of the EA-6B inventory, plans for maintaining the status of EA-6B suppression capabilities, testing conducted for the EA-6B ICAP III program, the AEA system of systems, gaps in the AEA, and potential solutions for AEA. To determine whether the acquisition management approach to the Navy’s airborne electronic attack core component, the EA-18G, is knowledge-based and can help forestall future risks, we reviewed pertinent DOD, service, and contractor documents addressing the status of the EA-18G development effort. We discussed airborne electronic attack issues and EA-18G development and production with contractor personnel at Boeing Corporation in St. Louis, Missouri, and El Segundo, California. We discussed software matters with officials at China Lake and Point Mugu, California. We met with pilots at Patuxent River Naval Air Station, China Lake, Whidbey Island Naval Air Station, Fallon Naval Air Station, and Boeing Corporation to discuss pilot workload issues given the transition to the two-seat EA-18G from the four-seat EA-6B.
As with our past work on the EA-18G development effort conducted under our annual assessment of selected major defense acquisition programs, we focused our work on determining whether the program was following a knowledge-based acquisition approach. We met with Navy EA-18G program officials currently involved with the development effort to document the maturity status of the aircraft’s critical technologies, the status of its design effort, and plans for producing the aircraft. We performed our review from May 2005 through March 2006 in accordance with generally accepted government auditing standards.

Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-06-391. Washington, D.C.: March 31, 2006.
Military Readiness: DOD Needs to Identify and Address Gaps and Potential Risks in Program Strategies and Funding Priorities for Selected Equipment. GAO-06-141. Washington, D.C.: October 25, 2005.
Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-05-301. Washington, D.C.: March 31, 2005.
Defense Acquisitions: DOD's Revised Policy Emphasizes Best Practices, But More Controls Are Needed. GAO-04-53. Washington, D.C.: November 10, 2003.
Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD's Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004.
Electronic Warfare: Comprehensive Strategy Still Needed for Suppressing Enemy Air Defenses. GAO-03-51. Washington, D.C.: November 25, 2002.
Electronic Warfare: Comprehensive Strategy Needed for Suppressing Enemy Air Defenses. GAO-01-28. Washington, D.C.: January 3, 2001.
Contingency Operations: Providing Critical Capabilities Poses Challenges. GAO/NSIAD-00-164. Washington, D.C.: July 6, 2000.
Combat Air Power: Joint Assessment of Air Superiority Can Be Improved. GAO/NSIAD-97-77. Washington, D.C.: February 26, 1997.
Combat Air Power: Funding Priority for Suppression of Enemy Air Defenses May Be Too Low. GAO/NSIAD-96-128. Washington, D.C.: April 10, 1996.
Combat Air Power: Joint Mission Assessments Needed Before Making Program and Budget Decisions. GAO/NSIAD-96-177. Washington, D.C.: September 20, 1996.

The EA-6B has conducted airborne electronic attack for all services since 1996. In 2002, the Department of Defense (DOD) completed an analysis of alternatives for the EA-6B that concluded the inventory would be insufficient to meet DOD's needs beyond 2009. Since then, the services have embarked on separate acquisition efforts to develop airborne electronic attack assets. In 2003, the Navy started development of the EA-18G aircraft to replace the EA-6B. This report was done under the Comptroller General's authority and assesses if (1) DOD's 2002 conclusion that the EA-6B inventory would be insufficient beyond 2009 remains valid for assessing the Navy's future needs, and (2) the acquisition approach used to develop the EA-18G is knowledge-based and might mitigate future risks. EA-6B aircraft will be able to meet the Navy's suppression of enemy air defense needs through at least 2017 and the needs of the Marine Corps through 2025--as long as sufficient numbers of the aircraft are outfitted with upgraded electronic suites. The conclusion that the EA-6B inventory would be insufficient past 2009 was not based on the Navy's requirement for 90 aircraft, but on an inventory requirement of 108 aircraft that would meet the needs of all services.
The decision to move to a system of systems using multiple aircraft types means the Navy will no longer be required to support all of DOD's electronic attack requirements. However, insufficient quantities of upgraded jamming systems mean that the majority of the EA-6B fleet is equipped with the older jamming system, which is limited in its ability to conduct numerous critical functions. If the Navy is required to support all services, given the recent Air Force proposal to terminate its EB-52 standoff jammer program, additional EA-6Bs may require the Improved Capability (ICAP) III upgrade. The risk of cost growth and schedule delays in the EA-18G program is increasing because the program is not following a knowledge-based approach to acquisition. None of its five critical technologies were fully mature as the system development phase began, and that is still the case today. Of particular concern is the ALQ-218 receiver, placed in the harsh wingtip environment on the EA-18G and not the more benign setting of the EA-6B's tail, for which it was developed. While the EA-18G's design appears stable, and almost all its design drawings are complete, that may change once the aircraft is flight-tested. Production of the EA-18G is also risky: One-third of the total buy will be procured as low-rate initial production aircraft based on limited demonstrated functionality.
For more than a decade, rapid increases in the use of computer technology, both at work and in the home, have changed the way Americans work and communicate. As of September 2001, 174 million people—66 percent of the U.S. population—were using computers in their homes, schools, libraries, and workplaces. In the workplace, 65 million of the 115 million employed adults age 25 and over, almost 57 percent, used a computer at work. However, in recent years, while the increase in the percentage of employees using computers has been modest (52 percent in 1998 to 57 percent in 2001), the percentage using the Internet and/or e-mail at work grew from about 18 percent in 1998 to almost 42 percent in 2001. As the use of these electronic technologies has increased in the workplace, so have employers’ concerns about their employees’ use of company-owned computing systems—e-mail, the Internet, and computer files—for activities other than company business. Likewise, privacy advocates have raised concerns about the potential for employers to infringe upon employees’ right to privacy. In response to these concerns, many employers have developed policies to notify their employees that they monitor use of these systems and to provide guidance to employees about the appropriate uses of the computing technologies. Information on the number of private sector companies that monitor their employees, their monitoring practices, and the effects of monitoring on employee productivity and morale is very limited. While some of the existing studies suffer from methodological limitations such as low response rates, taken together they seem to indicate a general trend toward employers’ increased monitoring of their employees. In addition, software developers have made it easy and inexpensive for businesses to monitor their employees by creating software that can, for example, scan e-mail messages for certain words or phrases and/or block inappropriate Internet sites. The Electronic Communications Privacy Act (ECPA) of 1986, which is intended to provide individuals with some privacy protection in their electronic communications, has several exceptions that limit its ability to provide protection in the workplace. For example, the act does not prevent access to electronic communications by system providers, which could include employers who provide the necessary electronic equipment or network to their employees. (See, e.g., U.S. v. McLaren, 957 F. Supp. 215 (M.D. Fla. 1997)). Because the ECPA provides only limited protection to private sector employees, some privacy advocates have called for a new law that would specifically address workplace computer privacy and limit the powers and means of employer monitoring. The most recent federal statute affecting privacy in the workplace is the USA PATRIOT Act, which was enacted in the wake of the September 11, 2001, terrorist attacks. This act expands the federal government’s authority to monitor electronic communications and Internet activities, including e-mail. However, no federal executive agency has general oversight responsibilities for private sector employee-monitoring programs. Many states have statutes that are similar to the ECPA, with greater protection in some cases. Additional protection may be provided through state common law, which is based on judicial precedent rather than legislative enactments. Such decisions, however, have generally given employers substantial leeway in monitoring their employees’ computer use.
While state common law may recognize the right of an individual to take legal action for an offense known generally as “invasion of privacy,” such actions historically have not provided employees with additional protections. Courts have found that employers’ monitoring of their employees’ electronic transmissions involving e-mail, the Internet, and computer file usage on company-owned equipment is not an invasion of privacy. Invasion of privacy claims against an employer generally require employees to demonstrate, among other things, that they had a “reasonable expectation of privacy” in their communications. Courts have consistently held, however, that privacy rights in such communications do not extend to employees using company-owned computer systems, even in situations where employees have password-protected accounts. All 14 companies we contacted routinely collected and stored employee e-mail messages, information on Internet sites visited, and computer file activity. Eight of these companies reported that they read or reviewed information on employees’ electronic transmissions only once the company determined that a further investigation of employee conduct was warranted. However, 6 of the 14 companies told us that they routinely performed additional analyses on the stored information to determine if employees were misusing company computer resources. For example, these companies routinely searched the e-mail message titles, addresses, or contents for proprietary information or offensive language. In general, we found that the companies we studied initiated few investigations of employee computer conduct. Most of the companies that have reviewed information on employees’ electronic transmissions and determined that misuse occurred reported that penalties ranged from counseling and warnings to termination. All 14 companies collected and retained electronic transmission data as part of their normal business operations, primarily as backup files and to manage their computer resources. Backup files can be quickly restored if a computer system failure occurs, and the company’s operations can continue with as little interruption as possible. However, according to company officials, the information on these backup files was also available as a source of data for reviews of individual employee e-mail messages, Internet use, or computer files. Company officials also said that stored data were used to manage their computer resources. For example, officials at one company told us that they collect e-mail and Internet data to track the systems’ capacity. Another company’s representatives said they use the collected information for troubleshooting and to correct network problems. The 14 companies collected different information for e-mail, Internet use, and computer files. For e-mail messages, officials from the 14 companies reported they generally collect and store all business and personal incoming and outgoing e-mail messages, including attachments, addresses, and the date and time the e-mail was sent or received. For Internet sites visited, the information collected generally included the web address and the date and time the website was used. For computer file activity, all the contents of the files on their network computer systems were backed up daily. Officials from the 14 companies reported they retained these data for short periods of time.
Nine of these companies said that they generally retained these files for 90 days or less, and one company kept its e-mail data for as little as 3 days. Eight of the companies reported that they would review the employee electronic transmission data they collected only if there was an indication of employee misuse of computer resources and the company initiated an investigation. Generally, investigations were initiated by either a complaint submitted to management by a company employee or a “request for information” by management concerning an employee’s conduct. These initiating requests were usually reviewed by a number of company officials, including representatives from Human Resources, General Counsel, or Computer Security, prior to the actual retrieval of employee information. Company officials told us that unless they received a request for data, they would not review any of their employees’ electronic transmissions. They added that access to any data collected for an investigation is restricted to a limited number of company officials. Company officials cited several reasons for establishing this reactive approach for reviewing employee electronic transmissions. One company believed it was important to establish an atmosphere of trust and presumed employees would use the system primarily for business purposes. Another company’s officials said that they did not have enough resources to actively monitor their employees’ electronic transmissions. Six of the 14 companies we contacted, in addition to collecting and storing information on employee computer use, performed routine analyses on all employee e-mail or Internet data, resulting in the review of selected electronic transmissions. These companies reviewed the electronic transmission information for several reasons. Company officials reported that they needed to protect proprietary information and prevent Internet visits to inappropriate sites. For example, three companies reviewed e-mail messages using commercial software that searched for keywords. These companies selected the words to be searched, and a computer file of e-mail messages that matched the preselected keywords would be generated. Company officials routinely reviewed this file to determine if e-mails contained inappropriate material. Other companies reported different strategies to identify employee misuse of computer resources. One company’s computer security office generated a weekly report of the 20 employees who logged on to the Internet the most times and listed the sites visited. Officials reviewed this list to determine if inappropriate sites had been visited. A second company reviewed the Internet use of a random sample of 10 to 20 employees each month. This review was intended to identify employees who had visited offensive or inappropriate sites. Employees identified through this process were generally counseled against further misuse. Finally, one company, in 2001, monitored the inappropriate websites employees visited, such as sites promoting hate, violence, and pornography, and in 2002, it purchased new software to block these offensive sites. Generally, the companies we reviewed—regardless of whether they routinely reviewed employee computer use or examined individual employee records only to pursue particular complaints—reported that the total number of investigations was very small as a proportion of the number of employees with access to e-mail, the Internet, or computer files.
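The keyword searches and weekly Internet-use reports described above are mechanically simple to express in code. The following Python sketch is illustrative only: the message format, the keyword list, and the logon records are hypothetical, and it does not reproduce any company's actual commercial monitoring software.

```python
# Minimal sketch of two review strategies companies described: flagging
# stored e-mail that matches preselected keywords, and ranking employees
# by Internet logons for a "top 20" weekly report. All data structures
# and values here are hypothetical.
from collections import Counter

KEYWORDS = {"confidential", "proprietary"}  # preselected by the company

def flag_messages(messages):
    """Return messages whose title or body contains a preselected keyword."""
    flagged = []
    for msg in messages:
        text = (msg["title"] + " " + msg["body"]).lower()
        if any(word in text for word in KEYWORDS):
            flagged.append(msg)
    return flagged

def top_internet_users(logon_events, n=20):
    """Rank employees by number of Internet logons, as in the weekly report."""
    return Counter(event["employee_id"] for event in logon_events).most_common(n)

messages = [
    {"title": "Lunch?", "body": "Noon at the cafe."},
    {"title": "Draft plan", "body": "Contains proprietary pricing data."},
]
logons = [{"employee_id": "E1"}, {"employee_id": "E2"}, {"employee_id": "E1"}]
print([m["title"] for m in flag_messages(messages)])  # ['Draft plan']
print(top_internet_users(logons, n=2))                # [('E1', 2), ('E2', 1)]
```

In the companies' practice, the output of such a scan was not an enforcement action in itself; the flagged file or ranked list was reviewed by designated officials, who then decided whether further investigation was warranted.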
The number of annual investigations ranged from 5 to 137 and represented less than 1 percent of the total domestic employees at these companies. For example, one company with more than 50,000 domestic employees reported 72 e-mail investigations and 48 Internet investigations in calendar year 2001. We found companies most often investigated the alleged misuse of employee e-mail, followed by investigations of Internet use. Not surprisingly, the company that routinely reviewed employee Internet use initiated the most investigations of employee Internet conduct—90 investigations. Investigations of the content of employees’ computer files were the smallest in number, and only one company told us that it had initiated such investigations. Only 2 of the 14 companies we interviewed were able to provide data on the types of disciplinary actions taken against employee misuse of computer resources. One company reported that of its 20,000 employees, it terminated 2 employees for inappropriate e-mail use, 2 for Internet misuse, and 1 for a computer file violation in 2001. The other company reported that over a 5-year period it had terminated 14 employees for misuse of the Internet. Most of the 14 companies reported various types of actions that could be taken against employees for inappropriate use of computer resources. Four companies told us these actions ranged from informal discussions or formal counseling between the employee and company managers to terminations. Only the most flagrant and repeated violations would result in employee termination. The 14 companies we reviewed all had written policies that included most of the elements recommended in the literature and by experts as critical to a company computer-use policy. There is a general consensus that policies should at least affirm the employer’s right to review employee use of company computer assets, explain how these computer assets should and should not be used, and forewarn employees of penalties for misuse. We also found that all companies disseminated information about these policies through their company handbooks, and 8 discussed their computer-use policies with new employees at the time of hire. In addition, some companies provided annual training to employees on company policies, and others sent employees periodic reminders on appropriate computer conduct. The 14 companies we reviewed had written policies that explained employee responsibilities and company rights regarding the use of company-owned systems. Our discussions with company officials and review of written policies showed that all 14 contain most, if not all, of the policy elements recommended by experts. From our review of the literature and discussions with legal experts, privacy advocates, and business consultants, we identified common elements that should be included in company computer-use policies (see table 1). These experts generally believed that the most important part of a company’s computer-use policy is to inform employees that the tools and information created and accessed from a company’s computer system are the property of the company and that employees should have no “expectation of privacy” on their employers’ systems. Courts have consistently upheld companies’ monitoring practices where the company has a stated policy that employees have no expectation of privacy on company computer systems.
The experts also agreed that computer-use policies should achieve other company goals, such as stopping the release of sensitive information, prohibiting copyright infringement, and making due effort to ensure that employees do not use company computers to create a hostile work environment for others. Finally, according to experts, employees should clearly understand the consequences of violating company computer policies. For example, one company’s computer-use policy states that “violators are subject to disciplinary action up to termination of employment and legal action.” While the experts we interviewed recommended that employers include the above elements so that employees can be informed and acknowledge that they have no expectation of privacy on company-owned systems, some experts recommended additional steps that would help to protect employee privacy. For example, one expert recommended that employee groups participate in the formulation and review of monitoring policies, and another expert recommended that employees have access to any information collected on their electronic transmissions. Furthermore, other experts recommended an alternate policy framework that would preclude employers’ review of employee electronic transmissions except when they have a reasonable independent indication of inappropriate use. From our review of company computer-use policies, including interviews with private sector officials and reviews of written policies, we determined that all 14 companies generally addressed most of the seven key elements of a computer-use policy (see table 2). While we determined that these 14 companies’ computer-use policies generally addressed the key elements, we found that there was variation in the specificity of policy statements. For example, one company’s policy statement regarding “Monitoring Use of Proprietary Assets” stated, “ reserves the right to access and monitor the contents of any system resource utilized at its facilities.” Another company’s policy stated, “the information and communications processed through your account are subject to review, monitoring, and recording at any time without notice or permission.” An official from another company, which collected and stored employee computer use information but did not routinely review electronic transmissions, told us his company informed employees of its capacity to monitor its property with the more general statement that “data is collected and the company reserves the right to review this data.” Only one company reported that its policy did not include language specifically informing employees that their computer use was subject to review by other people in the company. Representatives from this company told us that their policy does, however, include a statement that employee messages could be accessed and that the company could not ensure their confidentiality. Under “Establishing No Expectation of Privacy,” some companies directly inform employees that they should under no circumstances expect privacy.
For example, one policy stated, “All users should understand that there is no right or reasonable expectation of privacy in any e-mail messages on the company’s system.” Somewhat less explicitly, another policy stated, “Our personal privacy is not protected on these systems, and we shouldn’t expect it to be.” Some companies generally implied the principle of “no expectation of privacy” with statements such as, “ reserves the right to audit, access, and inspect electronic communications and data stored or transmitted on its Computer Resources.”

Finally, the employers we reviewed also addressed improper uses of computer resources. All company representatives had policies that notified employees about improper uses, and the eight written policies we reviewed contained specific prohibitions on the use of company resources to create or transmit offensive material. Moreover, seven of these policies included some form of the word “harass” under their discussion of prohibited or inappropriate uses of corporate systems, and some also included a form of the word “discriminate.” No two policies addressed this issue in exactly the same terms, but representative statements prohibited behaviors such as “viewing or communicating materials of an obscene, hateful, discriminatory or harassing nature”; “any messages or data that…defames, abuses, harasses or violates the legal rights of others”; and “Accessing, downloading, or posting material that is inappropriate, fraudulent, harassing, embarrassing, profane, obscene, intimidating, defamatory, unethical, abusive, indecent or otherwise unlawful.” Experts recommend that policies include such specific prohibitions in order to limit a company’s liability for workplace lawsuits, and they stress the importance of ensuring that employees understand the company’s definitions of inappropriate use.

Both the literature we reviewed and the experts we interviewed agreed that establishing company policies on employee computer use is incomplete without strategies to disseminate the information. Experts pointed out that informing employees about these policies not only established the limits of employee expectations about privacy but also allowed them the opportunity to conform their behavior to those limits. Among the 14 companies we contacted, we found multiple and active ways to inform and remind employees about the policies concerning the use of computer systems. Officials at 8 of the companies we reviewed said that at the time of hire, new employees receive training on company policies for using the computer systems. Officials from 5 companies told us they required all employees to participate in an annual review of their computer-use policies, either through Intranet-based training or over e-mail. Other training techniques company officials described to us included business conduct reviews every 2 years, weekly e-mail reminders of their policies, and a series of videotapes that explain policies to employees. In addition to training programs, 10 companies had daily messages referring to the corporate policies that employees must acknowledge before they are allowed to log in to the systems.

None of the companies’ representatives we interviewed said that they had changed any of their computer-use policies or practices as a result of the terrorist attacks on September 11, 2001.
Officials from four companies reported that after September 11th, they had been asked by law enforcement agencies to provide information about their employees’ and customers’ use of their e-mail systems and other sources and that they had complied with these requests. But none of the employers we interviewed had increased the amount or type of information they gathered on employees’ use of e-mail, the Internet, or computer files. However, representatives from 10 companies did report increased concern for the security of their computer systems from outside trespassers or viruses entering their systems through e-mail or from imported computer files. Seven company representatives mentioned the Code Red worm—which appeared around July 2001—and the Nimda virus—which entered computer networks on September 18, 2001—as particular examples of the most serious kind of threat they faced and said these events had motivated them to strengthen the virus protection of their systems. Ten of the companies we reviewed told us that they had procedures to screen incoming e-mail messages for viruses, for example, by deleting file attachments with an “exe” extension from all incoming e-mail messages. In early 2002, one company had begun, and another was preparing, to use software that searches the subject lines of incoming e-mail and deletes messages with sex-themed language, simply because the volume of unsolicited e-mail had begun to overwhelm their systems. Such actions reflect the widespread belief among the company officials we interviewed that the worst nuisance and most likely threat to company computer systems comes from outside trespassers with the capacity to paralyze a company’s Internet infrastructure or disrupt business, rather than from the company’s own employees.

We are sending copies of this report to the Secretary of Labor and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me on (202) 512-7215 if you or your staff have any questions about this report. Key contributors to this report are listed in appendix I. In addition to the individuals named above, Nancy R. Purvine, Eric A. Wenner, Shana Wallace, and Julian P. Klazkin made key contributions to this report.

Over the past decade, there has been a technological revolution in the workplace as businesses have increasingly turned to computer technology as the primary tool to communicate, conduct research, and store information. Also during this time, concern has grown among private sector employers that their computer resources may be abused by employees—either by accessing offensive material or jeopardizing the security of proprietary information—and may provide an easy entry point into a company’s electronic systems for computer trespassers. As a result, companies have developed “computer conduct” policies and implemented strategies to monitor their employees’ use of e-mail, the Internet, and computer files. Federal and state laws and judicial decisions have generally given private sector companies wide discretion in their monitoring and review of employee computer transmissions. However, some legal experts believe that these laws should be more protective of employee privacy by limiting what aspects of employee computer use employers may monitor and how they may do so.
Following the September 11, 2001, terrorist attacks on the United States, policymakers re-examined many privacy issues as they debated the USA PATRIOT Act, which expands the federal government’s authority to monitor electronic communications and Internet activities. GAO reviewed 14 private sector companies’ monitoring policies and found that all companies reviewed store their employees’ electronic transactions: e-mail messages, information on Internet sites visited, and computer file activity. They collect this information to create duplicate or back-up files in case of system disruption; to manage computer resources, such as system capacity to handle routine e-mail and Internet traffic; and to hold employees accountable for complying with company policies. All of the companies had policies that contained most of the elements experts agreed should be included in company computer-use policies. None of the companies GAO studied had changed any of their employee computer-use policies or monitoring practices after the September 11 attacks. Most companies did, however, report a growing concern about electronic intrusion into their computer systems from outside trespassers or viruses and had increased their vigilance by strengthening their surveillance of incoming electronic transmissions.
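The attachment and subject-line screening practices the companies described can be sketched in a few lines of code. The following is a minimal, hypothetical illustration, assuming placeholder flagged terms and illustrative function names; it is not the actual software used by any of the companies GAO reviewed, and a production gateway filter would need far more robust handling of encodings, nested message parts, and deliberate evasion.

```python
from email import message_from_string
from email.message import Message
from typing import Optional

# Placeholder list of flagged subject-line terms; the companies' actual
# term lists were not disclosed, so these are assumptions for illustration.
FLAGGED_SUBJECT_TERMS = ("xxx", "adult")


def subject_is_flagged(msg: Message) -> bool:
    """Return True if the subject line contains any flagged term."""
    subject = (msg.get("Subject") or "").lower()
    return any(term in subject for term in FLAGGED_SUBJECT_TERMS)


def strip_exe_attachments(msg: Message) -> int:
    """Delete attachments whose filenames end in .exe; return the count removed."""
    if not msg.is_multipart():
        return 0
    kept = []
    removed = 0
    for part in msg.get_payload():
        filename = (part.get_filename() or "").lower()
        if filename.endswith(".exe"):
            removed += 1
        else:
            kept.append(part)
    msg.set_payload(kept)
    return removed


def screen_incoming(raw_message: str) -> Optional[Message]:
    """Apply both screens: drop flagged messages entirely, strip .exe attachments."""
    msg = message_from_string(raw_message)
    if subject_is_flagged(msg):
        return None  # whole message deleted, as some companies reported doing
    strip_exe_attachments(msg)
    return msg
```

In practice, such filtering typically runs at the mail gateway before delivery to employee mailboxes; the sketch simply shows the two rules the companies described, one deleting an entire message and the other removing only the risky attachment.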
The population of individuals with limited English proficiency in the United States has grown dramatically in recent years. The 2000 Census shows that the number of people reporting that they do not speak English well or very well grew by 65 percent, from 6.7 million in 1990 to almost 11 million in 2000. The data also show that while growth in the population of individuals with limited English proficiency continues in states along the border, such as California and Texas, it is most rapid in other states. (See fig. 1.) As figure 1 shows, for example, the number of individuals who did not speak English well or very well increased by more than 300 percent between 1990 and 2000 in North Carolina and Georgia, and by more than 200 percent in states such as Nebraska, Arkansas, and South Carolina. In 2000, 14 percent of children age 5 and younger in households below the federal poverty level lived in linguistically isolated households.

The two largest sources of federal support for child care and early education are the Child Care and Development Fund (CCDF) and Head Start. The Child Care Bureau (CCB) administers CCDF, and the Office of Head Start administers Head Start. Both entities are housed within the Administration for Children and Families (ACF). CCB provides block grants to states through CCDF to subsidize child care expenses of eligible families. In contrast, the Office of Head Start awards grants for the operation of Head Start programs directly to local public or private organizations, school systems, or Indian tribes. The flow of funds under CCDF and Head Start is shown in figure 2.

CCDF is used to subsidize the child care expenses of low-income families with children under age 13 and to improve the overall quality and supply of child care. The goals of the program are to (1) allow each state maximum flexibility in developing child care programs and policies; (2) promote parental choice to empower working parents to make their own decisions on the child care that best suits their family’s needs; (3) encourage states to provide consumer education information to help parents make informed choices about child care; (4) assist states to provide child care to parents trying to achieve independence from public assistance; and (5) assist states in implementing the health, safety, licensing, and registration standards established in state regulations. The parent whose child receives child care assistance may either enroll the child directly with a provider who has a grant or contract from the state for the provision of child care services or receive a certificate to enroll the child with a provider of the parent’s choosing. Parents may choose from any child care legally offered in the state, which could include care provided in child care centers, family child care homes, or by relatives or nonrelatives in the child’s or provider’s home.

CCDF is a combination of discretionary and mandatory funds. In federal fiscal year 2006, CCDF provided about $4.9 billion in federal funds to states and territories. In fiscal year 2004 (the latest year for which data were available), the program served approximately 1.74 million children with federal funding of about $4.7 billion. In addition, federal CCDF funds are supplemented with state contributions, and Department of Health and Human Services (HHS) officials reported that total federal and state expenditures for CCDF amounted to almost $9.4 billion in fiscal year 2004. Congress gave states considerable flexibility in administering and implementing their CCDF programs. States are required to submit biennial plans to CCB describing their CCDF activities.
States determine income eligibility thresholds up to a federal maximum of 85 percent of the state median income. In their CCDF plans for federal fiscal years 2004 and 2005, almost all states reported setting lower income eligibility limits, with only 5 states at the federal maximum of 85 percent. Because CCDF is a nonentitlement program—one with limited funding and not necessarily intended to cover all eligible persons—states are not required to provide child care subsidies to all families whose incomes fall below the state-determined eligibility threshold, and states may establish priorities for serving eligible families, such as prioritizing families receiving Temporary Assistance for Needy Families (TANF), in order to support their work efforts. States can augment their CCDF funds with other funding sources, such as TANF, to increase funding available for subsidies. States spent $1.4 billion in federal TANF funds directly on child care in fiscal year 2004. States may also transfer up to 30 percent of their TANF block grants into their CCDF programs. In fiscal year 2004, the latest year for which data were available, $1.9 billion in TANF funds was transferred to CCDF. Funds transferred from TANF to CCDF must be spent in accordance with CCDF rules. This is significant partly because the effect of the child’s or the parent’s citizenship or immigration status on the child’s eligibility differs depending on the program. For example, parents’ immigration status may affect their eligibility for child care assistance under TANF, whereas only the immigration status of the child matters for determination of eligibility for subsidies from CCDF. Although legislation authorizing CCDF did not specify the effect of citizenship or immigration status on program eligibility, HHS’s guidance to state agencies indicated that states should consider only the citizenship and immigration status of the child when determining the child’s eligibility for federal child care assistance. Therefore, children who are citizens or legal residents are eligible for CCDF subsidies regardless of their parents’ citizenship or immigration status. States are also required to dedicate at least 4 percent of their CCDF allotments to activities to provide comprehensive consumer education to parents and to improve the quality and availability of child care. States may use some of this quality set-aside to fund child care resource and referral services that are available in every state and most communities in the United States. These agencies provide information to parents on finding and paying for quality child care, offer training to child care providers, and frequently engage in efforts to analyze and report on child care supply and demand in their communities. Often, resource and referral agencies also manage the CCDF subsidy program or are part of local organizations that administer the subsidy in the community. Head Start offers child development programs to low-income children through age 5 and their families. The overall goal of Head Start is to promote the school readiness and healthy development of young children in low-income families. In addition to providing classroom programs for the children, Head Start grantees provide or arrange for a variety of services, including medical, dental, mental health, nutritional, and social services. 
Children in families with incomes below the federal poverty level ($20,000 for a family of four in 2006) are eligible for available Head Start programs regardless of their or their parents’ immigration status. Head Start grantees must adhere to certain performance standards, including standards related to providing language access in Head Start programs. The Office of Head Start reviews the performance of Head Start grantees on these standards using a structured guide known as the Program Review Instrument for Systems Monitoring (PRISM). In fiscal year 2005, Head Start was funded at $6.8 billion and served 906,993 children. HHS has responsibility for monitoring grantees’ compliance with program requirements. Through its Office for Civil Rights (OCR), HHS also oversees compliance with Title VI of the Civil Rights Act of 1964, which states that no person shall “on the ground of race, color, or national origin, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any program or activity receiving federal financial assistance.” HHS has issued regulations to recipients of HHS funds on implementing the provisions of Title VI, including requiring an assurance in every application for federal financial assistance that the program will be operated in compliance with all requirements imposed under HHS’s Title VI regulations. Moreover, Executive Order 13166, issued in 2000, required federal agencies to prepare a plan and issue guidance to their funding recipients on providing meaningful access to individuals who, as a result of national origin, are limited in their English proficiency. In August 2003, HHS published revised guidance pursuant to Executive Order 13166. The guidance states that Title VI and its implementing regulations require that grantees take reasonable steps to ensure meaningful access for individuals with limited English proficiency, and the guidance is intended to assist grantees in fulfilling their responsibilities to ensure meaningful access to HHS programs and activities by these individuals. Under the guidance, grant recipients are to determine the extent of their obligation to provide language assistance services by considering four factors: (1) the number or proportion of individuals with limited English proficiency eligible to be served or likely to be encountered by the program or grantee; (2) the frequency with which these individuals come in contact with the program; (3) the nature and importance of the program, activity, or service provided by the program to people’s lives; and (4) the resources available to recipients of federal funds and costs of language assistance. The guidance states that grantees have two main ways to provide language assistance services: oral interpretation, either in person or via telephone, and written translation. Finally, the guidance lays out elements of an effective plan of language assistance for persons with limited English proficiency. Monitoring compliance with Title VI and providing technical assistance are functions of HHS’s OCR. OCR enforces Title VI as it applies to agencies’ responsibilities to ensure access for individuals with limited English proficiency. The mechanisms available to OCR for ensuring that agencies comply with their obligations to provide access include complaint investigations, compliance reviews, efforts to secure voluntary compliance, and technical assistance. 
The most recent national survey data showed that in 1998 children of parents with limited English proficiency, 88 percent of whom were Hispanic, were less likely than other children to receive financial assistance from a social service or welfare agency for child care or to participate in Head Start in the year before kindergarten, after controlling for selected individual and family characteristics. However, these data could not be used to assess their likelihood of enrollment in CCDF programs because the survey questions did not ask for the specific agency providing financial assistance. Further, CCB did not have information on the total enrollment in CCDF programs of children of parents with limited English proficiency because it did not require states to collect and report any language data from parents of children receiving federal subsidies, such as their primary language or English proficiency. The Office of Head Start collected some data on the language spoken by Head Start participants, which showed that about 13 percent of parents of the approximately 900,000 children enrolled in Head Start in 2003 reported speaking English “not well” or “not at all.”

National survey data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K) showed that in 1998, kindergarten children of parents with limited English proficiency who were in nonparental child care in the previous year were less likely than other children in child care to receive financial assistance from a social service or welfare agency for that care, after controlling for selected individual and family characteristics. However, parents’ limited English proficiency had a different effect for Hispanics than for Asians in the dataset. Specifically, as shown in figure 3, Hispanic children of parents with limited English proficiency (who represented 88 percent of all children in the dataset whose parents had limited English proficiency) were less likely than children of Hispanic parents proficient in English to receive financial assistance for their care. Among Asians, who constituted about 8 percent of all children of parents with limited English proficiency, we did not find a statistically significant difference in the receipt of financial assistance for child care between children of parents with limited English proficiency and other children. These results, however, cannot be used to draw conclusions about enrollment in CCDF programs by children of parents with limited English proficiency because the survey questions referred to assistance from a social service or welfare agency generally and did not ask specifically whether assistance came from CCDF. Also, while ECLS-K data are representative of the experiences of children in the year prior to entering kindergarten, they cannot be extrapolated to children of all ages. (See app. II for discussion of the methodology we used to analyze ECLS-K data and the results of our analyses.)

Our analysis of ECLS-K data also indicated that after controlling for selected individual and family characteristics, children of parents with limited English proficiency were less likely to participate in Head Start in the year before kindergarten. Again, this result did not hold consistently across racial and ethnic groups. Specifically, as shown in figure 3, children of Hispanic parents with limited English proficiency were less likely than children of Hispanic parents proficient in English to participate in Head Start in the year before kindergarten.
In contrast, children of Asian parents with limited English proficiency were more likely than children of Asian parents proficient in English to participate in Head Start. While 1998 ECLS-K data showed that children of parents with limited English proficiency were less likely than other children to receive financial assistance for child care and to participate in Head Start in the year before kindergarten, it cannot be concluded from these data alone that the differences are due to language barriers in access to programs. Other factors, such as the availability of child care and early education programs in the areas in which members of different language groups reside or access to support networks that provide information about available programs, may also explain this result. In addition, since the time of the survey, HHS has taken steps to increase the participation of minorities and children of parents with limited English proficiency, such as translating CCDF program brochures and undertaking initiatives to raise awareness of the Head Start program in the Spanish-speaking community. Furthermore, HHS officials reported substantial increases in federal and state child care funding since ECLS-K data had been collected, suggesting that program access for children of parents with limited English proficiency may have improved. However, neither CCB nor the Office of Head Start has more recent information on how likely children of parents with limited English proficiency are to access financial assistance for child care and Head Start relative to children whose parents are proficient in English. ECLS-K was the most recent national dataset that allowed us to examine the receipt of financial assistance for child care and the participation in Head Start by children of parents with limited English proficiency in relation to the participation of similar children whose parents are proficient in English.

While CCB requires that states submit a variety of demographic information in monthly or quarterly reports, such as information on the race and ethnicity of CCDF subsidy recipients, it collects no information on the language spoken by or the English proficiency of parents whose children receive CCDF subsidies. CCB officials told us that they had no plans to collect language data for those receiving CCDF subsidies because they generally collect only information specifically listed in the legislation authorizing CCDF. A CCB official with responsibility for the demographic data collected from states and officials from 1 state we visited told us that requiring states to provide language data would create difficulties for states, such as having to develop ways to identify individuals with limited English proficiency. Despite the potential difficulties, various state and local officials in states that do not collect this information, including the official who cited potential difficulties collecting the data, told us that having such data would help them evaluate program performance.

While data on the receipt of CCDF subsidies were not available nationally, 13 states collected some language data from parents whose children receive CCDF subsidies. The specific type of data collected and the manner in which these data were collected varied among these 13 states, preventing comparisons among them on the extent to which state CCDF programs were serving children of parents with limited English proficiency.
Officials in 10 of the 13 states that collected language data told us that their states used the data either to provide translated forms or interpreters to clients during the application process or for planning or program evaluation purposes, such as identifying areas with significant increases in the number of individuals with limited English proficiency and determining the need for bilingual staff. State data, however, had limitations that decreased their usefulness in assessing participation in CCDF programs by children of parents with limited English proficiency. For example, 5 states made the collection of language data by caseworkers optional, and officials in another 5 states told us that although caseworkers were required to collect the language data, compliance with the data requirements could not always be guaranteed. Officials in 8 of the 13 states that collected language data told us that they could benefit from having more information on the collection or use of language data or from learning how other states collect or use them.

The Office of Head Start collected some language data from the approximately 900,000 children enrolled in Head Start and their parents through two sources. First, the Office of Head Start interviewed parents through its Family and Child Experiences Survey (FACES), a series of longitudinal surveys of nationally representative samples of children in Head Start. Based on the parent interviews administered in 2003, FACES data showed that about 20 percent of parents of 3- and 4-year-old children in Head Start reported that a language other than English was most frequently spoken at home, and about 13 percent of parents reported that they spoke English “not well” or “not at all.” Second, the Office of Head Start collected demographic information on all 4- and 5-year-old children in Head Start through its National Reporting System (HSNRS), including information on the child’s primary language. These data showed that about one-quarter of children enrolled in Head Start in spring 2005 had a primary language other than English.

Results from our focus groups, which were composed of mothers with limited English proficiency whose children were eligible for federal child care subsidies, revealed that some participants were unaware of the various federal child care and early education programs that may be available to them. Parents with limited English proficiency also faced challenges in the process of applying for programs and financial assistance, such as a lack of interpreters and translated materials. They also encountered difficulties communicating with English-speaking child care providers. Some of the challenges to program access that these parents faced were the same challenges that many low-income families face, including difficulty finding care at nontraditional hours, lack of transportation, and the limited availability of subsidized child care slots.

Many parents with limited English proficiency were unaware of child care assistance available to them. All six of the focus groups with Spanish-speaking and Vietnamese-speaking mothers who were eligible but not receiving subsidies revealed that the majority were unaware of the assistance available. In addition, the mothers that we interviewed in Arkansas and focus group participants in North Carolina also told of misunderstandings and myths that some parents had regarding the consequences of participating in government-funded programs.
For example, they had heard rumors that if they applied for child care assistance, their child might one day be drafted into the armed forces to repay the assistance they received.

Shortages of bilingual staff also presented challenges to parents with limited English proficiency applying for subsidies for their children. State and local officials and providers that we interviewed identified the availability of bilingual staff as a key factor in the ability of parents with limited English proficiency to apply for the subsidies. For example, subsidy administration officials in one rural county told us that they sometimes had to ask clients to come back because no staff were available to assist them in their language. In three of the four focus groups with Spanish-speaking mothers with subsidies, those who generally found the subsidy application process to be easy cited the availability of bilingual caseworkers as a factor in allowing them to apply for assistance successfully.

In addition to shortages of bilingual staff, the lack of available translated materials also presented challenges to parents with limited English proficiency. Some programs did not have application forms translated into other languages, and local officials and parents expressed concerns about the quality of existing translated materials, saying that they were often translated by volunteers and that no quality checks were done. For example, one community group representative told us that volunteers had translated the Spanish forms that the local subsidy administration office used and that no quality controls had been applied, resulting in materials of such poor quality that she advised parents not to request the Spanish version of the application.

These challenges may be more acute for individuals with limited English proficiency who speak languages other than Spanish. Local officials in three states reported that there were limited services available in languages other than Spanish. For example, local officials in Washington said that services to smaller, more diverse populations, such as African, Asian, and East Indian language speakers, were more limited. In North Carolina and California, local officials also reported that services for populations such as the Hmong were more limited than for English or Spanish speakers.

Finally, although immigration status has no impact on Head Start eligibility and only the immigration status of the child is relevant to the determination of eligibility for CCDF subsidies, it nonetheless created indirect challenges for some children of parents with limited English proficiency. Local officials and community advocates told us that citizen children of parents with limited English proficiency might not participate in federal child care and early education programs because of fear within the family of exposing undocumented immigrant members of the household. Several officials told us that some of these families were reluctant to provide personal information and were inhibited from applying because of fear about how that information might be used. In one case, we discovered a state that improperly required a declaration of satisfactory immigration status for every member of the household in order to apply for federally funded child care subsidies, thereby potentially excluding some children who are U.S. citizens and otherwise eligible for subsidies.
Officials in two states also told us that many parents with limited English proficiency were paid in cash, making it difficult to verify their income for eligibility purposes.

Parents reported difficulties communicating with their children’s providers, and officials reported shortages of providers with the language ability to serve families with limited English proficiency. For example, officials at one local resource and referral agency that we visited in the county with the most Spanish speakers in the state told us that providers in the county did not have the capacity to meet the needs of families with limited English proficiency. Spanish-speaking mothers that we interviewed during a site visit to another state complained that some programs advertise themselves as bilingual when in reality they are not. Parents in focus groups also expressed concern about their ability to communicate with their child care providers. Local officials in one urban area that we visited said that among the primary challenges faced by families with limited English proficiency was the effect of the language barrier on the parents’ ability to communicate with their child care providers. They stated that this also made it difficult to ensure the same level of parent-provider interaction for families with limited English proficiency as for other families. For example, one provider with no bilingual staff said that she had a child with a disability in her center whose parents were limited in their English proficiency, making it difficult for staff to communicate with the parents about the child’s needs. These communication difficulties had consequences in the classroom as well. For example, one Head Start provider reported instances of therapists and educators who were not trained to work with Hispanic families inaccurately assessing the needs of children with language or cultural differences.

Low-income parents with limited English proficiency faced some of the same challenges when attempting to access child care and early childhood education programs as other low-income families. Across all states visited, state and local officials as well as providers said that many low-income families, especially families with limited English proficiency, work nontraditional hours and have difficulty finding care that meets their needs. For example, a resource and referral agency official in one rural community said that the first shift at a local employer begins at 5:30 a.m., while most providers do not offer care before 6:00 a.m., and employees working second and third shifts face even more difficulty finding child care. Lack of transportation, especially in rural communities, also restricts the child care options available to low-income families. Officials said that it can be especially difficult for families with limited English proficiency to navigate public transportation or call transit agencies for assistance. In a previous report, we found that lack of English skills reduced individuals’ ability to access public transportation systems. Parents in some communities also faced shortages of child care and child care subsidies, especially for infants and toddlers. Officials with resource and referral agencies and local subsidy administration offices in 6 of the 11 counties that we contacted said that there were shortages of infant care in their communities. In addition, because funding for CCDF subsidies was limited, not all states provided subsidies to all families who applied and met eligibility criteria.
Our prior work showed that 20 states did not serve all families who met state-determined eligibility criteria, and three of the five states that we visited (Arkansas, California, and North Carolina) had waiting lists for CCDF subsidies. In five of the eight focus groups with Spanish-speaking mothers (including both those receiving and not receiving subsidies), participants identified waiting lists as one of the difficulties they faced when seeking assistance for child care. In the two other states that we visited (Illinois and Washington), state officials said that although they did not maintain waiting lists, they spent all of the funds available to them for CCDF subsidies. To manage demand for the limited financial assistance available for child care, states took steps such as giving priority to certain groups. For example, in the three states we visited that maintained waiting lists, two (Arkansas and North Carolina) set priorities for eligible families, such as preferences for families on or coming off of TANF. In the third, California, families on or transitioning off of TANF were provided child care assistance through a guaranteed funding stream, while funding for other low-income families was capped. Officials in California told us that this system made it extremely difficult for low-income families that were not in the TANF system to receive subsidized child care. While prioritization of TANF families would affect all low-income families, it may have additional implications for some children of parents with limited English proficiency. Census 2000 data show that 82 percent of individuals with limited English proficiency are foreign-born, and since immigration status is a factor in TANF eligibility, children of immigrants who do not qualify for TANF would be less likely to receive CCDF subsidies in those states that give priority to TANF families. In 2005, we found that 17 of 20 states not covering all applicants who otherwise met the eligibility criteria gave TANF families priority for CCDF funds, consistent with CCDF’s goal of providing child care to parents trying to become independent of public assistance. The majority of state and local agencies and providers that we visited took some steps to assist parents with limited English proficiency in accessing child care and early education programs for their children. Most agencies provided some oral and written language assistance, although the scope of the assistance varied. Most agencies also implemented initiatives to increase the supply of providers able to communicate effectively with parents. Officials told us that they faced several challenges in providing services to parents with limited English proficiency. Some state and local officials indicated that additional information on cost-effective strategies used by others to serve this population would facilitate their efforts to provide access. The majority of the agencies that we visited had taken some steps to provide oral and written language assistance, such as interpreters and translated materials, to parents with limited English proficiency. In all 11 counties that we contacted, the local offices administering CCDF subsidies and providing resource and referral services offered some oral language assistance to clients with limited English proficiency although the scope of the assistance varied. In 5 of these counties, agencies had staff that could speak several languages, a fact that officials said reflected the community they served. 
In the other 6 counties, agency staff had bilingual capacity for Spanish only, but officials said the vast majority of the individuals with limited English proficiency they served were Spanish-speaking. Although the subsidy administration office in one of these 6 counties had bilingual Spanish-speaking staff, these staff were not specifically assigned to work with individuals applying for CCDF subsidies but were clerical workers with other responsibilities. In most counties visited, child care and Head Start centers had bilingual staff to help parents with limited English proficiency enroll their children in the programs. For example, an official at one child care center we visited, where the majority of families spoke Spanish, said that all staff responsible for enrolling families in the program spoke Spanish.

Several agencies that we visited also used telephone interpretation services to provide oral assistance to clients with limited English proficiency. For example, the subsidy administration offices that we visited in Washington primarily used a state-contracted telephone language line that connected agency staff with bilingual telephone operators who could offer interpreting assistance in a language spoken by the client. In an effort to help local agencies serve clients with limited English proficiency in a cost-effective manner, North Carolina was in the process of entering into a contract for a language line that would allow local social service agencies, including those administering CCDF subsidies, to provide oral language assistance to clients if bilingual staff were not available on-site. A state official told us that once the contract is awarded, the state will make the service available to all local social service agencies at a reduced cost. Several agencies also coordinated with one another to share resources for offering oral language assistance. For example, to help interpret for their Russian-speaking clients, a resource and referral agency in California with language capacity in Cantonese and Mandarin coordinated with staff at another nearby resource and referral agency that had language capacity in Russian. Subsidy administration officials in one rural county that we visited told us that the local hospital had a contract for the language line and that they coordinated with the hospital to make use of that service. However, we did not find efforts to coordinate language assistance strategies among agencies in some locations visited, and agency officials in a few locations said that they could not always provide oral language assistance to clients with limited English proficiency on their own.

The majority of agencies that we visited provided written language assistance, such as translated subsidy application forms. Seven of the 11 subsidy administration offices contacted had subsidy applications translated into Spanish. Local agencies in Washington, California, and Illinois had applications that had been translated by the state. Washington required its application for the child care subsidy to be translated into eight languages, while California and Illinois made applications available in Spanish and gave local agencies the option of translating materials into other languages. Arkansas and North Carolina had no translated applications at the time of our visits, although officials in North Carolina said that the state was in the process of translating the subsidy application into Spanish.
All of the resource and referral agencies that we visited translated materials into Spanish, such as brochures containing information on how to receive child care assistance and what to look for when choosing a provider. A few resource and referral agencies also made efforts to translate written information into other languages. For example, as shown in figure 4, one agency translated a brochure on child care quality into Chinese. However, some state and local officials told us that their offices lacked the resources to translate materials into other languages. The majority of local agency officials and providers that we interviewed told us that they relied on agency staff and volunteers to translate materials. For example, officials from a Head Start program told us that their staff had translated materials about the program into Spanish, Hmong, and Laotian. Officials at another Head Start program told us that they relied on bilingual staff, parents of children enrolled in the program, and Spanish-speaking volunteers from the community health clinic to translate the materials. Some agency officials told us that they also used outside contractors or other resources, such as commercially available translation software, to translate materials. Community group representatives expressed concerns about the quality of translations done by the local agencies, particularly in instances when volunteers or translation software had been used.

Most local agencies and providers that we interviewed said that they disseminated translated information to raise awareness of their programs and services among parents with limited English proficiency. Agencies and providers employed various mechanisms to disseminate information, including print and radio media and direct distribution of informational materials in the communities where many families with limited English proficiency reside. For example, some resource and referral agencies and providers said that they advertised their programs and services on Spanish-language television and radio stations, and a few agencies had placed advertisements in the Yellow Pages. Most of them also reported distributing information in various locations in the community, such as churches, neighborhood markets, and laundromats. Despite these agencies’ various outreach efforts, mothers in focus groups, many of whom were unaware of the available assistance, said that there was a need for greater information dissemination in their communities. Spanish- and Vietnamese-speaking mothers in all 12 focus groups indicated that disseminating information in their language would help them learn about child care assistance and child care and early education programs for their children. At the same time, focus groups with Spanish-speaking mothers in California who were already receiving the subsidies revealed their ambivalence about increased advertising of certain child care programs because some of these programs already had waiting lists. Some state and local officials also acknowledged that they did little or no advertising because their programs were already operating at full capacity or had substantial waiting lists.

Agencies in the majority of locations that we visited had initiatives to increase the supply of providers who spoke other languages or to offer training in other languages to existing providers. Some agencies had developed initiatives focused on helping individuals who speak other languages enter the child care field.
For example, one resource and referral agency that we visited offered the classes required for obtaining a child care license in Spanish, and another offered them in Cantonese. A resource and referral agency that we visited in an urban county developed a program to help Somali- and Russian-speaking women in the community obtain the training necessary to become licensed family child care home providers. In four of the five states that we visited, officials told us that selected community colleges participated in efforts to increase provider capacity to serve children of parents with limited English proficiency. For example, a community college in Illinois offered early childhood education classes in Spanish, while a community college in California coordinated with a local resource and referral agency to offer these classes in Cantonese. However, some officials said that such efforts were insufficient, and in one state visited, an official from a university early childhood education program said that she was not aware of any efforts in the state to offer classes in other languages.

Many agencies that we visited also provided training to existing child care providers who had limited English proficiency. For example, local referral agencies in Illinois included bilingual individuals in the technical assistance teams available to assist providers in improving the quality of care. Three of the five states that we visited used CCDF quality funds for various provider initiatives related to language, such as offering training to providers on working with families that had limited English proficiency or translating materials into other languages. For example, Arkansas used quality funds for training and technical assistance to help providers understand cultural issues that families with limited English proficiency face. California used these funds to offer training to providers throughout the state on working with children who speak other languages. Officials in North Carolina said that while they did not have any projects funded with CCDF quality funds that directly related to serving children of parents with limited English proficiency, they had used some of the funds to translate materials on child care health and safety practices into Spanish. Two of the states visited—Washington and Illinois—did not use CCDF funds directly on initiatives related to serving children of parents with limited English proficiency or providers working with them. However, both states used the funds to support other initiatives, such as the work of resource and referral agencies, which included outreach to parents with limited English proficiency in some of their efforts.

State and local officials told us that despite these efforts, training opportunities for providers who speak other languages remained in short supply in some locations. For example, officials across the states and counties that we visited cited examples of child care providers with limited English proficiency who had attended training, such as training required for licensing, but could not fully understand the course content because it was not available in their languages. An official we interviewed told us that this could affect the quality of child care these providers would offer to children because the training covered critical issues, such as health and safety procedures.
State and local agency officials, providers, and community college representatives reported several challenges associated with providing oral language assistance to parents with limited English proficiency applying for child care and early education programs for their children. Officials told us they faced challenges providing oral language assistance because of the difficulty agencies had in hiring qualified bilingual staff. Even when qualified bilingual individuals were found, officials said that these individuals were in very high demand and agencies could not always compete with other organizations interested in hiring them. For example, some child care and Head Start providers told us that they were losing qualified bilingual staff to school districts that offer higher salaries. Rural areas especially experienced difficulties hiring bilingual staff because their pool of qualified candidates was smaller than in the cities or virtually nonexistent. A few officials said that the lack of reliable transportation in rural areas made it difficult to recruit staff from the cities. For example, a resource and referral agency official in one rural area that we visited told us that her office’s bilingual staff had quit because they had difficulty getting to work. Officials also cited difficulties with finding professional interpreters and with the expense associated with hiring them when agencies lacked bilingual staff of their own to offer oral language assistance to clients.

Agency officials also reported challenges providing written language assistance to parents with limited English proficiency. They said that translating materials into other languages was expensive, particularly for agencies that served clients from several different language groups and had to translate materials into multiple languages. Local agencies frequently relied on their own staff to translate the materials, but a few officials said that this posed a burden on staff with other full-time responsibilities. At the same time, state and local officials said that contracting out for translations was expensive. Although state officials acknowledged the expense associated with translating materials into other languages, some states left local agencies to shoulder the burden of translating documents on their own. For example, state officials in California told us that the expense prevented the state from translating applications into languages other than Spanish, but local agencies had absorbed the cost of translating applications themselves in order to meet the needs of program applicants who spoke other languages. In addition, officials said that providing language assistance or training in other languages was not always cost-effective because of the relatively small number of individuals that would benefit from such efforts. For example, one resource and referral agency official told us that materials in Spanish cost more than the same materials in English because they had to be purchased in smaller quantities, which increased the unit cost. Some officials said that while they were able to offer language assistance to larger language groups in the area, such as Spanish speakers, they chose not to expand their assistance to include other language groups because of the small number of individuals that would benefit from it.
Despite the challenges they faced, agency officials that we interviewed expressed the need for effective and affordable ways to provide services to individuals with limited English proficiency. Officials in three states visited told us that they would benefit from having additional information on cost-effective strategies to serve parents with limited English proficiency. Several officials also told us that it would be helpful for them to learn more about the professional development opportunities for providers offered at other locations. For example, officials in Illinois said that the state’s current capacity for provider training in Chinese was limited and that they would like to learn more about any curricula developed in other states with larger Asian populations.

HHS issued general guidance, translated materials, and provided technical assistance to grantees on serving children of parents with limited English proficiency, but gaps remain in its program review efforts. The Office of Head Start has provided assistance to increase awareness of the Head Start program among families with limited English proficiency and has monitored local programs’ efforts to provide access to these families by reviewing grantees’ assessments of need in the communities they serve and by conducting formal monitoring reviews of grantees. However, an Office of Head Start official told us that the office could not ensure that its review teams consistently reviewed grantee compliance with program standards related to language access, and in our prior work we found that no mechanism existed to ensure consistency in the monitoring process. CCB provided assistance to help programs serve children whose parents have limited English proficiency and also reviewed states’ CCDF plans and investigated complaints. However, CCB had no mechanism for reviewing how access to CCDF subsidies was provided for children of parents with limited English proficiency or for ensuring that these children were not inadvertently excluded from the subsidies as a result of state eligibility criteria that were inconsistent with CCB’s program eligibility guidance.

In 2003, consistent with Executive Order 13166, HHS issued guidance to federal financial assistance recipients regarding the Title VI prohibition against national origin discrimination as it affects individuals with limited English proficiency. The guidance was intended to help recipients of HHS funds, such as agencies administering CCDF subsidies and Head Start programs, provide meaningful access for individuals with limited English proficiency. The guidance, however, applied to all HHS programs and did not refer specifically to child care or early education. HHS’s OCR provided outreach to potential beneficiaries of HHS programs and offered technical assistance to grantees to help them comply with the guidance. For example, OCR officials told us that they disseminated information about serving individuals with limited English proficiency at Hispanic health fairs, through recorded public service announcements and interviews on Spanish-language media, and by giving presentations before community service organizations. They also said that they provided grantees with technical assistance in identifying appropriate language access strategies.
Regional OCR officials told us that their offices served as a resource for local social service agencies, directing them to less costly language access strategies, such as sharing interpreter services, and providing information on available resources and practices. OCR also participated in the Federal Interagency Working Group on Limited English Proficiency, which developed, among other things, a Web site devoted to serving persons with limited English proficiency (www.lep.gov). The Web site serves as a clearinghouse, providing information, tools, and technical assistance regarding limited English proficiency and language services for federal agencies, recipients of federal funds, users of federally assisted programs, and other interested parties. It makes available a range of guidance and information on offering language assistance through mechanisms such as interpreter services and translated materials for clients with limited English proficiency in the areas of health care, the courts, and transportation. However, it does not include specific information on providing language assistance in child care and early education programs. In addition, CCB and Office of Head Start officials and officials from several HHS regional offices told us that they were unaware of the Web site.

OCR is required to investigate all complaints of alleged discrimination, including lack of access to programs for individuals with limited English proficiency. OCR officials told us that Title VI violations in child care were rare. They said that when infractions do occur, they try to reach a voluntary compliance agreement with the state and conduct follow-up to ensure that the state takes corrective action to comply with the terms of the agreement. For example, North Carolina entered into a voluntary compliance agreement with OCR and implemented a corrective action plan for providing access for program applicants with limited English proficiency. A state official told us that the state was in the process of translating the subsidy application into Spanish as a result of this agreement.

The Office of Head Start has provided a variety of assistance to increase awareness of the Head Start program among families with limited English proficiency. The office has twice hosted a National Head Start Hispanic Institute, the goals of which included improving outreach to Hispanic communities, developing methods to effectively serve Hispanic children and families, and helping ensure positive outcomes in language and literacy development for English-language learners. A Head Start official told us that the needs of other language groups should be addressed as well and that the Office of Head Start was considering how to replicate the institute for groups that speak other languages. According to officials, the Office of Head Start has several other initiatives to reach parents with limited English proficiency, such as placing public service announcements on Spanish-language media and distributing a Spanish-language brochure informing families potentially eligible for Head Start of the benefits of enrolling their children. The Office of Head Start has also provided assistance to grantees to better serve children of parents with limited English proficiency. Recently, the office conducted a national language needs assessment of second language and dual language acquisition to identify culturally responsive, research-based strategies to improve outcomes for children and families.
It also developed a Culturally Responsive and Aware Dual Language Education (CRADLE) training initiative that is designed to support grantees in their efforts to find best practices for language acquisition for the birth-to-3-year-old population. In addition, through its English Language Learners Focus Group, the Office of Head Start created materials for grantees working with second language learners, including Spanish speakers who constitute the majority of children in Head Start whose parents have limited English proficiency. The Office of Head Start monitors grantees' efforts to provide access for individuals with limited English proficiency by reviewing their biennial community assessments and conducting formal on-site monitoring reviews. Head Start programs are required to conduct a community assessment at least once every 3 years, and Office of Head Start regional officials review these assessments for demographic disparities between program participants and the population of the community to be served. For example, programs with assessments showing large numbers or proportions of language groups in the community that are not reflected in the enrollees or the classroom teachers may be found out of compliance with the requirement to meet local needs. Head Start programs are also monitored by the Office of Head Start once every 3 years through the PRISM process. Head Start programs are required to adhere to program performance standards that define the services that programs are to provide to children and their families, and on-site PRISM review teams monitor Head Start grantees' adherence to the standards. Several of the standards directly address interactions with children and parents with limited English proficiency. For example, one performance standard requires communications with parents to be carried out in the parent's primary or preferred language or through an interpreter. Another performance standard directs programs in which the majority of children speak the same language to have at least one classroom staff member or home visitor who speaks that language. The contractor responsible for assigning bilingual reviewers to PRISM review teams told us that about 17 percent of reviewers were bilingual and that review teams requesting a Spanish-speaking bilingual individual had one assigned 70 percent of the time. A Head Start official with responsibility for the PRISM process told us, however, that given the vast number of regulations, it was impossible to ensure that all of them were consistently reviewed in the course of a 1-week review. In our previous work, we reported that ACF had no process in place to ensure that its reviewers consistently followed the standards while conducting on-site PRISM reviews. We recommended that ACF develop an approach that can be applied uniformly across all of its regional offices to assess the results of the PRISM reviews and implement a quality assurance process to ensure that the framework for conducting on-site reviews was implemented as designed. HHS agreed with our recommendation, and Head Start officials indicated that the Office of Head Start was developing new PRISM protocols and training reviewers to add more uniformity to how grantees are assessed.
In addition, the Office of Head Start recently announced plans to conduct follow-up reviews of grantees monitored through the PRISM system in an effort to ensure that PRISM review teams did not miss grantee deficiencies, such as in providing assistance to children and parents with limited English proficiency. CCB provided assistance to raise program awareness among parents with limited English proficiency whose children may be eligible for CCDF subsidies. Officials told us that CCB had translated a number of its consumer education materials into Spanish, including the CCDF program brochure and public service announcements informing parents where and how to locate child care. In a targeted effort to reach Hispanic families and providers, CCB also translated into Spanish a brochure outlining what providers should know about child care assistance for families. CCB, through a cooperative agreement with the National Association of Child Care Resource and Referral Agencies (NACCRRA), provides educational information to parents through the Child Care Aware Web site (www.childcareaware.org). In addition, NACCRRA has translated consumer education publications into Spanish, including a publication on paying for child care, which it made available through its Web site to resource and referral agencies nationwide. CCB officials told us that they were also looking into translating these publications into Chinese. CCB also sponsors a National Child Care Information Center Web site (www.nccic.org), which offers information on a wide range of child care issues, including a number of documents that relate to serving children from families with limited English proficiency. CCB officials told us that they provided opportunities for agencies and providers to share information, including information on serving children of parents with limited English proficiency. For example, CCB convened meetings of state CCDF administrators that, while not focusing specifically on issues of limited English proficiency, covered topics such as meeting the needs of diverse groups of children and parents. In addition, CCB maintains an online forum for states to pose questions and share ideas, which has been used to discuss such issues as converting print materials into Spanish. CCB also offers child care providers online access to training modules, practical strategies for serving children and families, and interactive online chats in English and Spanish through the Center on Social and Emotional Foundations for Early Learning Web site (www.csefel.uiuc.edu). While it has made efforts to assist states with serving the needs of children whose parents have limited English proficiency, CCB has no mechanism for reviewing how agencies provide access to CCDF subsidies for eligible children of parents with limited English proficiency or ensuring that these children are not inadvertently excluded as a result of state CCDF eligibility criteria that are inconsistent with agency guidance. CCB officials told us that CCDF is a block grant and CCB receives no funding specifically for supporting monitoring activities. As a result, CCB’s oversight of CCDF is limited to reviewing states’ CCDF plans and investigating complaints. CCB, however, does not require states to include assurances in their CCDF plans that state agencies are providing access to CCDF subsidies for children of parents with limited English proficiency. 
Regional officials told us that they had complaint processes in place and would either review complaints or refer them to OCR, but said that they were unaware of any complaints regarding restricted access for individuals with limited English proficiency. Officials in one region told us that states appeared to understand the CCDF program eligibility criteria. Officials in another region told us that while they interacted with states through phone calls and occasional on-site visits, these contacts primarily focused on the provision of technical assistance. Thus, these interactions were not a systematic review of how states determine eligibility for federal child care assistance. On our site visit to Arkansas, we found that the state had eligibility requirements that appeared to violate CCB guidance. Specifically, although guidance to state agencies administering CCDF clarified that only the citizenship and immigration status of a child was relevant when determining the child's eligibility for federal child care assistance, applicants for child care assistance in Arkansas had to submit a declaration that the applicant (typically a parent applying to receive assistance for the child) and all the other members of the household were U.S. citizens, nationals, or legal residents. In addition, the state's policy manual for the administration of CCDF services indicated that the state would deny any applications for child care assistance that were submitted by parents or custodians who were neither citizens nor lawfully admitted residents. These requirements had the potential to preclude children who otherwise met the eligibility criteria from receiving federal financial assistance on the basis of their parents' citizenship or immigration status. CCB officials told us that they were unaware of the situation until we brought it to their attention and that they were in the process of discussing with state officials how to resolve it. They further noted that they would investigate formal complaints brought to their attention, which would include complaints about states requesting unnecessary information on their child care subsidy applications and adversely affecting individuals with limited English proficiency. However, officials indicated that they had received no such complaints from affected parties. Access to high-quality child care and early education programs helps promote the healthy development of children and can provide an important support for parents as they pursue employment or education to secure the family's economic well-being and avoid public assistance. The resources available for nonentitlement child care and early education programs, such as CCDF subsidies and Head Start, are limited and not intended to cover everyone who meets eligibility criteria and is in need of assistance. Consequently, agencies have to make choices about whom they will cover with the limited funds, employing strategies such as prioritization of certain groups of applicants or waiting lists. At the same time, federal, state, and local entities play important roles in ensuring that parents' language ability does not preclude children from being considered for coverage under these programs. These roles are becoming especially important as the demographics of many communities are changing rapidly and localities across the country are seeing increased numbers of individuals with limited English proficiency.
While state and local agencies are making efforts to address the needs of this growing population, they experience difficulties in offering language assistance to parents seeking to access programs for their children and in recruiting new providers with the language ability to serve these families. However, without reliable data on who is enrolled in their programs, state and local officials may have difficulty determining the extent to which parents with limited English proficiency have access to these programs for their children and whether services need to be adjusted to accommodate changes in the population served. Although Congress provided states with flexibility in administering their CCDF program grants, HHS is responsible for ensuring that states adhere to the conditions of their grants and that they take reasonable steps to ensure access to individuals with limited English proficiency. Yet, HHS's existing methods for reviewing how CCDF funds are used by grantees do not systematically assess how access for parents with limited English proficiency is provided or identify state or local policies that may adversely affect these parents' ability to access programs for their children. HHS responds to complaints of any alleged discrimination or agency actions that adversely affect the ability of eligible children to access programs and services. However, HHS may lack the tools to ensure equal access for children whose parents have limited English proficiency if the parents do not bring complaints for reasons such as language difficulties, unfamiliarity with how the complaint process works, or fear of approaching government agencies. Without a mechanism to systematically review access to CCDF-funded programs for these families, HHS cannot ensure that all eligible children have the same opportunity to participate in programs that would benefit them and their families and possibly enhance their households' self-sufficiency. To help state and local agencies plan for language assistance and assess whether they provide meaningful access to eligible children, regardless of their parents' English ability, we recommend that CCB work with states to help them explore cost-effective strategies for collecting data on CCDF subsidy recipients' language preference or English proficiency and comparing these data with available information on community demographics. Once these data are available, HHS may consider collecting information on existing cost-effective ways for agencies to provide language assistance and to recruit providers who speak other languages, as well as disseminating this information in the locations where the data show the greatest need. To provide opportunities to parents with limited English proficiency to access federal child care subsidies for their children, we recommend that HHS develop and implement specific steps to review whether and how states provide access to CCDF programs for eligible children of parents with limited English proficiency, as well as provide information to help states evaluate their progress in this area. Specifically, HHS should revise the CCDF plan template to require states to report on how they will provide meaningful access to parents with limited English proficiency seeking CCDF subsidies for their children, and systematically review states' program eligibility criteria for CCDF subsidies to ensure that states comply with HHS policies related to participation by children of parents with limited English proficiency.
ACF provided written comments on a draft of this report, which are reproduced in appendix III. In its letter, ACF agreed with most aspects of our recommendations and provided information on its actions or plans that would support their implementation. In addition, ACF provided a number of technical comments that we incorporated as appropriate. In response to our recommendation that HHS work with states to help them explore cost-effective ways of collecting data on the primary language of CCDF subsidy recipients, ACF provided some additional information on actions it has taken to help states in this area. For example, it stated that in July 2006, CCB launched a technical assistance initiative that will, among other things, disseminate information to states on effective strategies to assist families with subsidy access, including families experiencing language barriers. Regarding our second recommendation, that HHS develop a mechanism to review how states provide access to CCDF subsidies for children of parents with limited English proficiency, ACF indicated that it will examine the feasibility of using the CCDF plan template to ask states to report on their efforts to promote access for these families. However, ACF did not address our recommendation that HHS systematically review states' eligibility criteria for CCDF subsidies to ensure that states comply with HHS policies related to participation by children whose parents have limited English proficiency. ACF also submitted detailed comments related to our analysis of national survey data collected in 1998 as part of ECLS-K. ACF noted that ECLS-K data only provide information on children in the year before kindergarten and that the analysis omits other variables that may explain our findings, such as preferences for certain types of care within ethnic communities and parents' immigration status. Our report discusses these data limitations, and as is the case with any statistical model, some of the factors with the potential to affect the outcomes we examined could not be included because the data measuring them were not collected. It is partly for that reason that we employed multiple methodologies in addressing our research objectives, including site visits and focus groups. ACF noted that the data represent child care and early education patterns for 1997 and that subsequent policy changes or increases in federal and state child care funding may have narrowed the gap in program participation among different groups of children. However, we found that some of the policy changes ACF cited were not consistently implemented, and ACF provided no more current data that would allow us to ascertain the effects of these changes. As such, ECLS-K remained the most recent national dataset that allowed us to compare children of parents with limited English proficiency and similar children whose parents are proficient in English with respect to their receipt of financial assistance for child care from a social service or welfare agency and their participation in Head Start. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of HHS, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO's Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7215. Other contacts and major contributors are listed in appendix IV. In conducting our work, we employed multiple methodologies, including a review of available data on participation of children in child care and early education programs, state and county site visits, focus groups with mothers who have limited English proficiency, interviews with federal officials and national experts, and a review of available legislation, guidance, and other federal resources. We performed our work in accordance with generally accepted government auditing standards between July 2005 and June 2006. To obtain information on the participation of children whose parents have limited English proficiency in child care and early education programs funded through the Child Care and Development Fund (CCDF) and Head Start, we obtained and reviewed the most recent program participation data from the U.S. Department of Health and Human Services (HHS), surveyed states about their data on CCDF subsidy recipients, and analyzed national survey data available through the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K). The relevant characteristics of data sources we examined are shown in table 1. We reviewed CCDF program participation data collected by CCB in the reports that states are required to submit on CCDF subsidy recipients but found that these reports did not contain any data related to language from CCDF subsidy recipients or their families. CCB officials confirmed that they do not currently collect any language data, since such data collection was not listed in the CCDF authorizing legislation. We reviewed language data for Head Start participants available from the Office of Head Start through the Head Start National Reporting System (HSNRS) and the Head Start Family and Child Experiences Survey (FACES). HSNRS, implemented in August 2003, is the nationwide skills test of over 400,000 children aged 4 and 5 in Head Start, intended to provide information on how well Head Start grantees are helping children progress. The Computer-Based Reporting System (CBRS) was developed for HSNRS to allow local Head Start staff to enter descriptive information about their programs, including the demographic characteristics of children assessed by HSNRS. We requested and reviewed HSNRS demographic data from spring 2005 that provided information on the primary language of children in Head Start. FACES is a series of longitudinal surveys of nationally representative samples of children in Head Start. We requested and reviewed fall 2003 FACES data, which included about 2,400 parent interviews that provided information on the languages spoken at home by Head Start families, parents’ self-reported English proficiency, and the availability of Head Start staff to communicate with children and parents in their preferred language. To assess the reliability of Head Start data, we interviewed relevant HHS officials and officials from Westat, a private research corporation administering and analyzing HSNRS and FACES under a contract with the Office of Head Start. In addition, we reviewed relevant documentation and examined the logs of the computer code used to generate the data provided to us. Because HSNRS data were collected only for 4- and 5-year-old children in Head Start, they cannot be used to generalize about all children in Head Start. The HSNRS data were entered into CBRS by the staff of local Head Start programs. 
While we did not independently verify these data, we did not find any evidence to suggest that they were unreliable. As part of FACES, interviews were held directly with parents of children in Head Start. While Spanish-speaking interviewers were available, parents with limited English skills who spoke other languages were required to provide their own interpreter. Parents unable to participate in an interview in English or Spanish or provide their own interpreters could not be included in the survey. According to a Westat official, however, only three interviews could not be conducted because of the lack of an interpreter. We determined that FACES data were sufficiently reliable for the purposes of this report. Because the available agency data did not allow us to determine the total participation of children of parents with limited English proficiency in federal child care and early education programs, we also analyzed survey data provided by NCES from ECLS-K, a national longitudinal study that follows children's early education and school experiences from kindergarten through 12th grade. We used data from the fall 1998 base year survey of approximately 18,000 parents with children in kindergarten. ECLS-K was the most recent national dataset that allowed us to compare child care, financial assistance for child care, and Head Start usage rates among children with parents who had limited English proficiency and children whose parents were proficient in English. Among other topics, ECLS-K asked parents about their English proficiency, the languages spoken at home, their child's use of child care in the year before kindergarten, any financial assistance from a social service or welfare agency, and the child's use of Head Start. The survey did not ask for the specific social service or welfare agency providing financial assistance for child care, so we were unable to make estimates about the use of CCDF subsidies from this dataset. NCES had bilingual interviewers available to conduct the survey in Spanish, Chinese, Hmong, and Lakota if the respondent was not able to speak English and no English-speaking member of the household was available. Slightly more than 7 percent of the interviews were conducted in a language other than English. More information about our analysis of ECLS-K data can be found in appendix II. To assess the reliability of ECLS-K data, we reviewed relevant information about the survey, including the user manual, data dictionary, and steps taken to ensure the quality of these data, and performed electronic testing to detect obvious errors in completeness and reasonableness. We determined that the ECLS-K data were sufficiently reliable for the purposes of this report. We also contacted child care administrators in all 50 states and the District of Columbia to determine whether any states collected their own data on the language of CCDF subsidy recipients. We discussed data collection with officials in 5 states in the course of our site visits and contacted officials in the remaining 45 states and the District of Columbia by e-mail. Of those contacted by e-mail, 40 states and the District of Columbia responded. Overall, 12 states and the District of Columbia collected some language data from parents whose children received CCDF subsidies.
We then followed up with officials in the District of Columbia and all 12 states that reported collecting data on the language of CCDF subsidy recipients to ask questions about the type of data collected, the methods by which the data were collected, the challenges states faced in collecting the data, and the purposes for which the data were used. We did not ask states to submit their data to us because we determined that the differences in states' data collection approaches and the limitations of state data would preclude us from aggregating state data to produce national estimates of CCDF subsidy use among children of parents who speak other languages. To obtain information on the challenges that parents with limited English proficiency face in accessing CCDF subsidies and Head Start and the assistance provided to these families by state and local entities, we visited 5 states—Arkansas, California, Illinois, North Carolina, and Washington. We selected these states on the basis of the size and growth of their population of individuals with limited English proficiency as determined by our analysis of 1990 and 2000 data from the U.S. Census Bureau, the states' geographic location, and the presence of initiatives focused on individuals with limited English proficiency as determined by our review of CCDF plans that states are required to submit to CCB every 2 years. We visited 10 counties across these states and contacted officials in 1 county by telephone. We selected counties with substantial numbers of individuals with limited English proficiency or that had experienced significant growth in this population based on the analysis of 1990 and 2000 U.S. Census data. (See table 2.) In choosing counties, we also considered the proportion of residents living in urban and rural parts of the county to obtain information on the experiences of families in both urban and rural areas. On each site visit, we interviewed various stakeholders in the child care and early education field at the state and local levels, including officials responsible for administering CCDF subsidies, representatives of child care resource and referral agencies, Head Start officials, and child care and early education providers, as well as officials from community organizations and advocacy groups working with individuals who have limited English proficiency. To obtain information on the challenges that parents with limited English proficiency face when accessing child care subsidies for their children, we conducted 12 focus groups with mothers who had limited English proficiency in California, Washington, and North Carolina. We selected these locations in order to include both states with historically large populations of individuals with limited English proficiency (California and Washington) and a state experiencing more recent growth in this population (North Carolina)—based on our analysis of data from the U.S. Census. GAO contracted with Aguirre International, a firm specializing in applied research with hard-to-reach populations, to recruit focus group participants through community-based organizations, arrange facilities for focus groups in locations familiar and accessible to the participants, provide transportation to and from child care during the focus groups, moderate the group discussions, and translate focus group transcripts. Focus groups were conducted from January 2006 to March 2006.
Consistent with focus group data collection practices, our design involved multiple groups with certain homogeneous characteristics. All focus groups were conducted with mothers of children aged 5 or younger enrolled in child care. These mothers also had limited English proficiency as self-reported by potential participants during the focus group recruitment process and were eligible for CCDF subsidies as determined by family income and parental work and education activities. The focus groups varied by primary language spoken and whether or not participants' children were receiving government child care subsidies. Eight of the 12 focus groups were conducted in Spanish and 4 in Vietnamese. We chose to conduct focus groups in Spanish and Vietnamese because these two languages were among the most prevalent languages, other than English, spoken in the states of interest. According to 2000 Census data, Spanish was the language most commonly spoken among households with limited English proficiency in the states we visited. In Washington, Vietnamese was the most commonly spoken language after Spanish, and in California, Vietnamese was the second most commonly spoken language after Spanish. We did not conduct focus groups in Vietnamese in North Carolina because of the limited number of individuals who spoke languages other than English or Spanish in the state. Six of the focus groups consisted of mothers with young children (ages 0-5) who were enrolled in child care and received a government subsidy for that care; the other 6 groups consisted of mothers with young children (ages 0-5) who were enrolled in child care and did not receive a government subsidy for that care, but whose children likely qualified for subsidies based upon their family's income and employment or education activities. Table 3 describes the characteristics of the group at each location and lists locations and dates for each focus group conducted. The number of participants in each focus group ranged from 6 to 13. To help the moderator lead the discussions, GAO developed a guide that included open-ended questions related to mothers' experiences finding appropriate child care and attempting to access financial assistance to help pay for the care. Discussions were held in a structured manner and followed the moderator guide. Focus groups involve structured small group discussions designed to gain in-depth information about specific issues that cannot easily be obtained from single or serial interviews. Methodologically, they are not designed to provide results generalizable to a larger population or provide statistically representative samples or reliable quantitative estimates. They represent the responses only of the mothers who participated in our 12 groups. The population of individuals with limited English proficiency in the United States consists of many cultural backgrounds and languages in addition to Spanish and Vietnamese, and those and other factors may influence the experiences and attitudes of parents with limited English proficiency regarding child care. Therefore, the experiences of other mothers may be different from those of focus group participants. In addition, while the composition of the groups was designed to include different states, languages, and subsidy participation status, the groups were not random samples of mothers with limited English proficiency.
To assess HHS's efforts to ensure access to its programs for parents with limited English proficiency, we interviewed HHS officials, reviewed documents and guidance produced by HHS for state and local grantees, and analyzed relevant legislation. We interviewed officials from CCB, the Office of Head Start, HHS's Office for Civil Rights, and the five HHS regional offices that covered the states that we visited. We also reviewed informational materials produced by HHS to facilitate access to programs for individuals with limited English proficiency and online resources pertaining to language access that were available through HHS's and the Department of Justice's Web sites. Additionally, we analyzed relevant legislation, federal regulations, and reports from research organizations. Finally, to obtain information pertaining to our research objectives, we interviewed officials from various national organizations working on issues related to early child care and education, as well as organizations advocating on behalf of individuals with limited English proficiency. We analyzed national survey data collected in 1998 as part of the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 (ECLS-K) from parents of kindergarten children about their children's experiences in the year before kindergarten. To conduct our analyses, we used logistic regression models to estimate the "net effects" of the parent's limited English proficiency on children's child care and early education patterns. We defined parents as having limited proficiency in English if the parent participating in the interview reported that a language other than English was spoken at home, and if the respondent himself or herself reported speaking English either "not very well" or "not well at all." We made this decision because we surmised that speaking is one of the main channels through which information about child care is communicated. Additionally, we made the decision to focus on the English language ability of the parent participating in the interview on the assumption that the respondent participating in the survey about his or her child would have a primary role in child care decisions. We considered the effect of the parent's limited English proficiency on four outcomes. First, we looked at the effect it had on the likelihood of the child receiving any type of nonparental child care in the year before the child was in kindergarten, regardless of whether the care was provided in a child care center (including a prekindergarten program) or by relatives or nonrelatives in some other setting. Second, we looked at the effect that limited English proficiency had on the likelihood of receiving financial assistance from a social service or a welfare agency to help pay for child care among those who did receive child care. Third, we looked at the effect that limited English proficiency had on the likelihood that the child care provided was in a center-based facility (rather than care provided by relatives or nonrelatives) because it has been suggested that children whose parents have limited English proficiency may be less likely to receive center-based care than other children. Fourth, and finally, we considered whether limited English proficiency affected the likelihood of participating in Head Start. By "net effects," we mean the effects of limited English proficiency that operate after we control for other factors that affect these different outcomes and that are related to limited English proficiency.
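To make this approach concrete, the following is a minimal illustrative sketch, written in Python with the statsmodels library, of how an unadjusted and an adjusted (net-effect) logistic regression of this kind can be fit. The file name and column names (any_care, lep, race, poverty, educ, adults, employed) are hypothetical stand-ins rather than actual ECLS-K variable names, and the sketch omits the survey weights used in the actual analysis.

```python
# Minimal sketch: estimating the "net effect" of parents' limited English
# proficiency (lep, coded 0/1) on a binary outcome such as receiving any
# nonparental child care (any_care, coded 0/1). Column names are
# hypothetical stand-ins, not ECLS-K variable names; survey weights are
# omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ecls_k_analysis_file.csv")  # hypothetical analysis file

# Unadjusted model: the lep indicator alone.
unadjusted = smf.logit("any_care ~ lep", data=df).fit()

# Adjusted model: control for race/ethnicity, poverty status, parental
# education, number of adults in the household, and employment status.
# C() treats a variable as categorical, with one level absorbed as the
# reference category.
adjusted = smf.logit(
    "any_care ~ lep + C(race) + C(poverty) + C(educ) + C(adults) + employed",
    data=df,
).fit()

# Exponentiating a coefficient converts it to an odds ratio; exp of the
# lep coefficient in the adjusted model is the adjusted odds ratio for
# children of parents with limited English proficiency relative to others.
print(np.exp(unadjusted.params["lep"]), np.exp(adjusted.params["lep"]))
```

The factors we controlled for, and the reasons for including them, are described next.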
The most obvious among these other factors is race or ethnicity. That is, the probability of using any nonparental care, receiving financial assistance for child care, having center-based care rather than some other form of care, and participating in Head Start are different among racial and ethnic groups, and English proficiency is vastly different for some groups, particularly Hispanics and Asians, than for whites and other races. As such, after looking at the difference between children of parents with limited English proficiency and other children on these outcomes, we used multivariate logistic regression models to re-estimate this difference when controlling for the effect of characteristics such as the child's race or ethnicity. The other characteristics we controlled for included household income (because of its effect on eligibility for some child care assistance programs and Head Start) and parental education (because previous studies have shown it to have an effect on participation in child care and early education programs). We also controlled for the number of persons over 18 in the household and whether the parent or parents in the household were employed because these can affect the availability of caregivers in the home and determine the need for child care and child care assistance outside of the home. Another reason why we controlled for parental employment status is that it is one of the factors considered for CCDF eligibility. When we looked at the likelihood of receiving any care or receiving that care in a center-based facility, as well as at the likelihood of receiving financial assistance for care received, we controlled for whether the family participated in Head Start, since we surmised this may affect whether additional child care was needed. Additionally, because we thought that being in multiple types of child care may affect the likelihood of one of them being provided in a center-based facility or being subsidized by an outside source, we also controlled for whether the child received multiple types of child care when we looked at the likelihood of a child being in center-based care or receiving financial assistance for child care. Finally, when we looked at whether financial assistance was received for the care, we controlled for whether the care was provided in a center-based facility on the assumption that the cost of care may be higher when it is provided in a formal center-based setting. Additionally, other factors, such as family preferences for a certain type of care and parents' immigration status, as well as changes in the CCDF program and child care policies within a particular state of residence, may affect child care and early education patterns of children. We partially mitigated the potential effect of preferences for certain types of care on the receipt of financial assistance for child care by controlling for whether or not the child was in center-based care. However, we could not include all factors that may have had an effect on the outcomes in the analysis because the ECLS-K did not collect the data to measure them. An understanding of how to interpret the results of these multivariate logistic regression models is facilitated by first considering tables 4 and 5, which estimate the effects of limited English proficiency, and race or ethnicity, on the first two of these four outcomes.
Tables 4 and 5 estimate how English proficiency and race or ethnicity are related to receiving any nonparental child care and to receiving financial assistance for child care (among those who received any nonparental child care). It is important to note that these estimates are unadjusted for other characteristics that are related to these outcomes, such as education, income, and employment status. The top section of tables 4 and 5 shows the effect of parents’ limited English proficiency on the two outcomes, the middle section shows the effect of the child’s race or ethnicity, and the bottom section shows the joint effect of the two, or the effect of limited English proficiency within each racial or ethnic category. We show these effects in each section of the tables by first providing percentages of children of parents with limited English proficiency and other children having a certain outcome. We then calculate odds and odds ratios for the likelihood of children within each of the two groups having these outcomes. Odds and odds ratios are the measures used to describe effects that underlie the logistic regression models we later employ to estimate net effects of limited English proficiency while controlling for other factors. Consider table 4, which provides percentages, odds, and odds ratios related to the differences in receiving any type of child care across children that differ by their parents’ English proficiency, their race or ethnicity, and both. We see in the top section that while approximately 75 percent of children whose parents are English proficient received some form of child care in the year preceding kindergarten, the same is true of only 46 percent of children whose parents have limited English proficiency. These percentages are derived from weighted data in our sample that take account of the fact that we are working with a sample that is not a simple random sample (where all individuals have an equal chance of being selected), but one in which children in some groups, namely Asians and Pacific Islanders, were oversampled. They are based, however, on the unweighted number of cases in our sample of 18,033 respondents (16,784 of them with parents proficient in English and 1,249 with parents with limited English proficiency), given in the third column of the table. The difference in these two percentages is sizable, and statistically significant, and would lead us to conclude that children of parents with limited English proficiency are less likely to receive nonparental care of any form. An alternative way to look at this difference is by calculating the odds of receiving child care, which is the percentage of children who receive child care divided by the percentage of children who do not. In the case of children of parents that are English proficient, these odds are 74.8/25.2 = 2.97, which implies that in that group, approximately 3 families use child care for every family that does not (or that 300 families do for every 100 families that do not). In the case of children of parents that are not English proficient, these odds are 45.6/54.4 = 0.84, which implies that for them, approximately 0.8 families use child care for every family that does not (or that 80 families do for every 100 that do not). The ratio of these two odds, or 0.84/2.97 = 0.28, tells us that the odds on receiving any care are decidedly lower for children of parents with limited English proficiency than for children of parents that are English proficient, by a factor of 0.28. 
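Stated generally, and using the figures above as a worked illustration, the odds and odds-ratio calculations take the following form, where p is the percentage of children in a group with the outcome:

```latex
\[
\mathrm{odds} = \frac{p}{100 - p}, \qquad
\mathrm{odds\ ratio} = \frac{\mathrm{odds}_{\mathrm{LEP}}}{\mathrm{odds}_{\mathrm{proficient}}}
\]
\[
\mathrm{odds}_{\mathrm{proficient}} = \frac{74.8}{25.2} \approx 2.97, \qquad
\mathrm{odds}_{\mathrm{LEP}} = \frac{45.6}{54.4} \approx 0.84, \qquad
\mathrm{odds\ ratio} = \frac{0.84}{2.97} \approx 0.28
\]
```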
The middle section of table 4 shows the differences in the percentages and odds of children receiving child care across racial or ethnic categories. The percentages of children receiving child care in the year before kindergarten are lower for minority children than for whites, and these differences are reflected in the odds as well. Among white children, about 3.6 children received child care for every child that did not, while among blacks and Asians approximately 2.4 children received child care for every child that did not. Among Hispanics, approximately 1.5 children received child care for every child that did not. Where variables have more than two categories, such as different categories of race and ethnicity, we chose one category as the reference category and calculated odds ratios that reflect how different each of the other categories is relative to that one. In this case, whites were chosen as the reference category, and the odds ratios of 0.67, 0.42, 0.66, and 0.54 indicate how much lower the odds of receiving child care were for blacks, Hispanics, Asians, and other races, respectively, than for whites. The bottom section of table 4 shows the differences in the percentages of children receiving any child care across the joint (or combined) categories of parents' English proficiency and the child's race or ethnicity. Here we have calculated odds for each of the joint categories, as well as odds ratios that indicate how the odds differ across English proficiency categories within each category of race or ethnicity. We can see that within most categories of race or ethnicity, children of parents with limited English proficiency have lower odds of receiving any child care than children of parents that are proficient in English, by factors such as 0.38 for Hispanics and 0.40 for Asians. The odds ratios for whites, blacks, and others were based on very small numbers of children of parents with limited English proficiency. Of the 1,249 children of parents with limited English proficiency, only 34, 10, and 8 children are white, black, and other, respectively, and these numbers are too small for us to assess whether and how much they differ from children of parents that are proficient in English. In sum, table 4 indicates that children of parents with limited English proficiency were less likely to receive any child care than children of parents proficient in English. Some of this is due to the fact that children of parents with limited English proficiency tend to be Hispanic and Asian, groups that are less likely than whites to receive child care. However, not all of it is due to race or ethnicity differences, since among Hispanics and Asians the children of parents with limited English proficiency were less than half as likely as others within the same racial or ethnic group to receive any child care.
That is, while Hispanic children were twice as likely as white children to receive financial assistance, and blacks and other races were approximately four times as likely, Asians’ odds of receiving financial assistance were not statistically distinguishable from those of whites (odds ratio = 0.70). Further, in the two groups—Hispanics and Asians—that had sizable numbers of children of parents with limited English proficiency, the effect of limited proficiency was different. Among Hispanics, the odds of receiving financial assistance were lower for children of parents with limited English proficiency than for children of parents that were proficient in English (odds ratio = 0.46), while among Asians the odds of receiving financial assistance were not statistically distinguishable between children of parents with limited English proficiency and children of parents that were proficient in English (odds ratio = 1.95). Among the other groups, the numbers of children of parents with limited English proficiency who received child care in the year prior to kindergarten were too small for us to be able to reliably detect any difference between them and others in the likelihood of receiving financial assistance. The tables above showed the gross or unadjusted differences in receiving child care and receiving financial assistance for child care between children of parents with limited English proficiency and children of parents proficient in English, and what those differences look like when we control for or take account of race or ethnicity, the factor with which parents’ limited English proficiency is most closely associated. However, limited English proficiency is associated with a number of other factors that affect these two outcomes, as well as the other two outcomes that were of interest to us, which were the likelihood of receiving center-based care (as opposed to care from relatives or nonrelatives in some other setting) and the likelihood of participating in Head Start. Tables 6 through 9 show that the percentages of children that are Hispanic or Asian, from lower-income families, have less educated parents, and have three or more persons in the household over the age of 18 are higher among children of parents with limited English proficiency than among other children. Tables 10 and 11 show that the percentage of children that have their parent (in single parent households) or both parents working and the percentage of children that receive multiple types of care are lower among children of parents with limited English proficiency than among other children. In tables 12 through 15 we show what the adjusted effect of parents’ limited English proficiency is on the likelihood of their child (1) receiving any nonparental child care, (2) receiving financial assistance for child care, (3) receiving center-based care, and (4) participating in Head Start, when we estimate its effect using logistic regression models to control for the effects of the other factors. In the first two columns of each table, we show the unadjusted effect of parents’ limited English proficiency on each outcome across all racial/ethnic groups, and what the adjusted effect looks like when we control for race or ethnicity and other factors. In the third and fourth columns of each table, we show the unadjusted and adjusted effect of parents’ limited English proficiency for Hispanics, and in the last two columns we show those same effects for Asians. 
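As background for reading tables 12 through 15, the underlying model can be stated schematically as follows; this is a generic form of a logistic regression with controls, not a reproduction of our exact specification. With x denoting an indicator for parents' limited English proficiency and z a vector of control variables (each categorical control entering as indicators for all categories except its reference category, whose effect is absorbed in the intercept):

```latex
\[
\log \frac{\Pr(Y = 1)}{1 - \Pr(Y = 1)} = \beta_0 + \beta_1 x + \boldsymbol{\gamma}^{\top}\mathbf{z},
\qquad
\text{adjusted odds ratio for limited English proficiency} = e^{\beta_1}
\]
```

The adjusted odds ratios reported in the tables correspond to this exponentiated coefficient.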
Separate analyses were done only for Hispanics and Asians because, as table 6 shows, the percentage of children of other races whose parents have limited English proficiency was very small. For the adjusted models, we also show the effects of the other factors that we controlled for, such as income and education, on the four outcomes. In the case of variables that have multiple categories (such as race or ethnicity, income or poverty status, education, and number of persons in the household over 18 years of age), the odds ratios indicate how much more or less likely the categories of families indicated are to have each outcome than the reference (or omitted) category. The reference category for race or ethnicity is white, the reference category for poverty status is less than 100 percent of the federal poverty level, the reference category for education is less than high school graduate, and the reference category for the number of persons in the household over 18 is one. Likelihood of receiving any nonparental care. Table 12 shows that before adjusting for other factors, the effect of parents’ limited English proficiency on the likelihood of receiving any type of nonparental childcare was negative and significant for all groups considered together, and for Hispanics and Asians considered separately (odds ratios of 0.28, 0.38, and 0.40, respectively). After controlling for these other factors, the differences between children of parents with limited English proficiency and other parents in terms of their receipt of any type of child care were smaller for all groups considered together and for Hispanics (odds ratios of 0.77 and 0.75, respectively), but not statistically significant among Asians (odds ratio of 0.85). While almost all of the control variables attain statistical significance in the model that included all racial and ethnic groups, the statistical significance of individual control variables in the models including only Asian or Hispanic children varies. Likelihood of receiving financial assistance for child care. Table 13 shows that before adjusting for other characteristics, the odds ratios estimating the effect of parents’ limited English proficiency on the likelihood of receiving financial assistance for child care were 0.60, 0.46, and 1.95 for all groups together, Hispanics, and Asians, although the result for Asians was not statistically significant. While other factors were significantly related to the likelihood of receiving financial assistance for child care, controlling for their effects did not markedly diminish the estimated difference between children of parents with limited English proficiency and other children overall, or for Hispanics or Asians. After other factors are taken into account, children of parents with limited English proficiency were about half as likely as others to receive financial assistance overall and among Hispanics (odds ratios of 0.41 and 0.44, respectively), but among Asians the difference was not statistically significant (odds ratio = 1.85). Likelihood of receiving center-based care. Table 14 shows that before adjusting for other factors, the effect of parents’ limited English proficiency on the likelihood of receiving center-based child care among those who received any type of child care was significant when all racial/ethnic groups were considered together (odds ratio = 0.44), and significant for Hispanics (odds ratio = 0.73) but not for Asians (odds ratio = 0.92). 
None of the differences between children of parents with limited English proficiency and other children were statistically significant, however, after we controlled for other factors. Likelihood of participating in Head Start. Table 15 shows that before adjusting for other factors, children of parents with limited English proficiency had higher odds of participating in Head Start when all ethnic/racial groups were considered together (odds ratio = 1.39). The same was true when Asians were considered separately (odds ratio = 3.81), but no significant effect of parents' limited English proficiency was found for Hispanics (odds ratio = 0.98). After controlling for other characteristics, children of parents with limited English proficiency had significantly lower odds of participating in Head Start when all racial/ethnic groups were considered together (odds ratio = 0.67), and when Hispanics were considered separately (odds ratio = 0.69), but significantly higher odds among Asians (odds ratio = 1.90). Betty Ward-Zukerman (Assistant Director) and Natalya Barden (Analyst-in-Charge) managed all aspects of the assignment. Laurie Latuda, Janet Mascia, Jonathan McMurray, and Ethan Wozniak made key contributions to multiple aspects of the assignment. Alison Martin, Grant Mallie, Amanda Miller, Anna Maria Ortiz, James Rebbe, and Douglas Sloane provided key technical assistance.

Questions have been raised about whether parents with limited English proficiency are having difficulty accessing child care and early education programs for their children. Research suggests that quality early care experiences can greatly improve the school readiness of young children. GAO was asked to provide information on (1) the participation of these children in programs funded through the Child Care and Development Fund (CCDF) and Head Start, (2) the challenges these families face in accessing programs, (3) assistance that selected state and local entities provide to them, and (4) actions taken by the Department of Health and Human Services (HHS) to ensure program access. To obtain this information, GAO analyzed program and national survey data, interviewed officials in 5 states and 11 counties, held 12 focus groups with mothers with limited English proficiency, and interviewed experts and HHS officials. HHS's Child Care Bureau (CCB) did not have information on the total enrollment in CCDF programs of children whose parents had limited English proficiency, but data collected by its Office of Head Start in 2003 showed that about 13 percent of parents whose children were in Head Start reported having limited English proficiency. The most recent (1998) national survey data showed that children of parents with limited English proficiency were less likely than other children to receive financial assistance for child care from a social service or welfare agency or to be in Head Start, after controlling for selected characteristics. Eighty-eight percent of these children were Hispanic, and their results differed from those of Asian children. Analysis of data from focus groups and site visit interviews held by GAO revealed that mothers with limited English proficiency faced multiple challenges, including lack of awareness of available assistance, language barriers during the application process, and difficulty communicating with English-speaking providers.
Some of the challenges that low-income parents with limited English proficiency experienced, such as lack of transportation and shortage of subsidized child care slots, were common to other low-income families. The majority of state and local agencies that we visited offered some oral and written language assistance, such as bilingual staff or translated applications. Agencies in the majority of locations visited also made efforts to increase the supply of providers who could communicate with parents. Officials reported challenges in serving parents with limited English proficiency, such as difficulty hiring qualified bilingual staff. Some officials indicated that additional information on cost-effective strategies to serve this population would facilitate their efforts. HHS issued guidance, translated materials, and provided technical assistance to grantees to help them serve children of parents with limited English proficiency. The Office of Head Start reviewed programs' assessments of their communities' needs and conducted formal monitoring reviews, but could not ensure that review teams consistently assessed grantees' performance on the standards related to language access. CCB reviewed states' plans on the use of CCDF funds generally and investigated specific complaints, but had no mechanism for reviewing how and whether states provide access to CCDF subsidies for eligible children of parents with limited English proficiency.
In 1990, we designated DOE program and contract management as an area at high risk of fraud, waste, abuse, and mismanagement. In January 2009, to recognize progress made at DOE's Office of Science, we narrowed the focus of the high-risk designation to two DOE program elements—NNSA and the Office of Environmental Management. In February 2013, in our most recent high-risk update, we further narrowed this focus to major projects (i.e., projects over $750 million) at NNSA and the Office of Environmental Management. DOE has taken some steps to address our concerns, including developing an order in 2010 (Order 413.3B) that defines DOE's project management principles and process for executing a capital asset construction project, which can include building or demolishing facilities or constructing remediation systems. NNSA is required by DOE to manage the UPF construction project in accordance with this order. The project management process defined in Order 413.3B requires DOE projects to go through five management reviews and approvals, called "critical decisions" (CD), as they move forward from project planning and design to construction to operation. The CDs are as follows: CD 0: Approve a mission-related need. CD 1: Approve an approach to meet a mission need and a preliminary cost estimate. CD 2: Approve the project's cost, schedule, and scope targets. CD 3: Approve the start of construction. CD 4: Approve the start of operations. In August 2007, the Deputy Secretary of Energy originally approved CD 1 for the UPF with a cost range of $1.4 to $3.5 billion. In June 2012, prior to the UPF contractor's August 2012 determination that the facility would need to be enlarged due to the space/fit issue, the Deputy Secretary of Energy reaffirmed CD 1 for the UPF with an estimated cost range of $4.2 to $6.5 billion and approved a phased approach to the project, which deferred significant portions of the project's original scope. According to NNSA documents, this deferral was due, in part, to the multibillion dollar increase in the project's cost estimate and to the need to accelerate the completion of the highest priority scope. In July 2013, NNSA decided to combine CD 2 and CD 3 for the first phase of UPF, with approval planned by October 2015. Table 1 shows the UPF's phases, scope of work, cost estimate as of June 2012, and proposed start of operations. Enriched Uranium Infrastructure Strategy for the Y-12 plant. In early February 2014, the NNSA Deputy Administrator for Defense Programs directed his staff to develop an Enriched Uranium Infrastructure Strategy to establish the framework of how NNSA will maintain the Y-12 plant's uranium mission capabilities into the future. Key aspects considered during the strategy's development included, among other things: (1) an evaluation of the uranium purification capabilities currently conducted in Building 9212 and the throughput needed to support requirements for life extension programs and nuclear fuel for the U.S. Navy; (2) an evaluation of the alternatives to the UPF that prioritizes replacement capabilities by risk to nuclear safety, security, and mission continuity; and (3) an identification of existing infrastructure as a bridging strategy until replacement capability is available in new infrastructure. A draft of the strategy was delivered to the Deputy Administrator in April 2014. NNSA is currently revising the draft, and an NNSA official said that the agency has not yet determined when it will deliver a revised version to the Deputy Administrator.
NNSA is currently evaluating alternatives to replacing enriched uranium operations at the Y-12 plant with a single facility. In early January 2014, NNSA began to consider options other than the UPF for enriched uranium operations at the Y-12 plant because, according to the UPF Federal Project Director, the project is facing budget constraints, rising costs, and competition from other high-priority projects within NNSA—such as the planned B61 bomb and W78/88 warhead nuclear weapon life extension projects. On April 15, 2014, NNSA completed a peer review that identified an alternative to replacing enriched uranium operations with a single facility. The results of the review, which were released to the public on May 1, 2014, included a proposed solution for replacing or relocating only Building 9212 capabilities (uranium purification and casting) by 2025 at a cost not exceeding $6.5 billion. This proposed solution would require NNSA to (1) construct two new, smaller facilities to house casting and other processing capabilities, (2) upgrade existing facilities at the Y-12 plant to house other uranium processing capabilities currently housed in Building 9212, and (3) appoint a senior career executive within NNSA’s Office of Defense Programs with the responsibility and authority to coordinate the agency’s overall enriched uranium strategy. As of July 2014, NNSA was still evaluating the review’s recommendations, but the NNSA Acting Administrator previously stated that NNSA does not plan to continue full operations in Building 9212, which has been operational for over 60 years, past 2025 because the building does not meet modern safety standards, and increasing equipment failure rates present challenges to meeting required production targets. In addition, according to NNSA officials, while NNSA was conducting its review, the UPF project team suspended some design, site preparation, and procurement activities that could potentially be impacted by the range of alternatives being considered. In January 2013, NNSA completed a review to identify the factors that contributed to the space/fit issue. This review took into account the actions completed by the contractor or in progress since the space/fit issue was identified, input from the contractor, and NNSA’s own experience with and knowledge of the project. NNSA identified a number of factors that contributed to the space/fit issue within both the contractor and NNSA organizations. Specifically: NNSA oversight. NNSA identified limitations in its oversight of the project. Specifically, NNSA determined that it did not have adequate staff to perform effective technical oversight of the project, and requests and directives from NNSA to the UPF contractor were not always implemented because NNSA did not always follow up. According to NNSA officials, when the space/fit issue was identified in 2012, the UPF project office was staffed by nine full-time equivalents (FTE). The Defense Nuclear Facilities Safety Board also raised concerns on several occasions prior to the space/fit issue about whether this level of staffing was adequate to perform effective oversight of the contractor’s activities. Design integration. NNSA found that the design inputs from subcontractors for the contractor’s 3D computer model, used to allocate and track space usage within the facility, were not well integrated. 
In 2008, the UPF contractor subcontracted portions of the design work, such as glovebox and process area design, to four subcontractors. To track how these design elements fit together, the UPF contractor developed a model management system that generates a 3D computer model of the facility as the design progresses. This 3D model was intended to, among other things, allow the contractor to determine whether there is adequate space in the building’s design for all processing equipment and utilities, or whether changes to the design are necessary to provide additional space. However, according to NNSA officials, prior to the space/fit issue, the design work of the four subcontractors was not well integrated into the model, and as a result, the model did not accurately reflect the most current design. Communications. NNSA identified communications shortcomings throughout the project. For example, the contractor did not always provide timely notification to the NNSA project office of emerging concerns and did not engage NNSA in development of plans to address these concerns. NNSA found that there was reluctance on the part of the contractor to share information with NNSA without first fully vetting the information and obtaining senior management approval. In addition, NNSA found that a “chilled” work environment had developed within the UPF contractor organization, and that, as a result, communications from the working level and mid-level managers up to senior management were limited because of concerns about negative consequences. Furthermore, communications between the NNSA project office, the UPF contractor, and NNSA headquarters were limited by a complex chain of command. According to NNSA officials, prior to 2013, the UPF project was managed by NNSA’s Y-12 Site Office, and the UPF Federal Project Director reported to NNSA at a relatively low level. NNSA officials said that, as a result, any concerns with the UPF project had to compete for attention with many other issues facing the Y-12 site as a whole. Management processes and procedures. NNSA found that the contractor’s management processes and procedures did not formally identify, evaluate, or act on technical concerns in a timely manner. In addition, NNSA found that the UPF contractor’s project management procedures had shortcomings in areas such as risk management, design integration, and control of the technical baseline documents. Specifically, some of the contractor’s procedures were not project-specific and could not be used for work on the UPF project without authorizing deviations or providing additional instructions. According to NNSA, these shortcomings led in part to inadequate control of the design development process, as the contractor did not document interim decisions to deviate from the design baseline, adequately describe the design, or maintain it under configuration control. In response to NNSA’s review of the factors that contributed to the space/fit issue, NNSA and the UPF contractor have both taken some actions to address the factors identified by the review. In addition, NNSA has begun to share lessons learned from the UPF project, consistent with both DOE’s project management order, which states that lessons learned should be captured throughout the course of capital asset construction projects, and with our prior recommendation to ensure that future projects benefit from lessons learned. The specific actions NNSA and the contractor have taken include the following: NNSA oversight.
NNSA has taken actions to improve its oversight of the UPF project to ensure that it is aware of emerging technical issues and the steps the contractor is taking to address them by, among other things, increasing staffing levels for the UPF project office from 9 FTEs in 2012 to more than 50 FTEs as of January 2014. According to NNSA officials, many of the additional staff members are technical experts in areas such as engineering and nuclear safety, and these additional staff have enabled NNSA to conduct more robust oversight of the contractor’s design efforts than was previously possible. For example, in July 2013, NNSA used some of these additional staff to conduct an in-depth assessment of the UPF contractor’s design solution for the space/fit issue. This assessment found that, among other things, as of July 2013, the facility design and 3D model were not sufficiently complete to determine whether there was adequate space remaining in parts of the facility to accommodate all required equipment while still providing adequate margin for future design changes during construction and commissioning. The assessment also found that the contractor’s monthly space/fit assessment reports, developed to evaluate and report on space utilization in the facility, were providing an overly optimistic view of space/fit, leading to a low level of senior management engagement in resolving these issues. According to NNSA officials, as of January 2014, the UPF contractor had taken actions to address many of the assessment’s findings, and the agency plans to continue to monitor the contractor’s performance closely in these areas through its normal oversight activities, such as attending periodic meetings to review the 3D model. Design integration. According to NNSA and UPF contractor officials, the UPF contractor took steps to better integrate the efforts of the subcontractors conducting design and engineering work on different elements of the facility. For example, in late 2012, the UPF contractor hired a model integration engineer to integrate the subcontractors’ design work and ensure that all design changes are incorporated into the model so that it accurately reflects the most current design. The model integration engineer also manages a team of subject matter experts who monitor space utilization in each individual process area as the design progresses and conduct monthly assessments of the space margins remaining in each area. In addition, the UPF contractor also developed a formal change control process to define and manage space within the 3D model. Under this process, design changes made by the individual design teams must be submitted to the model integration engineer for approval to ensure that they do not exceed the boundaries established for each process area or interfere with other equipment. Furthermore, changes that have a significant impact on equipment layout must be approved by a review board prior to being accepted and integrated into the model. According to contractor officials, as of January 2014, the subcontractor teams had submitted 111 change requests, and 75 requests had been approved. The officials said that they are working to reduce the remaining backlog. According to NNSA, the contractor also developed a monthly space/fit assessment process to evaluate and report on space utilization in the facility. 
As part of this process, the model integration team evaluates the space remaining in each process area of the facility to determine whether each area has (1) no space/fit challenges, (2) no current space/fit challenges but the potential for challenges in the future as a result of the design being less complete than other areas, or (3) confirmed space/fit challenges, i.e., areas where design changes are necessary to ensure that all equipment will fit into the space allotted to it. The model integration team then prepares a report and briefs senior project management on its findings. According to a UPF contractor document, as of December 2013, 26 process areas had no space/fit challenges, 13 process areas had no challenges but had the potential for challenges in the future, and 2 process areas had confirmed space/fit challenges. NNSA and UPF contractor officials said that, as of January 2014, they were confident that these remaining space/fit challenges can be addressed within the current size parameters of the facility, but that the project will not have absolute certainty about space/fit until the design is fully complete. Instead, the project will only be able to gradually reduce the amount of space/fit uncertainty and risk as the detailed design progresses. However, the officials said that, prior to CD 2/3 approval, the contractor is required to conduct a detailed review of the 3D model to ensure there is adequate space for all equipment and utilities, and NNSA plans to assess the results of this review. Communications. According to an NNSA official, communications between NNSA and the contractor significantly improved after the space/fit issue was identified, and the contractor kept NNSA better informed of emerging concerns and its plans to address these concerns. In addition, NNSA held a partnering session with the contractor in June 2014, which included management representatives from NNSA and the contractor in functional areas such as engineering, nuclear safety, and procurement, and included discussions on defining federal and contractor roles, managing change, and mapping the path forward for the project. On July 15, 2014, NNSA and the contractor signed a formal partnering agreement to enhance (1) clarity and alignment on mission and direction, (2) transparency, (3) responsiveness, and (4) effectiveness in meeting commitments, among other things. The agreement also included a commitment to meet quarterly to discuss progress made toward achieving these goals. NNSA and UPF contractor officials also said that the contractor took steps to enhance communications between working-level employees and senior management and improve its organizational culture after the space/fit issue was identified. For example, the contractor established a Differing Professional Opinion (DPO) process through which employees can raise concerns to project management, began conducting annual surveys of the project’s safety culture to determine the extent to which employees are willing to raise concerns, and formally defined its safety culture policy to conform to guidelines established by the Nuclear Regulatory Commission. According to NNSA and UPF contractor officials, the contractor’s annual surveys showed a steady improvement in employees’ willingness to bring concerns and issues to management since the space/fit issue was identified. 
In addition, the contractor brought in senior project and engineering managers from outside the UPF project to foster greater communication between senior managers and working-level employees. NNSA also recently reorganized its management of major construction projects, including the UPF, resulting in more direct communications between the UPF project office and NNSA headquarters. Specifically, in 2012, the UPF Federal Project Director (FPD) began reporting directly to NNSA’s Office of Acquisition and Project Management (APM), rather than reporting to NNSA at a relatively low level through the Y-12 Site Office, and NNSA officials said that this new organizational structure has streamlined NNSA’s management of the project by increasing the FPD’s control over project resources and functions, as well as the FPD’s responsibility and accountability for achieving project goals. Management processes and procedures. According to the UPF contractor, it developed formal processes for identifying and tracking the status of major technical and engineering issues. For example, according to NNSA and contractor officials, the contractor implemented a process for tracking the project’s highest-priority action items, as determined by the project’s management team, including certain issues related to space/fit. Specifically, as of January 2014, these items included actions to ensure that technical changes are fully reviewed so that their impact on the project’s design, procurement activities, and construction is understood. In addition, according to UPF contractor officials, the contractor implemented a separate system to track the identification and resolution of significant technical issues during the design process, and any employee can submit a technical issue for inclusion in this system if they believe that it is serious enough to require management attention. After an issue is added to the system, the corrective actions implemented to address it are tracked until they are completed, and technical issues affecting space/fit are placed into a separate, higher-priority category within the system. As of January 2014, there were nine technical issues affecting space/fit in this higher-priority category, and three of those issues had been resolved. For example, in August 2013, the project identified a technical issue in which one processing area did not contain enough space to accommodate the replacement of a component, but the project developed a solution that resolved the issue in October 2013. In addition, according to NNSA and the UPF contractor, the contractor uses a separate system to track the status of non-technical issues that are identified by project reviews. The contractor uses this system to formally assign responsibility for any corrective actions to the appropriate contractor personnel and to monitor the status of each action until completion. In order for a corrective action to be closed out in this system, the personnel responsible for the corrective action must provide evidence of completion. For example, in April 2013, the contractor identified nine corrective actions needed to address a number of the contributing factors for the space/fit issue, and began using this system to track their status. As of January 2014, six of these actions had been completed, two were in process, and one had been cancelled.
For example, the contractor was still in the process of reviewing and evaluating the procedure set used for the project to identify any improvements necessary, and the cancelled corrective action—the development of a communication partnering policy between NNSA and the contractor—was replaced by the June 2014 partnering session discussed above. NNSA has also recently begun to share lessons learned from the space/fit issue. This was an original goal of NNSA’s review of the factors that contributed to the space/fit issue, and is consistent with both DOE’s project management order, which states that lessons learned should be captured throughout the course of capital asset construction projects, and with our prior recommendation to ensure that future projects benefit from lessons learned. NNSA officials said that lessons learned from the space/fit issue had been informally incorporated into other NNSA activities in a variety of ways, including informing independent project reviews and cost estimates, and led to a broader recognition of the need for increased federal staffing levels to enhance NNSA’s oversight activities on other projects. More recently, the UPF Federal Project Director conducted a presentation on lessons learned from the UPF project, including lessons learned from the space/fit issue, at a July 2014 training session for federal project directors. As we have noted in other work, the sharing of lessons learned is an important element of NNSA’s and DOE’s efforts to better inform and improve their management of other capital acquisition projects. As we reported in December 2013, NNSA estimated that it will need approximately $300 million per year between 2019 and 2038 in order to fund the construction projects it plans to undertake during that time. Documenting the lessons learned as a result of the UPF space/fit issue may help prevent other costly setbacks from occurring on these other projects. We are not making any new recommendations in this report. We provided a draft of this report to NNSA for comment. In its written comments (see appendix I), NNSA generally agreed with our findings. NNSA also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. David C. Trimble, (202) 512-3841 or [email protected]. In addition to the individual named above, Jonathan Gill (Assistant Director), Mike Armes, John Bauckman, Patrick Bernard, Antoinette Capaccio, Will Horton, and Steven Putansu made key contributions to this report.

NNSA conducts enriched uranium activities—including producing components for nuclear warheads—at the Y-12 National Security Complex in Tennessee. NNSA has identified key shortcomings in the Y-12 plant’s current uranium operations, including rising costs due to the facility’s age. In 2004, NNSA decided to build a new facility—the UPF—to consolidate and modernize its enriched uranium activities.
In July 2012, the UPF contractor concluded that the UPF’s processing equipment would not fit into the facility as designed, and that addressing this issue—which NNSA refers to as a “space/fit” issue—would cost an additional $540 million. The Fiscal Year 2013 National Defense Authorization Act mandated that GAO periodically assess the UPF. This is the fourth report, and it assesses (1) factors NNSA identified that contributed to the UPF space/fit issue and (2) actions, if any, NNSA and the UPF contractor have taken to address the space/fit issue. GAO reviewed NNSA and contractor documents, visited the Y-12 plant, interviewed NNSA and UPF contractor representatives, and observed the computer model NNSA and the UPF contractor use to track space usage within the facility. GAO is not making any new recommendations. In commenting on a draft of this report, NNSA generally agreed with GAO's findings. In January 2013, the National Nuclear Security Administration (NNSA) completed a review to identify the factors that contributed to the space/fit issue with the Uranium Processing Facility (UPF), and identified a number of factors within both NNSA and the contractor managing the UPF design at that time. NNSA's review identified shortcomings in (1) federal oversight of the project, (2) design integration, (3) communications, and (4) the UPF contractor's management processes and procedures. For example, NNSA determined that it did not have adequate federal staff to perform effective oversight of the project, and that the design inputs for the computer model the contractor used to allocate and track space utilization within the facility were not well integrated. NNSA also found that communications shortcomings occurred because the contractor did not always provide timely notification to the NNSA project office of emerging concerns, and that the contractor's management processes and procedures did not formally identify, evaluate, or act on technical concerns in a timely manner. NNSA and the UPF contractor took actions to address the factors that contributed to the space/fit issue, and NNSA has begun to share lessons learned from the space/fit issue, consistent with both Department of Energy (DOE) guidance and GAO's prior recommendation to ensure that future projects benefit from lessons learned. Specifically, NNSA has taken actions to improve its oversight of the project by increasing federal staffing levels for the UPF project office from 9 full-time equivalents (FTE) in 2012 to more than 50 FTEs as of January 2014. According to NNSA officials, these additional staff enabled NNSA to conduct more robust oversight of the contractor's design efforts than was previously possible. The contractor also took steps to better integrate the efforts of the four subcontractors that are conducting design and engineering work on different elements of the facility. For example, in late 2012 the contractor hired an engineer to integrate the subcontractors' design work and ensure that all design changes were incorporated into the contractor's computer model. The contractor also improved design integration by developing a monthly assessment process to evaluate and report on space utilization in the facility. In addition, according to an NNSA official, communications between NNSA and the contractor significantly improved after the space/fit issue was identified as the contractor kept NNSA better informed of emerging concerns and its plans to address them.
The contractor also developed formal management processes for identifying and tracking the status of major technical and engineering issues. For example, the contractor implemented processes for tracking the identification and resolution of both technical and non-technical issues during the design process. In addition, NNSA has recently begun to share lessons learned from the space/fit issue, consistent with DOE guidance and our prior recommendation to ensure that future projects benefit from lessons learned. For example, in July 2014, the UPF federal project director conducted a presentation on lessons learned from the UPF project, including lessons learned from the space/fit issue, at a training session for NNSA federal project directors.
The operation of U.S. embassies and consulates requires basic administrative support services for overseas personnel, such as building maintenance, vehicle operations, and travel services, among others. Traditionally, these services were provided by State. In 1955, State established the Shared Administrative Support Program under which it provided administrative support services, on a reimbursable basis, to other agencies. The Foreign Affairs Administrative Support (FAAS) system, under which State paid fixed support costs and agencies paid the remaining administrative support costs, was established in 1976. However, FAAS’s cost-allocation processes were opaque, and customers felt that fees were not in line with the quality of services received. During the 1980s and 1990s, overseas posts experienced increases both in staffing from nontraditional foreign affairs agencies and in demand for services. In addition, agencies’ growing dissatisfaction with how the system operated and shrinking resources led, in part, to the establishment of ICASS. ICASS is a performance-based cost distribution system designed to provide quality administrative support services at the lowest cost while attempting to ensure that each agency pays the true cost of its overseas presence. According to the Foreign Affairs Handbook, the system’s four primary goals are as follows:

Contain or reduce costs. ICASS seeks, in part, to contain or reduce overall government costs for overseas administrative support services. Service providers and customers are to select the most cost-effective methods for providing services by choosing among competitive alternatives, whether internal or external to the U.S. government. The system’s designers felt this cooperative approach would encourage greater participation by agencies that traditionally operated their own administrative support structures and would ultimately lead to a reduction in duplicative structures; streamlined service provision; and, therefore, savings through the development of economies of scale.

Provide quality administrative services and increase customer satisfaction. Under ICASS, the customers and service providers at each post are responsible for agreeing on service standards that define quality, cost-efficient service at that post. The local ICASS Council, composed of senior managers representing each agency at a given post, is responsible for tracking and evaluating service provider performance in meeting cost and quality standards.

Establish a simple, transparent, and equitable cost-distribution system. ICASS Councils are supposed to agree on a transparent method whereby the basis for all post- and nonpost-related ICASS service costs can be shown to and understood by customers and service providers both at the posts and at Washington headquarters. Moreover, a database containing billing, budgeting, and other management information was developed and can be accessed by all participants in the system. ICASS seeks to encourage equity by charging customers their fair share of administrative service costs at posts and by giving agencies a greater voice in how shared administrative services are managed and delivered.

Promote local empowerment. Under ICASS, posts were granted more responsibility and authority to manage their resources because posts were seen as best positioned to determine the levels of administrative support needed. Under the previous system, these decisions were made centrally in Washington.
However, under ICASS, decisions on the services that will be provided at a post, the methods for providing them, and who will provide them are made at the post by the local ICASS Council. Moreover, posts have the primary role in resolving disputes between customers and service providers. Agencies obtain support services by subscribing to cost centers, which are groups of similar services bundled into larger categories (see app. II). All agencies with American direct-hire staff must subscribe to two cost centers: the Basic Package—services that can only be obtained by the embassy, such as securing diplomatic credentials from the host country—and services provided by the Community Liaison Office, such as providing welcoming and orientation materials, assisting family members with employment opportunities, and helping enroll dependent children in education programs. All remaining cost centers are optional for agencies. Costs of services are distributed among customers enrolled in each cost center either on the basis of the number of people an agency has at post (capitation) or on the amount of service the agency actually uses (workload). In addition, agencies may modify the level of services cost centers provide by taking the full amount, a medium level, or a low level. Agencies selecting medium or low levels of services are charged 60 percent and 30 percent of the full costs associated with the cost center, respectively.

ICASS is a two-tiered system based in Washington and at overseas posts that relies on collaboration among multiple agencies to develop and implement ICASS policies (see fig. 1). The Foreign Affairs Handbook details the responsibilities of three Washington-based ICASS bodies.

The ICASS Executive Board is the top decision-making authority within ICASS and is responsible for reviewing and making policy and providing leadership in addressing worldwide improvements and cost reductions for administrative services. It also resolves issues and disputes raised by Washington-based or overseas ICASS groups. The Assistant Secretary of State for Administration permanently chairs the Executive Board, and members generally include assistant secretary-level officers from participating agencies.

The interagency ICASS Working Group, which is open to all agencies represented on ICASS Councils at overseas posts, is a staff arm of the Executive Board responsible for presenting policy issues to the board, making policy decisions when delegated to do so by the board, resolving issues raised by posts, and reviewing and approving nonpost costs and factors.

The ICASS Service Center, an interagency-staffed office organizationally located in State’s Bureau of Resource Management, is primarily responsible for overseeing worldwide ICASS operations, including providing support to embassies and consulates on training, financial, and budgetary matters and general guidance on implementing ICASS. The Service Center also provides support to the Working Group and the Executive Board in developing new policy, but the center has no policy-making authority of its own.

Although general ICASS policy is set in Washington, overseas diplomatic posts are responsible for decisions on implementing the system. At the core of operational decision making is the post’s ICASS Council. This is an interagency body consisting of representatives from each of the agencies at the post that receive ICASS services. Representatives to the ICASS Council must be direct-hire U.S.
citizen employees and are usually the local head of the agency they represent. A Council Chair elected by the representatives for a 1-year term heads the group. ICASS Councils are charged with developing all local policies on what services will be available at the post; how those services will be delivered; whether State, another agency, or a contractor will provide the services; and how fees are established and customers charged. The councils are also responsible for developing ICASS performance standards for all services provided at their respective posts; for annually reviewing service providers’ performance and customer satisfaction; and for updating standards, as needed. Although consensus building is the preferred mode for decision making, voting is allowed on a one funding-code, one-vote basis. However, agencies that are not subscribed to a specific ICASS service may not vote on decisions that affect that service. Although they are chiefly tasked with overseeing ICASS operations and service delivery, the Deputy Chief of Mission and service provider representatives also participate as ex-officio council members. In this capacity, they provide advice and technical assistance to the representatives but are not authorized to vote on matters affecting the post’s ICASS policies or operations. Locally employed staff, such as foreign nationals, and others may also provide technical assistance to the council, both in terms of making presentations or participating in local working groups assigned to a specific task, but they have no formal role in helping the council achieve consensus on issues. The Chief of Mission—who is usually a U.S. ambassador but could also be a Charge d’Affaires, Consul General, or Director of a U.S. Office (such as in Pristina, Kosovo), depending on the post—retains the ultimate oversight and responsibility for ICASS at overseas posts. In cases where the Chief of Mission vetoes a decision, or implements a decision contrary to the ICASS Council’s desires, the council may appeal the decision to the Executive Board in Washington. ICASS has not resulted in efficient delivery of administrative support services or achieved economies of scale because it has neither eliminated costly duplication of administrative support services nor led to systematic cost-containment measures and the streamlining of operations. From the start of ICASS, many agencies did not sign up for ICASS services and decided instead to self-provide administrative support services, which created duplicative administrative systems that can raise overall government costs. While agencies cited affordability concerns, programmatic needs, and control issues as reasons for not subscribing to ICASS services, we found that they seldom provided detailed business cases that justified decisions to self-provide support services. In addition, neither service providers nor customer agencies have made systematic efforts to contain costs by consolidating or streamlining services. Moreover, ICASS structures designed to encourage and reward managerial reforms are not adequate for overcoming strong disincentives deriving from resource management authorities and parochial interests of both customers and service providers. However, State and the U.S. Agency for International Development (USAID) have recently taken some steps to make the delivery of embassy support services more efficient. 
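Before turning to the detailed findings, the cost-distribution rules described earlier (capitation versus workload billing, and the 100, 60, and 30 percent charges for full, medium, and low service levels) can be made concrete with a short sketch. All agency names, costs, and headcounts below are invented, and treating reduced service levels as weights in the distribution is one plausible reading of the rules, not the actual ICASS billing algorithm.

```python
# Hypothetical illustration of ICASS cost distribution; not the real system.

SERVICE_LEVEL_WEIGHT = {"full": 1.00, "medium": 0.60, "low": 0.30}

def distribute_by_capitation(total_cost, subscribers):
    """Distribute a cost center's total cost by headcount (capitation).
    subscribers maps agency -> (headcount, service_level)."""
    weights = {
        agency: count * SERVICE_LEVEL_WEIGHT[level]
        for agency, (count, level) in subscribers.items()
    }
    total_weight = sum(weights.values())
    return {agency: total_cost * w / total_weight for agency, w in weights.items()}

def distribute_by_workload(total_cost, usage):
    """Distribute a cost center's total cost by actual usage (workload).
    usage maps agency -> units of service consumed."""
    total_units = sum(usage.values())
    return {agency: total_cost * u / total_units for agency, u in usage.items()}

# A notional $100,000 cost center billed by capitation:
bills = distribute_by_capitation(
    100_000,
    {"State": (40, "full"), "Agency A": (10, "medium"), "Agency B": (5, "low")},
)
for agency, bill in bills.items():
    print(f"{agency}: ${bill:,.0f}")
```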
When agencies choose not to subscribe to ICASS services, they still have administrative needs that must be filled, which may lead to the establishment of redundant administrative structures at posts. From the very beginning of the program, agencies frequently chose not to take some ICASS services available to them. In fiscal year 1998, the average rate of non-State agencies’ participation in available cost centers ranged from about 31 percent to about 87 percent (see app. III). Decisions not to take ICASS services at the program’s onset may represent missed opportunities to achieve economies of scale. When an agency opts out of a service it needs, it often must provide that service either by creating new positions at the post or securing the service from the local market. This results in a duplication of services—a situation where an agency creates an administrative structure similar to, but apart from, what it could receive under ICASS. There are often defensible reasons for an agency to develop such a structure, such as demonstrated program needs or logistical constraints. Less supportable duplication, however, exists when agencies self-provide services without any apparent demonstrated need. The State Inspector General reported in 2000 that although self-provision rather than subscribing to an ICASS service may save individual agencies money, it can also result in increased costs for agencies that continue taking the ICASS service, as well as for the U.S. government overall. Officials in Washington and at posts said that adjustments to a post’s ICASS personnel are generally not made to compensate for the reduced ICASS workload that occurs when agencies opt out of a cost center. As a result, the ICASS costs associated with that cost center remain the same and must be distributed among a smaller population of subscribers. In addition, overall costs rise due to the new costs associated with the agency’s self-provision of the service. For example, USAID in Dakar recently identified a need to obtain vehicle maintenance services outside the ICASS structure because the location of its new offices in relation to the ICASS vehicle maintenance facility prevented USAID from getting convenient, timely service. As a result, USAID developed and implemented a business plan to contract with a local service station near its offices, which USAID officials expected would reduce their fixed costs for this service from about $21,200 under ICASS in 2003 to about $7,400 in 2004 (see fig. 2). However, although USAID notified the post ICASS Council of its intention to withdraw from the ICASS service, the reason for its doing so, and its general plan to contract with a local vendor for its vehicle maintenance needs, the agency did not provide details on how it would receive the needed services, nor did the council request that information or discuss whether USAID’s new approach could be adopted postwide. Moreover, despite a reduction in the workload associated with 13 USAID vehicles, there was no change in the composition of ICASS staff responsible for vehicle maintenance after USAID withdrew from the service. Thus, the approximately $21,200 for labor and ICASS redistribution charges formerly associated with USAID’s bill would be distributed among agencies that retain their service subscriptions.
In addition, labor costs associated with USAID’s newly self-provided service represent increased overall government spending because the agency now pays additional people (i.e., the local vendor) to provide a service it could otherwise receive from existing embassy employees. Thus, total government costs for vehicle maintenance in Dakar would rise by about $7,400. Agency officials in Washington and the field said the most common reasons for not subscribing to a service are the cost of the service, agencies’ unique programmatic circumstances, agencies’ desire to have greater control over services, and a lack of need for some services. Agencies cited two cost-related reasons to seek administrative support outside of ICASS. First, many agencies said that ICASS services are too expensive, in part due to the high labor costs associated with U.S. government employees hired to work overseas, and reported that they could self-provide the same services for less money by hiring local labor. Under ICASS, customers pay the salaries and benefits for both Foreign Service officers and foreign nationals who provide administrative support services. Figure 3 shows that in 2000, labor costs comprised over 60 percent of total ICASS costs. American direct-hire employees comprise roughly 5 percent of ICASS employees but represent 30 percent of the total labor costs. State estimates the average annual cost of maintaining a Foreign Service officer at an overseas post to be about $346,000 per year. Second, agency officials reported that ICASS cost increases have forced them to place greater emphasis on finding savings, including examining the need to continue subscribing to some ICASS services. Total ICASS costs rose 29.4 percent between 2001 and 2003, from $758 million to $981 million, as a result of new security requirements following the terrorist attacks of September 11, 2001; State’s increased hiring of American personnel; new services to be provided; and adjustments to the exchange rate, among other reasons (see fig. 4). As a result, agencies have chosen to subscribe to fewer ICASS services than in previous years (see app. III). Of the 23 agencies located at 10 or more posts in both 2001 and 2003, 21 had lower participation rates in 2003 than in 2001. Participation rate reductions ranged from 1.4 to 6.6 percentage points. In addition, 18 of the 23 agencies paying ICASS fees at 10 or more posts in both 1998 and 2003 had participation rates that were lower in 2003 than in 1998, ranging from 0.7 to 14.1 percentage points. Because of rising costs and budgetary constraints, the U.S. Commercial Service reduced its average subscription rate for all services available at all posts at which it has a presence from 83.8 percent in 2000, one of the highest rates for any agency, to 74.8 percent in 2003. Agencies also cited unique programmatic circumstances associated with overseas programs that require them to self-provide services. For example, Peace Corps officials in Dakar stated that the remote location of Peace Corps volunteers throughout Senegal, combined with the need for staff in Dakar to make routine visits to these remote locations, requires that the office own, operate, and maintain a vehicle fleet separate from the ICASS vehicle service. Similarly, a U.S. federal law enforcement officer in Vienna said that all of his agency’s overseas officers are authorized to maintain a government-owned vehicle because they need immediate access to transportation on a 24-hour basis. 
In addition, because USAID’s offices in Egypt and Senegal are in locations outside the respective main U.S. embassies, these offices employ staff to provide administrative support services, such as nonresidential building operations. Agencies also cited control as a factor for self-providing services. Some customer agency officials perceived an implicit service delivery bias toward State employees, saying State employees’ needs are placed ahead of others. Although we discovered no evidence—hard or circumstantial—supporting this contention, agencies throughout the eight posts we examined stated that they maintained their own vehicle fleets so they would have immediate transportation access. In addition, unless an ambassador requires all agencies at a post to participate in the furniture pool, the Drug Enforcement Administration (DEA) provides furniture for its American workers outside of ICASS. Officials in Washington said this is because DEA felt there was an implicit bias toward State personnel, both in terms of priority of distribution and furniture quality. Supplying its employees with furniture gave DEA greater control over both these aspects and better met its employees’ needs, according to the agency. Finally, some agencies choose to opt out of a service because they do not actually need the service at post. For example, the Foreign Agricultural Service processes payroll and travel services in the United States for American employees overseas, and the Department of Defense has no need to subscribe to personnel services for local staff in posts where it does not employ foreign service nationals. In addition, some agencies occupy offices provided by host country ministries and thus have no need for services such as nonresidential maintenance or local guard services. Despite the reasons agencies cited for self-providing support services, in our fieldwork, we found numerous duplicative administrative structures that appeared unnecessary. For example, State and USAID operate two separate warehouses on adjacent properties in Cairo, separated by a concrete wall (see fig. 5). Staff from both agencies said the two warehouses could be run more efficiently if they were consolidated, and staff from both agencies said they could take on the work of the other. In Dar es Salaam, Tanzania, USAID and State provide redundant services in 14 ICASS cost centers despite occupying buildings 30 feet apart on the newly built embassy compound. According to post officials, these redundant support structures include shipping and customs, cashiering, human resources, home fuel and water delivery, janitorial services, warehousing, housing/leasing services, motor vehicle operations and maintenance, procurement, travel services, budgeting and financial planning, contracting, and housing maintenance. Furthermore, according to the 2003 ICASS Global Database, USAID was not billed for information management services, International Voice Gateway access, payrolling, and personnel services for American and foreign national employees. Although we did not assess the rationale of each service USAID self-provides in Dar es Salaam, both USAID and State officials acknowledged that some of the services could be consolidated. Officials in Washington confirmed that the above examples are common occurrences worldwide.
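The Dakar vehicle-maintenance example above reduces to a back-of-the-envelope calculation, sketched below. The dollar figures come from the report; the assumption that the full $21,200 remains in the system follows the report's observation that ICASS staffing was not reduced after USAID withdrew.

```python
# Back-of-the-envelope check of the Dakar vehicle-maintenance example.

icass_fixed_cost = 21_200   # USAID's former annual ICASS charge for the service
local_vendor_cost = 7_400   # USAID's new contract with a nearby service station

# USAID's own books improve:
usaid_savings = icass_fixed_cost - local_vendor_cost   # $13,800 per year

# ICASS staffing was not reduced, so the $21,200 does not disappear; it is
# redistributed among the agencies still subscribing to the cost center.
shifted_to_other_agencies = icass_fixed_cost           # $21,200

# Government-wide, the old cost remains and a new vendor is also paid:
total_before = icass_fixed_cost                          # $21,200
total_after = icass_fixed_cost + local_vendor_cost       # $28,600
net_change_for_government = total_after - total_before   # +$7,400, as reported

print(f"USAID saves ${usaid_savings:,} per year")
print(f"Costs shifted to remaining subscribers: ${shifted_to_other_agencies:,}")
print(f"Net annual increase in total government costs: ${net_change_for_government:,}")
```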
Agencies seldom engage in a disciplined process for rationalizing decisions to opt out of services, which often limits posts’ ability to benefit from innovative managerial approaches to service delivery. ICASS is a voluntary system, and agencies are not required to justify their decisions for self-providing services they could obtain through ICASS. Although some agencies’ reasons for self-providing services outside the system may be supportable, we found that their decisions to do so are generally made without a disciplined business case based on analyses of alternatives, including how the alternatives affect the individual agency, other agencies at post, and overall government costs. We found that business cases were not made when agencies first opted out of ICASS services at the system’s inception, nor subsequently when agencies withdrew from services. The Foreign Affairs Handbook states that an agency must notify the post ICASS Council of its plans to withdraw from a service; however, that notification process is not intended to serve as a request for approval to withdraw from ICASS services. Rather, the notification is designed to ensure that all member agencies benefit from service options that are more cost-effective than existing ICASS services. Issues to be discussed in the notification include the reasons for withdrawing, where and how the agency will obtain the service, whether the council should consider the alternate service source for all member agencies, and any potential cost savings. However, agencies are not required to provide detailed analyses, such as cost-benefit analysis, for these notifications. Although we found that ICASS Councils enforce the notification requirement, they seldom examine agencies’ self-provided services for potential ways to improve ICASS services. In interviews at our case study posts, ICASS Council members said that agencies informed the ICASS Council before ending subscription to an ICASS service, as required, but frequently did not present information beyond the requirements. Furthermore, ICASS Councils at the posts we visited did not seek information on whether agencies’ service arrangements outside of ICASS could be adapted for use by the rest of the ICASS customers at post. Without such explanations and discussions, posts may have missed opportunities to improve existing ICASS services or adopt more cost-effective alternatives. ICASS seeks to encourage elimination of redundant administrative support services and to contain costs through innovative managerial approaches to service delivery that could lead to economies of scale. However, we found that few systematic efforts to consolidate duplicative administrative structures or streamline administrative processes have occurred at either the postwide or worldwide level. Of the eight posts we examined, Embassy Vienna has taken the most proactive approach to streamlining services. In recent years, the post has made numerous efforts to streamline services, including reducing the number of vehicle mechanics, revamping warehouse operations, changing processes for procuring administrative supplies, upgrading and changing utilities contractors, competitively sourcing the in-house upholstery operation, reducing the travel services contract to 20 hours per week and moving that office off the compound, and establishing a furniture pool in which each agency in Vienna voluntarily enrolled.
Embassy officials also reported services in 15 ICASS cost centers that could be wholly or partially outsourced. Other posts we examined also conducted efforts to consolidate services—for instance, Embassy Lima made changes in how it delivers telephone and some maintenance services and discovered a way to reduce electricity bills by 7 percent—but these efforts generally focused only on one or two services at the post, rather than a more systematic approach like that taken in Vienna. One area with great potential for consolidating and streamlining operations is in the planning for New Embassy Compounds (NEC). In response to the 1998 bombings of U.S. embassies in Dar es Salaam, Tanzania, and Nairobi, Kenya, State embarked on a $21 billion program to replace about 185 embassies and consulates. The size and cost of building an NEC is directly related to the number of staff set to occupy it and the type of work they will perform. According to State, per capita building costs average about $209,000 per office for space for top embassy management, $59,300 per office in controlled access (or classified) space, $28,100 per office in noncontrolled access (or nonclassified) space, and $4,900 per person for nonoffice space. In 1999, a law was passed requiring that all U.S. agencies working at posts slated for new construction be located on the new site unless they are granted a special waiver. Although in the past there were logistical reasons for agencies to self-provide support services “off compound,” justifications based on proximity have less weight as agencies become colocated on the new compounds. In April 2003, we reported that staffing projections for NECs were developed without a systematic or comprehensive rightsizing approach—assessments of the security environment; mission requirements; cost of operations; and potential rightsizing options, which would include consideration of consolidating and streamlining administrative support operations. Following our report, State implemented a formal process with criteria for developing, vetting, and certifying staffing projections for NECs. The new process requires posts to review all positions under Chief of Mission authority, including administrative support, even if they are not colocated in the embassy or consulate at the time projections are made. Considering the high costs associated with constructing new embassy compounds, the staffing projection process is an opportune time for posts to examine administrative platforms. In addition to reducing annual U.S. government expenditures for support services, consolidating and streamlining services at this stage would likely reduce the overall costs of embassy construction because such actions would result in reduced office space needs in the NEC. Four of our eight case study posts have either recently completed construction of an NEC (Embassies Dar es Salaam and Lima), begun constructing an NEC (Embassy Conakry), or are in the planning stage for an NEC (Embassy Dakar). Officials at the first three posts indicated there was no discussion, or they were unaware of discussions, of consolidating or streamlining administrative support services when developing staffing projections for the new compounds, although at the time their respective projections were due, no formal guidance or requirements existed for what posts should include. Nonetheless, these posts may have missed opportunities to minimize construction costs for their new compound. 
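As a rough illustration of how consolidating duplicative support positions before construction could lower NEC costs, the sketch below applies the per-capita construction costs cited above to a hypothetical consolidation scenario; the position counts and the scenario itself are invented.

```python
# Hypothetical scenario using the per-capita NEC construction costs cited above.

COST_PER_POSITION = {
    "top_management_office": 209_000,       # space for top embassy management
    "controlled_access_office": 59_300,     # classified office space
    "noncontrolled_access_office": 28_100,  # nonclassified office space
    "nonoffice": 4_900,                     # per person, nonoffice space
}

def construction_cost_avoided(positions_eliminated):
    """positions_eliminated maps space_type -> number of positions dropped."""
    return sum(COST_PER_POSITION[t] * n for t, n in positions_eliminated.items())

# Suppose consolidating duplicative warehousing and motor-pool operations lets
# a post drop 10 nonclassified desks and 5 nonoffice slots from its projection:
saved = construction_cost_avoided({
    "noncontrolled_access_office": 10,
    "nonoffice": 5,
})
print(f"One-time construction cost avoided: ${saved:,}")  # $305,500
```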
Furthermore, during our December 2003 site visit to Dakar, officials indicated that consolidation of duplicative administrative services has not been considered in planning for the new NEC despite the fact that most agencies are or will be colocated on the new compound. During our work, we found that deterrents to consolidating and streamlining operations outweighed the ICASS structures and tools designed to encourage innovative managerial reforms. Among these deterrents were the ICASS Councils’ lack of authority to fully manage ICASS resources, as well as service providers’ and customers’ focus on their own interests rather than the collective interests of the agencies at post. Further, tools such as the ICASS Working Capital Fund and a formal ICASS awards program did not work as envisioned and thus did not provide sufficient impetus for consolidation and streamlining efforts. The Foreign Affairs Handbook states that ICASS Councils are responsible for determining “which services are to be provided, by whom, and at what level,” and for evaluating cost and staffing alternatives and establishing budgets for posts’ ICASS operations. However, according to the Director of the ICASS Service Center, there are no “guidelines, rules, or regulations stating that ICASS Councils set staffing levels of the service provider.” Indeed, agency headquarters and field staff agreed that while they have input on whether an existing position is staffed, they do not have input on actually setting the number of ICASS positions at a post. As a result, the agency providing services determines the staffing complement needed to deliver the services. This seeming contradiction of ICASS Councils’ authorities was designed, in part, to minimize micromanagement by the local councils. Nonetheless, it reduces a council’s ability to streamline ICASS operations and manage the largest potential source for savings—labor costs. For example, an ICASS Council could decide to outsource an ICASS service, yet it would have no authority to adjust ICASS personnel to reflect the changed in-house labor needs for that service. Rather than the cooperation the developers of ICASS envisioned, both service providers and customer agency personnel focus primarily on their own interests. Reforms that reduce the costs of administrative support structures, whether streamlining practices or consolidating services to a single provider, should lead to reductions in staffing levels. However, we found that service providers are reluctant to implement reforms that would reduce ICASS staffing levels. Officials said that reforming administrative support operations requires significant time and effort that administrative officers at posts said they often do not have. Moreover, administrative officers at posts reported that there are few incentives to reduce ICASS costs, and that few rewards come to those making administrative structures more efficient. As a manager at one of our case study posts succinctly put it, “You don’t get ahead by firing people and making waves.” Customer agency personnel also focus on self-interests. Faced with budget constraints and rising ICASS costs, agencies have been forced to discover ways to reduce spending. In some cases, agencies’ first choice has been to opt out of ICASS services, either on orders from their respective Washington headquarters or because of decisions made locally. For example, to save money, the U.S.
Commercial Service in Vienna has withdrawn from numerous cost centers since 1998, including those for budgeting and fiscal (1998), information technology support (1999), administrative supply and vehicle maintenance (2001), International Voice Gateway telecommunications (2002), and American personnel services (2003). In other cases, agencies do try to work under the ICASS rubric; but because they cannot fully engage in resource management, they become frustrated and consider opting out. For example, in Dakar, USAID has proposed pilot testing a new method for delivering residential maintenance services, but it has been unsuccessful in gaining approval to conduct the pilot test. Although USAID has not yet made a decision to withdraw from that cost center, officials in Dakar expressed frustration over the high costs associated with residential maintenance and indicated that withdrawal from the service could be an option. In addition to agencies’ self-interests, personal interests of post personnel sometimes hinder reform efforts, particularly those related to streamlining processes. At Embassy Bern, post management reported suggesting that the American staff get local bank accounts and/or automatic teller cards, which they said would have the dual effect of reducing costs associated with check cashing—$17 per check in Bern—and allowing the current cashier to be trained for work in other services that are understaffed. Post officials stated, however, that customers resisted changing the service because it would require them to leave the embassy to cash a check. As a result, the post missed chances to reduce ICASS costs and improve service quality by cross-training staff. ICASS requires that post councils and service providers work together to choose the most cost-effective method for delivering services. This requirement was designed to ensure selection of the best methods for delivering services by examining all available competitive alternatives, including those developed or adopted by customers who self-provide services they could otherwise obtain through ICASS. In theory, this requirement would lead to the most efficient delivery of ICASS services because it would be in the interest of both customers and service providers to discover the least expensive method for delivering services at the levels needed by the post. However, as previously noted, post ICASS Councils have not systematically considered the service options available to them. Some post officials reported that program requirements demand too much of their time to conduct analyses showing how the embassy as a whole would benefit from new approaches to service delivery. Moreover, only a few agencies other than State have the capacity to actually provide services to other agencies, and only one agency other than State, USAID, actually does this on a very limited basis. The Working Capital Fund is a no-year fund that permits posts to retain a portion of their unobligated funds from one fiscal year to the next. This tool allows posts some fiscal flexibility by reducing the pressure to engage in wasteful end-of-year spending on items they may not need. It provides ICASS Councils with an opportunity to engage in long-term planning and have greater autonomy in allocating resources—factors that were expected to ultimately lead to greater efficiencies. 
Although some of the posts we visited did roll over some funds from one year to the next, post officials said they were afraid they would lose an equivalent amount of money in future years if they demonstrated they could save in the current year. As a result, posts prefer to spend their entire budget within the fiscal year it is disbursed. In technical comments on a draft of this report, the ICASS Executive Board stated that it was unaware of any case in which carried-over funds were withdrawn from a post because it “actively supports posts carefully stewarding and planning for the best use of funds.” However, the Executive Board did acknowledge that future funding targets could be adjusted downward for posts that carry over significant funds so that money could be redirected to other underfunded posts.

Customers and service providers stated that the program designed to reward individuals and posts for developing innovative approaches to service delivery does not overcome the disincentives previously described. The ICASS Service Center has three annual awards for contributions that lead to improved quality of service and/or greater efficiencies. The ICASS Outstanding Leadership Award recognizes contributions from individual post employees who best acted as agents for change to improve the quality of services and/or reduce costs at overseas posts. The ICASS Team Achievement Award goes to the one team worldwide that best improves service delivery and customer satisfaction and/or achieves cost savings. Finally, the Diplomatic Readiness Goal Sharing Award rewards one or two teams worldwide for establishing new goals that improve a post's capacity to achieve U.S. objectives. Despite the stated purposes of these awards, we found that they did not motivate overseas staff to seek innovative approaches for delivery of ICASS services. Results from a global survey conducted by the ICASS Service Center in 2002 showed that the rewards system did not meet service providers' and customers' expectations. Moreover, State and agency officials reported that the awards program does not motivate their staff to seek innovative methods for delivering administrative support services. Customers and providers agreed that the success of ICASS at a post was highly personality driven and that innovative reforms derive from individuals or teams interested in reducing costs or improving services, rather than from the potential to receive an award.

Recently, State and USAID initiated an effort that could greatly affect ICASS service delivery and costs, and State began three other initiatives that could have significant impacts on ICASS. Two of the efforts, a study of the potential for consolidating support services at four overseas posts and implementation of a tool to help rationalize service delivery, were generated specifically to make service delivery at posts more efficient. The remaining two approaches, centralizing administrative functions and sharing the costs of embassy construction, were generated outside of ICASS but could have significant ramifications for costs under the system. In November 2003, State and USAID reached an agreement to examine consolidation of duplicative administrative functions at four posts: Embassies Cairo, Dar es Salaam, Jakarta, and Phnom Penh.
The goal of the study was to “identif[y] and eliminat[e] wasteful and/or unnecessary duplication wherever…improved service and/or cost savings accrue to both agencies.” In May 2004, State and USAID issued their report stating that they found “significant advantages in consolidating motorpools, warehousing/property management, residential maintenance, and leasing at every post” and that in every case, consolidation would improve services and reduce costs. The report's recommendations are currently being implemented.

Another effort involves bringing embassies' administrative support services into compliance with quality management principles developed by the International Organization for Standardization (ISO). These principles, known as ISO 9000, were developed with the goal of ensuring that an organization's products or services satisfy a customer's quality requirements and comply with any regulations applicable to those products or services. The ISO 9000 principles, which apply to both for-profit and nonprofit organizations, stress customer focus; detailed documentation of processes, including specific and quantifiable performance criteria; and continuous tracking of performance and improvement in systems. Five embassies—Brussels, Cairo, London, Vienna, and Warsaw—were selected for a pilot study on applying ISO 9000 quality management principles and achieving ISO 9000 certification. We believe this certification has the potential to lead to significant cost reductions for ICASS because it would require service providers to focus on quality and timely service delivery and to eliminate inefficient practices. Moreover, it would require that ICASS service providers and ICASS Councils rationalize staffing levels—the primary costs associated with service delivery. State officials believe ISO 9000 certifications would, in the long term, provide an incentive for consolidating duplicative services because, as unit costs decline, agencies would become more amenable to subscribing to support services that were less costly than those they self-provide.

State also has begun an effort to centralize functions that are not location-specific to regional centers in the United States and abroad. Although this effort evolved from the rightsizing initiatives in The President's Management Agenda, it could also significantly reduce ICASS costs and consolidate delivery of ICASS services. State plans to begin this effort at posts within the Bureau of Western Hemisphere Affairs by relocating some administrative support activities to the Florida Regional Center in Fort Lauderdale. State estimates that up to 90 American direct-hire positions could be removed from overseas posts at a savings of as much as $140 million over the first 5 years of the effort. These cost savings would be passed directly to other agencies in the form of lower ICASS bills. State officials said that if this pilot program works well in that bureau, State would consider expanding the effort to other regions.

State and the Office of Management and Budget (OMB) have recently proposed a new program that would require agencies with overseas staff to help finance the cost of the embassy construction program. The Capital Security and Cost-Sharing Program, if implemented, would require agencies to share construction costs based on the per capita proportion of total overseas staff and the type of space (controlled access, noncontrolled access, or nonoffice) they need.
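To illustrate the mechanics of the proposed allocation, the following sketch computes per-agency construction charges from overseas positions and space types. The agency names, per-position rates, and staffing counts are hypothetical placeholders, not actual program figures.

```python
# Minimal sketch of the Capital Security and Cost-Sharing allocation
# described above: each agency is charged per overseas position, at a
# rate that depends on the type of space the position occupies.
# All rates and staffing counts below are hypothetical placeholders.

RATES = {  # hypothetical annual charge per position, in dollars
    "controlled_access": 30_000,
    "noncontrolled_access": 15_000,
    "nonoffice": 5_000,
}

staffing = {  # hypothetical overseas positions by agency and space type
    "Agency A": {"controlled_access": 40, "noncontrolled_access": 10, "nonoffice": 5},
    "Agency B": {"controlled_access": 12, "noncontrolled_access": 30, "nonoffice": 2},
}

def construction_share(positions: dict) -> int:
    """Sum the per-position rate for each space type an agency occupies."""
    return sum(RATES[space] * count for space, count in positions.items())

for agency, positions in staffing.items():
    print(f"{agency}: ${construction_share(positions):,}")
```

Under this approach, an agency's annual charge scales with both the number of positions it maintains overseas and the security profile of the space those positions occupy; the dollar figures that follow reflect this allocation applied to actual staffing and expenditure data.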
As a result, non-State agencies would be required to share about $61 million in costs in 2005, about $147 million in 2006, and about $233 million in 2007 (see table 1). Moreover, costs for constructing office space designated for ICASS service providers would be distributed among agencies on the basis of their respective proportions of total ICASS expenditures for the year. Agencies' ICASS-related contributions for sharing construction costs are estimated to total about $23 million in 2005, about $46 million in 2006, and about $68 million in 2007. By 2009, non-State agencies would share about one-third of the estimated annual $1.4 billion construction fund. These charges are in addition to fees that agencies pay under ICASS. OMB officials believe the new capital cost-sharing requirement will spur all agencies, including State, not only to scrutinize staffing for their program needs but also to consolidate duplicative administrative structures and develop creative ways to deliver support services. However, another possibility is that agencies could withdraw from ICASS services at increasing rates, as they have done since 2001, to compensate for their increased costs.

Based on the system's primary goals, ICASS is generally effective in providing quality administrative support services, although not to the extent that it could be if certain impediments were addressed. Global surveys and interviews at case study posts show that agencies generally approve of the quality of ICASS services, but because ICASS Councils at posts do not routinely track customer satisfaction, it is difficult to determine the extent to which customers are satisfied. We found that ICASS is simple and transparent enough for customers to understand the basic structures that govern service provision at post. Furthermore, virtually all personnel involved in setting policy or implementing ICASS at posts and in Washington agree that the system is more equitable than the previous cost-distribution mechanism for overseas administrative support services. However, it is difficult to determine the extent to which ICASS is meeting its stated strategic goals because those goals lack indicators to gauge progress. Moreover, posts rarely comply with the requirement to annually review service performance standards. Other obstacles to maximizing the system's effectiveness include limits to overseas staffs' decision-making authority, which can weaken ICASS's goal of “local empowerment.” Finally, we found that available training and informational resources that could enhance participants' knowledge and implementation of ICASS are underutilized.

Results of a global ICASS survey indicated that customers are generally satisfied with ICASS services. In 2002, the ICASS Service Center surveyed the ICASS Executive Board and Working Group members, State Department Regional Bureaus, service provider personnel, post ICASS Council members, Chiefs of Mission, and Deputy Chiefs of Mission. Responses showed that ICASS customers generally agreed that ICASS facilitates efforts to improve the quality of life and work at posts. Further, in 24 of 25 service areas, customers reported that the Service Center was generally effective in meeting its performance standards. However, the Service Center survey's response rate was about 42 percent, which limits the degree to which these results are generalizable.
ICASS customers at our case study posts typically confirmed the survey results, stating that they were generally satisfied with the overall quality of the ICASS services they receive. Some customers said ICASS provided better services than they could provide themselves. Others stressed that, although they had specific complaints about services, they were pleased with the overall service quality. We found that customer complaints about service quality were generally the result of unique cases or circumstances regarding a specific service at an individual post. Moreover, customers reported that in cases where they had complaints, they generally knew where to get solutions and that corrective measures were generally implemented quickly and to their satisfaction. Customers at our case study posts rarely cited poor service quality as the reason to consider withdrawing or to actually withdraw from a service.

Although we found that customers are generally satisfied with ICASS services, quantifying customer satisfaction is difficult because post ICASS Councils are not maximizing the use of annual local customer satisfaction surveys. We found that not all post ICASS Councils administer regular customer satisfaction surveys, as recommended by the ICASS Service Center. A global survey conducted by State in 2001 found that 32 percent of the 56 posts responding had not performed a customer satisfaction survey in at least 3 years. Although all but two of our case study posts reported administering at least one customer satisfaction survey in the last 3 years, only one post reported that the ICASS Council had input into the creation of its post's surveys. Most surveys were conducted unilaterally, either by the management team or a specific management office. Some customers said these surveys failed to accurately measure customer satisfaction because survey questions did not provide them with an opportunity to express their real concerns or because customers did not think the surveys would lead to service improvements. In addition, while State's global survey reports that 61 percent of respondents said service had improved, only 38 percent reported they had actually measured improvements.

Based on interviews with customers and service providers at post, we found that most understood the basic ICASS structures and that ICASS therefore generally meets its goal of being a simple and transparent system. Most customers demonstrated that they generally understood which administrative support services they received from ICASS and which services they did not receive because of their respective agencies' subscription choices. Customers also said they generally understood how bills were calculated and how costs were distributed at a basic operational level. Service providers generally understood which agencies had subscribed to the services. However, customers were largely unaware of their roles and responsibilities as post ICASS Council members and how to effectively utilize their authority to improve ICASS operations at posts. Some council members told us the ICASS Councils at their posts did not deal with issues with which they thought they should be dealing, such as how to contain and reduce costs. At three posts that held local ICASS Council meetings during our site visits, we found that discussions focused on routine ICASS tasks, such as reviewing an individual agency's billing questions, that would be better discussed in other forums.
For example, in Cairo, part of one ICASS Council meeting addressed why one agency's housing maintenance bill was so high. After some discussion, the council chairman and a financial specialist agreed to meet with the council member after the meeting to resolve the issue.

ICASS customers typically said that ICASS implementation is generally equitable, but we found that some potentially inequitable policies still exist. Customers agreed that the system was more equitable than its predecessor, the FAAS system. Customers from some agencies with whom we spoke said that under ICASS, they paid for few, if any, services they did not use. In addition, service providers told us that, under ICASS, they know which ICASS customers subscribed to their service and could ensure that customers generally received only the services for which they paid. Some service providers noted, however, that it was difficult to deny a nonsubscriber's request for help, and some said that they occasionally provided some services to nonsubscribers. Medical services staff, for example, said they were professionally obligated in some cases to serve embassy staff and dependents, whether or not they were signed up for medical services. ICASS customers who paid for these services did not complain about such cases. ICASS customers also said that ICASS costs and services were equitably allocated among the customers taking services at posts. Special arrangements whereby individual agencies received services at a different cost than other agencies at posts were common under FAAS. Such side deals are not allowed under ICASS, and we found no evidence of them occurring. ICASS permits service providers to directly charge any agency for using a service that can be easily identified as benefiting that specific agency, and some customers confirmed that this occurred. Nonetheless, agency staff at posts reported perceptions that service provision was not always equitable. Some customers told us they believed that State employees received preferential treatment in both the quality and priority of service because ICASS employees report directly to State management officers. Although we found no evidence to substantiate these allegations of systematic preferential treatment, the perception of bias affected customers' morale.

Other equity issues involve the methodology for distributing costs generated by temporary duty and regional ICASS staff. At the posts we visited, costs incurred by temporary duty personnel were typically distributed among all ICASS customer agencies at a post, rather than just the agency sponsoring the temporary duty staff. Although the ICASS Executive Board approved a new policy that details how posts may charge temporary duty staff for these incurred costs, fewer than 30 posts worldwide have implemented such a policy. In addition, some costs associated with ICASS staff providing regional services are borne solely by the “home” post. For example, the regional medical staff based in Vienna, Austria, serves several posts, yet the service costs are paid by agencies in Vienna. Agencies with offices in the Balkans but not in Vienna, such as USAID, receive benefits from these services. Some agency staff said such situations were inequitable since agencies were receiving benefits for which they did not pay.
In technical comments on a draft of this report, the ICASS Executive Board stated that this inequity is being addressed, citing four posts—Embassies London, Vienna, Pretoria, and Singapore—that have successfully petitioned to have costs for medical evacuation services distributed on other than a home post basis. However, although this is a costly service, it was only one of the many services provided by the regional medical units at these posts where the costs are borne solely by the home post customers.

A chief barrier to effective implementation of ICASS derives from the lack of measurable goals and performance indicators. ICASS is consistent with the approach set forth in the Government Performance and Results Act, which requires that most agencies (1) establish 5-year strategic plans, (2) set measurable performance goals in annual performance plans, and (3) annually report on performance toward achieving the performance goals. Annual performance plans should provide direct linkages between the agencies' strategic plans and their day-to-day activities. As previously stated, ICASS has four strategic goals, and although progress toward achieving them could be measured, the system's designers did not set clearly defined and measurable performance goals or specify how progress toward achieving those goals would be assessed. For example, the Foreign Affairs Handbook states that ICASS is to be an equitable system and defines “equity” as agencies paying “their fair share of post administrative costs based on usage.” However, the handbook does not provide specific, measurable indicators by which progress toward achieving the goal would be monitored and evaluated. Moreover, annual reviews of progress toward achieving ICASS strategic goals have not been conducted. As a result, it is difficult to state whether ICASS as a system is accomplishing what it set out to do: establish an efficient, fair, and effective cost-distribution system.

The Foreign Affairs Handbook also states that the ICASS Council and service providers at each post cooperate to set standards for administrative services so that service provider performance can be monitored. The handbook states that these performance standards should be specific, measurable, achievable, relevant, results-oriented, and time-specific and that performance should be evaluated each year. Although all posts we examined had adopted performance standards, providers' actual performance was not annually assessed against posts' ICASS performance standards. The handbook states that ICASS Councils should monitor service providers' “overall performance against agreed upon standards” and provide “an annual written assessment on the quality and responsiveness of the services furnished by the service provider to the customer, using the agreed upon service standards as the performance yardstick.” Councils should also routinely review standards to ensure that they remain relevant. ICASS Service Center officials said that few ICASS Councils either reviewed or updated standards on a routine basis, and we found that none of the eight posts we reviewed conducted full assessments of performance against the standards. At some posts, the service providers did conduct customer satisfaction surveys; however, these surveys do not assess whether service providers achieved the standards. We did, however, find that some of our posts had reviewed the relevance of their standards in recent years.
Embassies Vienna and Dar es Salaam last updated their standards in the past year, while three others last updated standards in 2001 and one in 2000. During our fieldwork, Embassies Conakry and Lima indicated they had begun efforts to revise their standards, which had not been updated in several years.

A further impediment to maximizing ICASS's effectiveness is that local empowerment, granted to allow posts the ability to manage their resources through the ICASS Councils' decision-making authority, has not been fully exercised. We observed that decisions made by ICASS authorities were at times subordinated to decisions by other authorities. We also found that, although the system was designed to give local ICASS Councils a wide range of responsibilities to ensure cost-effective use of resources, many council representatives were reluctant to actively participate in ICASS decision making.

The ICASS governance structure at times comes into conflict with other authorities, resulting in a loss of its power to make decisions. For example, one U.S. ambassador required that all agencies at post that wanted to reside in post-owned housing would also have to participate in the furniture pool. Discussions at two ICASS Executive Board meetings indicate that agencies were concerned because they would be required to subscribe to a voluntary ICASS service—the furniture pool—to receive another service—embassy housing—that had never come under the ICASS structure. Moreover, agencies were anxious that this action could set a precedent for State to link other voluntary ICASS services to either the two mandatory ICASS services (see app. II) or other non-ICASS services. A State official said that on appeal, the ICASS Executive Board voted to overrule the ambassador, but the board's chairman said that as State's representative to the board, he would advise the Secretary to support the ambassador.

In addition, agency representatives reported that post management can be unwilling to allow councils to explore alternatives for service delivery. For example, post management at one of our case study posts was reluctant to support an agency's feasibility study on potential cost-efficient options to deliver services, citing security concerns. This unwillingness discouraged the customer agency from seeking innovative ways to reduce ICASS costs and improve services. Agency officials in Washington agreed with our observation that council members who make proposals often face an unreceptive environment. As a result, few council members feel motivated to seek reforms in service delivery.

Officials from both State and customer agencies commented that local empowerment is sometimes not fully exercised because council members feel that the big issues are out of the post's control. For example, the methodologies for determining how ICASS services will be charged are defined at the Washington level among agencies, and some officials said there is very little flexibility for posts to adapt them to local needs. In addition, overseas employees, including State personnel, receive demands from, or can be overruled by, Washington headquarters, which limits their autonomy to make decisions that reflect the needs and circumstances at post. For example, of the 467 instances in which agencies withdrew from services between 2000 and 2002, agencies reported that about 24 percent of the time it was because their respective headquarters directed them to do so.
Officials at the posts we examined stated that headquarters also frequently pressured them to reduce costs without explicitly directing them to withdraw from specific services.

Another barrier to local empowerment is the reluctance of some agency representatives to assume ICASS responsibilities. In addition to the organizational disincentives discussed in the previous section, some post staff indicated that the amount of time it takes to participate more fully in ICASS would compete with the time available for their primary programmatic responsibilities. For example, some agency representatives have regional responsibilities that require spending much of their time at other posts, which limits their time to become involved in ICASS decisions. In addition, some agency representatives expressed a lack of interest in getting involved. As a result, many agency representatives participate in the decision-making process only by reviewing their agency's ICASS bill.

Numerous sources of information dedicated to ICASS policies and program guidance—such as Washington- and post-based training and a Web site maintained by the ICASS Service Center—exist for customers and service providers. However, we found that few individuals make full use of these resources to gain the knowledge base that would help them implement ICASS most effectively. The failure to make full use of information resources, particularly training, limits local ICASS Council effectiveness because representatives have varying degrees of understanding and acceptance of their roles and responsibilities in council decision making and of the mechanisms by which ICASS operates. Moreover, the staff primarily responsible for day-to-day ICASS operations seldom received detailed training on the system.

The Foreign Affairs Training Center provides two ICASS training courses for State and other agency staff. The “Executive Seminar” provides agency representatives with a general understanding of ICASS and their roles and responsibilities, and “Working with ICASS” offers more in-depth training targeted at both service providers who make daily use of the system and customers who want more detailed knowledge of how the ICASS system works. All of State's management officers are required to receive at least some ICASS training prior to deployment overseas. However, most non-State employees are not required to take either of the training classes. In fact, only five customer agencies—the Defense Security Cooperation Agency, the Foreign Agricultural Service, the U.S. Commercial Service, USAID, and DEA—reported requiring that at least some of their overseas officers receive ICASS training prior to an overseas assignment, and staff from the first four of these agencies were the most consistently active customer representatives on the ICASS Councils at the posts we visited. However, we found that the representatives from most other agencies had not taken or been provided the opportunity to take the recommended training and, as a result, were required to learn their duties while “on the job.” Most agency personnel responsible for overseeing their agencies' participation spend only a small amount of their time dealing with ICASS issues—sometimes as little as 2 or 3 hours per month. ICASS Service Center officials expressed concern that personnel going overseas without the benefit of training would need significantly more time to learn how to work within the program's sphere of activities than those who had received training prior to arriving at post.
The ICASS Service Center also developed a post-specific curriculum. This training is available to agency representatives, local ICASS staff, and other officials who might not otherwise get ICASS training. The training is centered on circumstances specific to the post so that staff may gain a better understanding of how to apply ICASS principles and procedures. Service providers at posts that had received this training felt that training local Foreign National employees is important because the local staff are responsible for the system's day-to-day operations at post, and they would likely continue to be employed at the post long after the American employees rotated to other posts. In Lima, which had post-dedicated training just prior to our site visit, we found both providers and customers were energized to put what they had learned into practice. The ICASS Service Center confirmed our observation, saying that Foreign National employees seemed especially appreciative of the opportunity to receive this training.

In addition to the training it offers, the ICASS Service Center maintains a Web site, www.icass.gov, which is a source of historical and current information on policy guidance, procedures, best practices, training opportunities, staff contacts, budgets, and meeting minutes of the ICASS Executive Board and the Washington ICASS Working Group. We found this site to be a useful source of information, yet many overseas staff, both service providers and customers, were unaware of this resource despite its being advertised through numerous media—cables, listservs, chat rooms, and departmental notices, among others.

The U.S. Government annually spends nearly $1 billion and employs approximately 18,000 Americans and foreign nationals to provide administrative support services for embassies and consulates. In the current fiscal environment, it is essential that all U.S. agencies look for ways to contain spending. ICASS was designed, in part, to contain the costs of overseas administrative services. However, the system has not achieved that goal because it has not led posts to eliminate unnecessary duplication or to reengineer the processes by which they deliver administrative support services. Although there are many supportable reasons for an agency to self-provide services, we saw many instances where decisions to do so did not appear to be based on valid business cases or other factors that led to clearly demonstrated benefits. We also saw few instances of posts systematically reviewing service delivery or searching for alternatives that could make service delivery less costly, such as contracting for services with local vendors, placing greater reliance on regionally supplied services, making better use of technology, and systematically considering “best practices” developed and implemented by others. Consolidation and streamlining did not occur because implementing innovative reforms required great personal effort to effect a change in the status quo. As a result, U.S. taxpayers are supporting costly and unnecessarily duplicative administrative structures at overseas posts. Moreover, deficiencies in the ICASS mechanism itself inhibit service delivery efficiency.
Despite the existence of at least three types of available training, posts' agency heads and ICASS Council representatives frequently do not know their roles, responsibilities, and authorities as decision makers and operators of the system, and staff providing service frequently have not received levels of training that would allow them to truly understand and run the system more efficiently. In addition, customers have few mechanisms by which they can hold service providers accountable, and those that are available have often been ineffectively implemented.

To ensure more efficient delivery of embassy administrative support services, we recommend that the ICASS Executive Board take the following five actions:

• The board should aggressively pursue the elimination of duplicative administrative support structures at U.S. overseas facilities with the goal of limiting each service to the one provider that local ICASS Councils have determined can provide the best quality service at the lowest possible price. This effort should include encouraging agencies not subscribing to ICASS services to submit detailed explanations (business cases) of how they will fulfill these service needs and at what cost, so that potential benefits can be shared by all ICASS customers at post, and ensuring that the consolidation and streamlining of support services are key factors when posts develop staffing projections for new embassy compounds, as required by State.

• The board should work to contain costs by reengineering administrative processes and seeking innovative managerial approaches through competitive sourcing, regionalization of services, improved technology, and adoption of other best practices developed by agencies and other posts.

• The board should also consider developing independent teams to review ICASS operations at overseas posts and to recommend and implement reforms that reduce duplicative administrative structures and contain costs.

• The board should develop strategies to improve the system's accountability, which could include clearly defining the long- and near-term goals and objectives of ICASS, developing measurable indicators to track performance, and presenting annual reports on the progress toward achieving the goals and objectives; ensuring that post ICASS Councils annually evaluate service provider performance and customer satisfaction and annually certify that performance standards are relevant, specific, and accurately reflect customer needs; and requiring that post ICASS Councils annually certify that they have sought opportunities to streamline and consolidate ICASS services by implementing best practices developed either by local staff or other posts.

• The board should ensure that all personnel responsible for implementing ICASS operations at overseas posts receive detailed training on their roles, responsibilities, and authorities, including detailed customer service and other technical training for Americans and foreign nationals responsible for the actual delivery of services.

We are making our recommendations to the ICASS Executive Board because ICASS is an interagency operation that relies on the collective input of affected agencies. As such, the Executive Board must approve decisions that affect ICASS policies and operations.

We received written comments on a draft of this report from the ICASS Executive Board and nine agencies that are primary participants in ICASS—the U.S. Departments of Agriculture, Commerce, Defense, Homeland Security, Justice, State, and the Treasury; the U.S.
Agency for International Development; and the U.S. Peace Corps. Their comments, along with our responses to specific points, are reprinted in appendixes IV-XIII. The board and agencies also provided technical comments, which we have incorporated throughout the report where appropriate.

The ICASS Executive Board agreed with the report. The board indicated that it met several times in recent months and has decided to take a more active role in the overall management of the ICASS system. It said it is trying to eliminate duplicative administrative support structures where possible and cited a recent State/USAID Shared Services Study, which ICASS partially funded, that reviewed support services at several posts and concluded that consolidating some services could save costs and improve quality. The board also endorsed efforts to reengineer business processes, citing State Department efforts to centralize certain support operations at regional support centers in Bangkok, Thailand; Paris, France; Frankfurt, Germany; Fort Lauderdale, Florida; and Charleston, South Carolina. The board also agreed that strategies must be developed to improve ICASS accountability. Finally, the board noted that cost management is a priority.

The U.S. Departments of Agriculture, Commerce, Defense, Homeland Security, Justice, State, and the Treasury; the U.S. Agency for International Development; and the U.S. Peace Corps generally agreed with our recommendations. State stressed the importance of eliminating wasteful duplication. In addition, State defended the cost structure of ICASS and criticized other agencies for resisting actions such as investments in technology, which State believes could reduce costs. In contrast, comments from the other agencies focused on the high costs of ICASS support services, saying that ICASS had failed to contain costs. These agencies generally believed that our draft report was too focused on duplication and did not place sufficient emphasis on the need to contain costs. They argued that the voluntary nature of ICASS needed to be retained so that each agency can determine what support services it requires and how to obtain them in the most cost-effective way. In addition, the agencies provided their perspectives on a variety of ICASS issues, including training, system fairness, and transparency. Based on these comments, we modified our report to clarify that elimination of duplication and the containment of costs are equally important. We believe that implementation of our recommendations will help the executive branch achieve economies of scale by reducing duplication and contain costs by focusing on streamlining business practices. We generally support the voluntary nature of ICASS participation because agency needs differ. We also understand that some agencies choose not to use some ICASS services because they believe they can obtain these services elsewhere at less cost. However, we believe such decisions should be supported with strong business cases.

We are sending copies of this report to interested congressional committees. We are also sending copies of this report to all current members of the ICASS Executive Board, including the Secretaries of Agriculture, Commerce, Defense, Homeland Security, State, the Treasury, and Veterans Affairs; the Attorney General; the Administrator for the U.S. Agency for International Development; the Commissioner of the Social Security Administration; the Director of the U.S.
Peace Corps; the Director of the Office of Management and Budget; and the Librarian of Congress. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Another GAO contact and staff acknowledgments are listed in appendix XIV.

To respond to both objectives of our review—whether the International Cooperative Administrative Support Services (ICASS) system has led to efficient delivery of administrative services and whether ICASS is an effective mechanism for providing quality services—we conducted fieldwork and reviewed documentation in Washington, D.C., and at eight posts worldwide. In Washington, we reviewed ICASS policies and procedures outlined in the Foreign Affairs Handbook; reviewed documents and interviewed Department of State (State) officials from the Bureaus of Administration, Medical Services, and Overseas Buildings Operations, six geographic bureaus, the Offices of Management Policy and Rightsizing, and the ICASS Service Center; attended meetings of the ICASS Executive Board and the ICASS Working Group; participated in ICASS training at the Foreign Affairs Training Center in Arlington, Virginia; and reviewed documents and interviewed headquarters officials from the U.S. Departments of Agriculture, Commerce, Defense, Homeland Security, Justice, and the Treasury, as well as from the Office of Management and Budget, the U.S. Peace Corps, and the U.S. Agency for International Development (USAID).

In addition, we conducted data analyses using data from the ICASS Global Database, which was developed and is maintained by the ICASS Service Center and contains information for each ICASS cost center at each overseas post on service subscription, workloads, billing, service withdrawal, and other information necessary for operating the system. To assess the reliability of the ICASS data, we (1) performed electronic testing for errors in accuracy and completeness, (2) discussed data reliability issues with agency officials knowledgeable about the data, and (3) reviewed relevant reports from the State Office of Inspector General and GAO and financial audits of the ICASS system. Although we found some areas of concern dealing with information security, we determined that the data were sufficiently reliable for the purposes of this report. Data showing estimates for future costs under the Capital Security Cost-Sharing Program were provided in a briefing by staff from the Bureau of Overseas Buildings Operations. The estimate for the average annual cost of maintaining American personnel overseas was developed by State's Office of Rightsizing.

To assess how well ICASS operates at posts, we visited seven posts and held telephone interviews with an eighth post. Selection of case study posts was based on a variety of factors, including geographic spread; a range in the size of posts; potential for reform; levels of service duplication; input from the ICASS Service Center, State's geographic bureaus, and customer agencies; and posts' availability. Based on these criteria, we collected information from the U.S. embassies in Bern, Switzerland; Cairo, Egypt; Conakry, Guinea; Dakar, Senegal; Dar es Salaam, Tanzania; Lima, Peru; San Jose, Costa Rica; and Vienna, Austria. In Vienna, we also conducted interviews with the U.S. Mission to the Organization for Security and Cooperation in Europe and with the U.S.
Mission to the United Nations Agencies in Vienna. Due to national elections that corresponded with our scheduled work in Guinea, at the request of the Ambassador, we conducted telephone interviews with Embassy Conakry staff rather than traveling to the post.

For our case study posts, we collected data and documentation from, and conducted interviews with, embassy personnel involved in ICASS, including Ambassadors and Charges d'Affaires, Deputy Chiefs of Mission, State management officers, ICASS staff, and customer agency managers and staff who work with ICASS. These interviews covered the following topics:

• the role of the ICASS Council and its decision-making process;
• mechanisms for ensuring quality services, including evaluating service provider performance and customer satisfaction;
• the degree to which customers understand ICASS goals and structures, and whether they agree that service quality matches ICASS costs;
• the level of ICASS training among council members and service providers, including foreign nationals;
• the management burden associated with ICASS, and its pros and cons;
• the effect of the changing nature of agencies' staffing (including State's) on ICASS costs and quality of service;
• the effect of temporary duty personnel and regional staffing on ICASS, and whether agencies pay the full costs associated with their presence at post;
• the cost centers to which each customer agency subscribes;
• the cost centers to which each agency does not subscribe, the basis for not subscribing to those services, and how agencies provide for administrative support services to which they do not subscribe under ICASS;
• the effect that opting out of services has on other agencies; and
• the degree to which the ICASS Council has considered new approaches to providing ICASS services, including streamlining processes and adopting best practices developed by agencies at posts or by other posts in the region.

Also at these overseas posts, we collected and analyzed information on the costs associated with agencies owning and operating motor vehicle fleets independent of ICASS and self-providing residential furniture for American direct-hire staff. In addition, we inspected warehouses and other support operation facilities and attended ICASS Council meetings when those meetings coincided with our visit. We conducted our work between April 2003 and August 2004 in accordance with generally accepted government auditing standards.

Customers receive ICASS services by subscribing to “cost centers,” which are groups of similar services bundled into larger categories. “Workload factors” for each cost center are the primary bases by which customers are charged for services. These methodologies, developed in Washington, D.C., are applied to unit cost factors specific to posts to determine the actual fee an agency owes for services it uses. The unit costs are based on the salaries and benefits of service providers' employees, who include both the staff actually delivering or providing the services as well as the direct-hire American managers overseeing the services; the furniture, equipment, and operating expenses necessary for delivering the services; and the total number of people serviced or the amount of service provided by the employees associated with specific cost centers. Overall, ICASS is implemented in one of two manners. An ICASS Standard post breaks the services into 32 cost centers, while an ICASS Lite post consolidates the number of cost centers into 16 groups (see table 2).
Generally speaking, ICASS Lite tends to be used at small posts because the management burden is lower than at Standard posts. ICASS Standard, however, allows customers greater flexibility in choosing which services they will take, helping them avoid paying for services they do not receive.

Agencies are required to subscribe to two cost centers—the Basic Package and the Community Liaison Office (CLO). The Basic Package cost center covers services provided by State from which agencies would benefit whether or not they choose to use them. Included in the Basic Package are diplomatic accreditation to the host government; licenses and special permits; maintenance of the Emergency Evacuation Plan; reciprocity issues with the host government on items such as car imports, spousal employment, and reimbursement for value-added taxes; identification cards, accounts receivable and payable, and other check-in/check-out procedures; welcoming kits for newly posted or temporary duty employees; maintenance of post reports; determination of exchange rates; International School accreditation surveys, grant management, and Suspense Deposit Abroad accounting and voucher processing; support for employee recreation centers and commissary boards; and support structures for visits by Very Important Persons. These items should be considered standard services at all posts, but individual posts may add to the list. The CLO provides services to help integrate employees and their dependents into the surrounding community. For example, the CLO provides welcoming materials, assists family members with employment and educational opportunities, and organizes cultural activities, among many other services.

The overhead cost center is designed to reflect costs that are not easily confined to another cost center but are essential administrative activities. Examples of overhead costs include ICASS awards, post office box rentals, and postage. Overhead costs are distributed on the basis of each agency's percentage of the net cost of all services it receives in the remaining cost centers. There are also other costs that agencies must pay that are not considered cost centers, per se. For example, ICASS personnel are both service providers and service customers. As such, the ICASS “office” is treated as any other customer or entity at post in terms of generating costs for the services it consumes. However, this “office” is not billed because the services it consumes are consumed in the course of providing services to the other customers. For example, when a vehicle mechanic drives an ICASS motor pool vehicle to a parts supplier, he generates costs in the direct vehicle operations cost center. These costs, which include overhead, are distributed among customers on the basis of the proportion of total costs of services and overhead that each agency generates in a given cost center. In addition, costs associated with operations of the ICASS Service Center are distributed to agencies' headquarters for general support given to posts worldwide, or to specific posts for services that are uniquely provided to them (e.g., post-dedicated ICASS training).

Table 3 shows the number of posts and ICASS participation rates for agencies with direct-hire staff assigned to 10 or more posts in any year from 1998 through 2003. The participation rate equals the average rate of cost center subscription for each agency at all posts. The analysis excluded State.
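To make the billing and participation mechanics concrete, the following sketch shows how a post might compute an agency's service charges from workload counts and post-specific unit costs, prorate overhead by each agency's share of net service costs, and derive a simple participation rate. All cost centers, unit costs, workload counts, and subscription figures are hypothetical placeholders rather than actual ICASS data.

```python
# Minimal sketch of ICASS-style billing: fee = unit cost x workload,
# with overhead prorated by each agency's share of net service costs.
# Cost centers, unit costs, and workloads are hypothetical placeholders.

unit_costs = {  # hypothetical post-specific cost per workload unit
    "motor_pool": 1_200,               # per vehicle serviced
    "residential_maintenance": 3_500,  # per residence maintained
}

workloads = {  # hypothetical subscriptions and workload counts
    "Agency A": {"motor_pool": 10, "residential_maintenance": 4},
    "Agency B": {"motor_pool": 3},  # subscribes to motor pool only
}

overhead_pool = 20_000  # hypothetical overhead (awards, postage, etc.)

def service_bill(agency_workload: dict) -> int:
    """Charge each subscribed cost center at its unit cost."""
    return sum(unit_costs[cc] * units for cc, units in agency_workload.items())

bills = {agency: service_bill(wl) for agency, wl in workloads.items()}
total_net = sum(bills.values())

for agency, net in bills.items():
    overhead = overhead_pool * net / total_net  # prorated by net-cost share
    print(f"{agency}: services ${net:,}, overhead ${overhead:,.0f}")

# Participation rate at one hypothetical ICASS Standard post:
# share of the 32 available cost centers to which an agency subscribes.
AVAILABLE_COST_CENTERS = 32
subscribed = {"Agency A": 28, "Agency B": 17}
for agency, n in subscribed.items():
    print(f"{agency} participation: {n / AVAILABLE_COST_CENTERS:.0%}")
```

The proportional logic shown here mirrors the distribution bases described above; actual ICASS billing involves many more cost centers and workload factors.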
Participation rates for USAID reflect changes in agency coding, such that the rates for 1998 represent all of USAID (code 7200.0), while the rates for 1999-2003 represent only USAID Operating Expenses funds (code 7203.1). We acknowledge that there are services for which an agency has no need and to which it thus does not subscribe. For example, agencies that do not employ local staff have no need to subscribe to Locally Employed Staff Personnel Services. Because we could not determine agencies' need for services, we were required to consider all cost centers as available for subscription. As a result, our analysis simply states the average rate at which agencies subscribe to available cost centers. However, we were able to control for cases in which agencies are located in facilities outside of State-owned or State-leased facilities. Examples could include instances when agencies own office facilities, as with some USAID and Peace Corps offices, and when agency personnel are located at host country ministries, among others. Agency personnel reported that in such cases, they neither have the need for some ICASS services, nor would the embassy provide these services. Specifically, these services would include (1) nonresidential local guard programs, (2) government owned/long-term leased residential building operations, (3) government owned/long-term leased nonresidential building operations, (4) short-term leased residential building operations, and (5) short-term leased nonresidential building operations. Therefore, in cases where agencies were not charged for these five services at a post, we removed them from the list of “available” cost centers and recalculated the rate of participation for those agencies.

The following are GAO's comments on the Department of State's letter dated July 6, 2004.

1. We have modified our report to cite these legislative authorities.

2. We disagree with State's assertion that ICASS goals do not include the containment or reduction of overall governmental costs. The Foreign Affairs Handbook clearly states that posts form interagency ICASS councils “to eliminate waste, inefficiency and redundancy” (6 FAH-5 H-102.1), and that “ICASS provides the tools and incentives to achieve significant reductions in support costs under the concept of a U.S. Government that ‘works better and costs less’” (6 FAH-5 H-103.2). The handbook also states that “all mission agencies participate in the management and delivery of services, as well as achievement of economies of scale and elimination of costly duplication” (6 FAH-5 H-103.1 (a)), and that “Councils should not be reluctant to challenge regulations which inhibit streamlining and cost reduction” (6 FAH-5 H-103.1 (b)). The handbook further states that “the Council and providers together share the responsibility and accountability for achieving the most cost efficient and streamlined quality administrative services at post” (6 FAH-5, H-307.1). The principle of voluntary service subscription serves multiple purposes, including ensuring that agencies receive and pay for only the services they need and providing flexibility for agencies when they need services they cannot conveniently receive through ICASS, among others. In addition, the principle was designed as the mechanism whereby agencies could use market forces to reduce ICASS costs and improve services.
Customers' ability to opt out of services would provide the incentive for customers and providers alike to cooperate in discovering the most cost-effective means for service delivery. Moreover, competitive alternatives that are advantageous to all agencies at a post were to be shared by the agency that discovered the alternatives and reviewed by the ICASS Council and service providers for their potential adoption postwide. The handbook states, “Rather than simply withdrawing from an ICASS service to take advantage of better or cheaper services, agencies should bring the alternative to the attention of the full Council for consideration by all agencies. Factors such as the effect on career staffs or economies of scale can then be considered mission-wide” (6 FAH-5 H-307.6 (a)).

3. We said the system is simple enough that most customers understood the basic structures and tenets of ICASS. We believe that if overseas staff receive training appropriate to their role in ICASS, the current system is simple enough for them to operate. We feel the complexity of the system is appropriate for balancing the somewhat contradictory principles of cost, equity, and simplicity. A less complex system may be less costly to operate, but it may also be less equitable because customers may pay for services they do not actually use. A system more closely resembling cost accounting would be more equitable in the sense that customers pay only for the services they actually use, but it would also be more costly because it would require higher workload burdens and more specialized skills for the employees who operate the system. Our discussion of the new temporary duty personnel policy makes no assertion about why so few posts have chosen to adopt it. As of August 2004, three of our eight case study posts have adopted the new policy: Embassies Cairo, Dar es Salaam, and Lima. Those that have not adopted the policy stated that the long-term temporary duty personnel they receive are so few that they do not create a burden.

4. On March 28, 2003, USAID notified the Dakar ICASS Council that effective October 1, 2003, USAID would no longer receive vehicle maintenance services. During our fieldwork in Dakar in December 2003, post officials stated they were unaware whether a reassessment of staffing needs related to changing workload requirements had been conducted. In July 2004, a post official confirmed the other post officials' earlier statements that no reassessment of staffing needs was made at the time USAID notified the council of its intention to withdraw, although one vehicle mechanic was temporarily reassigned to service generators to fill an immediate need in facilities maintenance. The official also confirmed that USAID has not yet disclosed to the council the savings it expected to achieve or has actually realized under its outsourcing arrangement. The official did confirm that 10 vehicles have been added to the vehicle maintenance service, but he did not know when those vehicles were added in relation to USAID's withdrawal. We believe that the addition of these 10 vehicles, whenever they were added, does not detract from our argument that overall government costs rose as a result of (1) the failure to reassess how changing workload requirements affected staffing needs at the time USAID announced it would withdraw from the service and (2) the failure by all at post to assess whether USAID's competitive alternative could result in reduced costs for all agencies at post.
5. We do not blame State for ICASS cost increases from 2001 to 2003. According to data and officials from the ICASS Service Center, there are three primary reasons why costs increased between 2001 and 2003: State's hiring under the Diplomatic Readiness Initiative, infrastructure improvements, and wage and price increases. Under ICASS, salaries and benefits for State officers who administer ICASS at overseas posts, such as those in the General Services Offices, are shared among multiple agencies. When services are added to ICASS, the participants in these services share the associated costs. State is correct that there have been some services added to ICASS for which State had previously paid, including $20 million annually for mail pouching services and $15 million for computer system and cabling upgrades. Adding these services to ICASS resulted in increased cost to non-State agencies, although not necessarily to the government as a whole. However, we did not intend to imply these services were added without the consent of agencies on the ICASS Executive Board.

6. We believe that there may be other legitimate reasons for not enrolling in ICASS services, including logistical considerations (i.e., an agency's proximity to the service provider); whether an agency's headquarters provides the service; or whether the agency even needs the service, among others. We generally support the voluntary nature of ICASS but believe that detailed, objective analyses are needed to assess whether an agency should obtain services from ICASS.

7. Although customers at posts we visited indicated they were generally satisfied with the overall quality of ICASS services, they were not satisfied with the cost. Comments on a draft of this report from many non-State agencies demonstrate that they are not satisfied with the costs of ICASS services. (See apps. VI-XIII.)

8. We believe that the Foreign Affairs Handbook grants ICASS Councils more authority over ICASS resources than simply approving annual ICASS budgets. The handbook states the following: “Customer agencies, as stakeholders with a greater voice in the management of shared administrative services, are empowered to collectively seek innovative ways to reduce costs and improve services. To these ends, the Council may streamline administrative processes or reshape the administrative workforce. Decisions might include downsizing, delayering and flattening of the staff organization; use of qualified local hire specialists in lieu of higher cost U.S. based staff; and alternative agency or contract service providers. The Councils may also consider use of the services of U.S. Embassies and Agencies in other countries where costs are lower” (6 FAH-5 H-307.2 (a)). The handbook further states these decisions should be made in close consultation with service providers “in light of management or cost studies developed by or at the request of the Council” and that, “to facilitate this process, the service provider will be expected to provide the Council financial breakdowns, staffing patterns, and operational studies as requested” (6 FAH-5 H-307.2 (b)). We believe these clauses provide customer agencies with the authority to review how ICASS services are delivered, including whether services are provided in-house or from an external source, and the number and type of embassy staff needed.
However, the handbook also states councils “should avoid micromanagement of the service provider activities” because the councils are not intended to serve as supervisors of “the administrative service provider in the day-to-day details of operations” (6 FAH-5 H-307.3 (b)). A State official with the ICASS Service Center said micromanagement of service provider personnel is strongly discouraged and that councils generally can influence staffing only when new positions are being added. That is, examinations of how and by whom services are provided are considered micromanagement on the part of the council and are discouraged, as we demonstrated with the USAID Dakar proposal to pilot test a new method for providing residential maintenance. Thus, based on the handbook, State administrative officers’ management practices, and illustrations such as the one previously mentioned, we concluded that ICASS councils have little ability to fully manage ICASS resources. 9. We agree with the principles behind the working capital fund and would encourage posts to make greater use of it. Our purpose was only to report the perception among post personnel that they would lose funding in the long run if they made frequent use of the fund. We did not conduct analyses to determine whether that belief was based on verifiable evidence. 10. We modified the report text, where appropriate, to incorporate this additional information and suggested wording. 11. We made no comments on the merits of moving to a unified housing and furniture pool. We did not intend to criticize or challenge the ambassador’s authority as the Chief of Mission or as the President’s representative. We have revised the section to clarify that we are not expressing an opinion on Chief of Mission authority; rather, we are saying that differing authorities can overrule ICASS decisions and that both customers and providers at posts reported that these instances can negatively affect the morale of some ICASS participants. 12. We agree that centralization of certain functions is necessary to instill order in the system. Our intended point is that some providers and customers perceive centralization as limiting post flexibility and, as such, some post officials question the degree to which they are truly empowered to operate the system in the best manner for the post. The following is GAO’s comment on the U.S. Agency for International Development’s letter dated June 28, 2004. The agency also provided technical comments that were incorporated into the text, as appropriate. 1. We did not intend to suggest that duplication was the primary contributor to inefficient operations. We have made several modifications to the report to emphasize that improved business practices and reduction in duplication are equally important. Our recommendations address both the elimination of unnecessary duplication and the reengineering of administrative processes to contain costs. We acknowledge in the report that agencies have many reasons for self-providing services and that some are justifiable. The following are GAO’s comments on the Department of Agriculture’s letter dated July 9, 2004. The agency also provided technical comments that were incorporated into the text, as appropriate. 1. We agree that there is a relationship between the efficiency and costs of ICASS services and the existence of duplicative administrative services at some posts.
This is why our recommendations address the elimination of unnecessary duplication and the reengineering of administrative processes. We believe that these actions together can improve the efficiency of ICASS services and help contain costs. We did not intend to suggest that duplication was the primary contributor to inefficient operations. We have made several modifications to the report to emphasize that improved business practices and reduction in duplication are equally important. 2. We agree that opting out of a service does not always result in higher overall costs to the government. However, when an agency opts out and obtains a service outside of ICASS, there is potential for unnecessary duplication, and opportunities to achieve economies of scale may be lost. Moreover, when an agency opts out and ICASS does not take action to adjust costs, such as reducing support staff to reflect the reduced workload, the operation becomes less efficient and more costly to the remaining users. 3. We generally support the voluntary nature of the ICASS program because agencies’ needs differ. Therefore, we did not intend to suggest that agencies should be forced to use ICASS services. However, we believe that there are opportunities to achieve more economies of scale and that there are instances of unnecessary and wasteful duplication. Our recommendations are designed to reduce duplication where this would be in the best interests of the government and to encourage agencies to prepare business cases to support decisions to obtain services from outside of ICASS. Such business cases could demonstrate that there are financial and other benefits of obtaining services outside of ICASS. 4. Individual agency decisions regarding participation in ICASS and how to obtain support services may have a substantial impact on other agencies at a post. Therefore, we believe that business cases should address the overall impact on the U.S. government. Having each agency fend for itself is contrary to the ICASS concept and will not lead to cohesive and efficient operations within the executive branch. However, we recognize that there may be trade-offs between what is best for an individual agency and what is best for the government as a whole. We believe that business cases that analyze all facets of the financial and other implications of decisions to opt out of ICASS services will encourage better decision making. The following are GAO’s comments on the Department of Commerce’s letter received June 29, 2004. 1. We believe both reduction in duplication and reengineering of current ICASS services are needed to contain ICASS costs, and we believe customer agencies and State need to work together to achieve this. Since ICASS is a market-based approach to delivering services, we also believe agencies should exercise their rights to consider innovative alternatives for service delivery and to make the benefits of cost-effective alternatives available to other agencies at post. 2. We did not intend to blame duplication on agencies that have withdrawn from ICASS services. We presented the Dakar vehicle maintenance example to illustrate how decisions by one agency can affect all agencies at post. We did not fault USAID for its reasons for opting out of the service or for the decision itself. We did note, however, that USAID did not share detailed information (i.e., its business plan) on the new means by which it would receive the service.
We also noted that neither the council nor the service provider requested that USAID share this information. As a result, the Dakar ICASS Council missed an opportunity to review whether the post could adopt the USAID approach to the betterment of all agencies. We recommended that business cases be made not only to help agencies determine whether an alternative arrangement is better for themselves, but also to help local ICASS Councils determine whether more cost-effective service arrangements could be applied postwide. 3. We agree that there may be legitimate reasons for agencies to opt out of ICASS services. It is for this reason we conclude and recommend that agencies should work to reduce unnecessary duplication of administrative structures. 4. Our determination that ICASS customers are generally satisfied with the quality of services they receive was based on a global survey, local customer satisfaction surveys at our case study posts, and more than 100 interviews with ICASS customer and service provider personnel at those posts. We do cite service cost as the main complaint with the system. 5. We believe implementing our recommendations could result in a more streamlined, cost-effective means for delivering necessary administrative support services. We did not conduct an assessment of the benefits and costs of creating an independent agency responsible for delivering overseas administrative support services. The following are GAO’s comments on the Department of Defense’s letter dated July 2, 2004. 1. We did not intend to underemphasize cost control, and we modified the report to add more emphasis to the importance of cost containment. We stated that labor is the largest ICASS cost and that agencies cite high labor costs associated with American direct-hire ICASS personnel as a reason for self-providing services. Our recommendation to increase the system’s accountability by streamlining operations is designed to encourage cost control and service provider rightsizing. 2. The Foreign Affairs Handbook requires ICASS Councils to (1) monitor service performance and costs, concentrating on overall performance against standards, and (2) prepare an annual written assessment on the quality and responsiveness of the services furnished by the service provider based upon the agreed-upon performance standards (6 FAH-5 H-301.4 (f) and (g)). We believe that councils that do not evaluate their service providers miss important opportunities to measure service provider performance and to address items that could make service delivery more cost-effective. 3. We modified the text to clarify that many agencies decided not to subscribe to some ICASS services. 4. We support Chief of Mission authority at post and did not mean to imply that ICASS is outside of that authority. Our main point was that ICASS authorities sometimes become subordinate to other authorities, and we modified the text of the report to more clearly reflect this observation. 5. We did not mean to imply that all ICASS decisions and policies should be made or be negotiable at the post level. We intended to highlight some ICASS customers’ concern that ICASS lacks flexibility to adapt to meet unique local needs, thus limiting the local council’s ability to make decisions that optimize service provision at post. 6. We agree with the department that individual posts have the authority to determine whether they wish to implement the new policy for long-term temporary duty personnel.
Our discussion of the new temporary duty personnel policy makes no assertion as to why few posts have chosen to adopt it. Three of our eight case study posts have adopted the new policy (Embassies Cairo, Dar es Salaam, and Lima). Officials at those posts that have not adopted the policy stated that temporary duty personnel do not result in an undue burden on their respective posts. 7. We agree that there may be legitimate reasons for agencies to opt out of ICASS services, and we reflect this throughout the report. 8. Several department field staff stated that they either had not had ICASS training before arriving overseas or that training in their ICASS roles and responsibilities was not sufficient. The following are GAO’s comments on the Department of Homeland Security’s letter dated June 29, 2004. 1. We generally support the voluntary nature of the ICASS program because agencies’ needs differ. Therefore, we did not intend to suggest that agencies should be forced to use ICASS services. However, we believe that there are opportunities to achieve more economies of scale and that there are instances of unnecessary and wasteful duplication. Our recommendations are designed to reduce duplication where this would be in the best interests of the government and to encourage agencies to prepare business cases to support decisions to obtain services outside of ICASS. 2. We agree that ICASS operations at some posts are more transparent than at others, but we did not find evidence that services were being managed to the benefit and priorities of State and to the detriment of other agencies at the posts we visited. However, we believe that our recommendations regarding system accountability and training should help address this concern. 3. The salary and benefit costs of State employees who provide support services to all agencies are shared by all agencies that receive the services. For example, costs to employ a foreign national driver are charged to the agencies that receive services from the driver, while costs to employ an American Financial Management Officer are divided among the agencies that make use of the financial services the officer provides. We support this practice largely because it is consistent with the overall rightsizing concept of agencies paying the full cost associated with their overseas presence. 4. Agencies can choose whether to participate in an ICASS service. If an agency does not participate and, therefore, does not pay for a service, that agency should not receive the service. Otherwise, the agencies that have chosen to participate in the service would effectively be subsidizing the cost of providing the service to the agency that did not participate but still wanted the service. There are ICASS provisions under which an agency may, if it wishes, receive partial services at a reduced cost, as well as methods for an agency to make direct payments for use of a service that benefits that particular agency. 5. We agree. This is why decisions to not obtain services through ICASS may not always be in the best interests of the government. The following are GAO’s comments on the Department of Justice’s letter dated June 30, 2004. The agency also provided technical comments that were incorporated into the text, as appropriate. 1. We agree that there may be legitimate reasons for agencies to opt out of ICASS services, and we reflect this throughout the report. We also stated that agencies cited affordability of services as a reason for not subscribing. 2.
Our report focuses on the delivery and costs of support services, and we do not examine in detail the annual ICASS budget process. Nonetheless, several agencies reported that this process is problematic because it requires that agencies predict costs and request funding well before they know what their actual costs are likely to be, and agencies have little flexibility in paying for cost increases resulting from unforeseen events that occur subsequent to their funding requests. Although we do not address this issue in our report, we believe it is something the ICASS Executive Board could consider when implementing our recommendations. 3. We agree that agencies should be notified in advance of changes in policy and staffing that would affect their contributions. However, implementation of the proposed Capital Security Cost-Sharing Program would be separate from ICASS, and therefore we made no assessment of its merits or governance structures. We included discussion of the program only to show its potential impact on ICASS. The following are GAO’s comments on the U.S. Peace Corps’ letter dated June 25, 2004. 1. We did not intend to suggest that duplication was the primary contributor to inefficient operations. This is why our recommendations address the elimination of unnecessary duplication and the reengineering of administrative processes. We believe that these actions together can improve the efficiency of ICASS services and help contain costs. We have made several modifications to the report to emphasize that improved business practices and reduction in duplication are equally important. We presented the Dakar vehicle maintenance example to illustrate how decisions by one agency can affect all agencies at post. We did not fault USAID for its reasons for opting out of the service or for the decision itself. We did note, however, that USAID did not share detailed information (i.e., its business plan) on the new means by which it would receive the service. We also noted that neither the council nor the service provider requested that USAID share this information. As a result, the Dakar ICASS Council missed an opportunity to review whether the post could adopt the USAID approach to the betterment of all agencies. We recommended that business cases be made not only to help agencies determine whether an alternative arrangement is better for themselves, but also to help local ICASS Councils determine whether more cost-effective service arrangements could be applied postwide. 2. We agree that agencies have different administrative support service requirements. This is why we generally support the voluntary participation principle of ICASS. 3. We said these were non-ICASS issues because they would exist whether or not ICASS was in place. 4. The Foreign Affairs Handbook states that ICASS Councils “assume the responsibility and exercise the initiative to install the infrastructure and administer the system the way they think is right for their environment” (6 FAH-5 H-301). We did not intend to imply that all ICASS decisions should be made locally; we agree that centralization and standardization of some functions can eliminate inconsistencies in ICASS implementation. In addition to the individual named above, Jeffrey Baldwin-Bott, David G. Bernet, Janey Cohen, Etana Finkler, Jane S. Kim, and Julia Kennon made key contributions to this report.
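The cost, equity, and simplicity trade-off discussed in comment 3 above, and the cost-sharing practice described in comment 3 on the Department of Homeland Security’s letter, both turn on how a shared service’s cost is split among subscribing agencies. The following minimal sketch contrasts the two allocation approaches; the agency names, workload counts, dollar amounts, and function names are hypothetical illustrations, not the actual ICASS cost-distribution software.

```python
# Illustrative only: hypothetical agencies, workloads, and costs,
# not the actual ICASS cost-distribution algorithm.

def usage_based_shares(total_cost: float, workloads: dict[str, int]) -> dict[str, float]:
    """Split a service's cost in proportion to each agency's workload
    count (e.g., vehicles serviced), so agencies pay only for what they use."""
    total_workload = sum(workloads.values())
    return {agency: total_cost * count / total_workload
            for agency, count in workloads.items()}

def equal_shares(total_cost: float, workloads: dict[str, int]) -> dict[str, float]:
    """Simpler but less equitable: split the cost evenly among all
    subscribing agencies regardless of how much each uses the service."""
    return {agency: total_cost / len(workloads) for agency in workloads}

# A hypothetical vehicle maintenance service costing $120,000 a year.
workloads = {"State": 30, "USAID": 10, "Commerce": 0}
print(usage_based_shares(120_000, workloads))
# {'State': 90000.0, 'USAID': 30000.0, 'Commerce': 0.0}
print(equal_shares(120_000, workloads))
# {'State': 40000.0, 'USAID': 40000.0, 'Commerce': 40000.0}
```

The usage-based split is more equitable but requires maintaining accurate workload counts for every agency and service, while the equal split is cheaper to administer but charges agencies for capacity they may not use; this is the trade-off described in comment 3.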
Costs for overseas posts’ administrative support services have risen nearly 30 percent since fiscal year 2001, reaching about $1 billion in 2003. These costs are distributed among 50 agencies through the International Cooperative Administrative Support Services (ICASS) system, which was designed to reduce costs and provide quality services in a simple, transparent, and equitable manner. Since ICASS was implemented in 1998, its performance has not been systematically reviewed. GAO was asked to examine (1) whether ICASS has led to efficient delivery of administrative services and (2) whether ICASS is an effective mechanism for providing quality services. ICASS has not resulted in more efficient delivery of administrative support services because it has neither eliminated duplication nor led to efforts to contain costs by systematically streamlining operations. GAO found that agencies often decide not to use ICASS services and self-provide support services, citing reasons of cost, programmatic needs, and greater control, which can lead to duplicative structures and a higher overall cost to the U.S. government. Although some agencies’ reasons for self-providing services may be supportable, GAO found that agencies rarely made business cases for why they chose not to take ICASS services initially or withdrew from services later. In addition, service providers and customer agencies have undertaken few systematic efforts to consolidate services or contain costs by streamlining administrative support structures. Furthermore, GAO found that deterrents to consolidating and streamlining administrative structures largely outweigh the incentives. However, there are efforts, both internal and external to ICASS, that may address some of the obstacles that prevent ICASS from operating more efficiently. Based on the system’s primary goals, ICASS is generally effective in providing quality administrative support services in an equitable manner, although not to the extent that it could be if certain impediments were addressed. GAO found that ICASS is simple and transparent enough for customers to understand its basic principles. Furthermore, most personnel agree that ICASS is more equitable than its predecessor. However, ICASS strategic goals lack indicators to gauge progress toward achieving them, and progress toward achieving posts’ performance standards is not annually reviewed or updated. Other obstacles to maximizing ICASS include limits to overseas staffs’ decision-making authority, which can diminish ICASS’s goal of “local empowerment.” Finally, GAO found that training and information resources, which could enhance participants’ knowledge and implementation of ICASS, are underutilized.
The 2001 Nuclear Posture Review envisioned that the New Triad would include the majority of current and planned conventional strike capabilities, as well as a family of unique global strike capabilities, to address the new security risks faced by the United States. Current global strike assets could include long-range precision attacks delivered from aircraft or naval platforms, such as B-52H bombers equipped with conventional air-launched cruise missiles and surface ships and submarines outfitted with sea-based Tomahawk land attack missiles. Use of nonkinetic capabilities, such as information operations, may also be needed to defeat an adversary’s capability to deny U.S. forces access to areas or to achieve the surprise necessary to defeat high-value/high-payoff targets, such as weapons of mass destruction, leadership, or command and control capabilities, in more difficult environments. Successful conduct of global strike operations also is likely to require several enabling capabilities, such as intelligence collection and dissemination, surveillance and reconnaissance, command and control, communications, and battlefield damage assessment, to support all aspects of the planning and conduct of missions. Most of the platforms, weapons, nonkinetic assets, and supporting command, control, communications, and computers (C4) and intelligence, surveillance, and reconnaissance (ISR) capabilities used to support the global strike mission are not unique to global strike. These assets also provide general purpose capabilities used to support a variety of other missions conducted by the geographic combatant commands. However, DOD is studying several new capabilities to address shortfalls in prompt and global range conventional capabilities. Many DOD organizations, including the Joint Staff, military services, combatant commands, and defense agencies, have responsibilities for developing and implementing the global strike concept, identifying and acquiring needed capabilities, and conducting global strike missions. Within the Office of the Secretary of Defense, two organizations have key responsibilities for overseeing and managing global-strike-related activities: The Office of the Under Secretary of Defense for Policy is responsible for developing the policy and guidance for global strike. The office is also responsible for preparing DOD’s annual report to Congress on global strike, which includes information on the purpose, mission, assets, potential targets, desired capabilities, and conditions for execution. The Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible for providing oversight for the development and fielding of global strike capabilities. The office also has responsibilities for various DOD initiatives to improve the department’s acquisition processes and management of investments. Additionally, the Office of Program Analysis and Evaluation in the Office of the Secretary of Defense is responsible for assembling and distributing the FYDP, which is an automated database that DOD uses to report the projected resources and proposed appropriations supporting DOD programs, projects, and activities, including those related to global strike capabilities. The FYDP includes cost estimates for the fiscal year reflected in the current budget request and at least 4 subsequent years.
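To make the FYDP’s multiyear horizon concrete, the following minimal sketch shows one plausible way to represent a FYDP-style record and to check that it spans the budget year plus at least 4 subsequent fiscal years. The FydpEntry type, its field names, and the sample values are hypothetical assumptions for illustration, not the actual FYDP schema.

```python
# Minimal, hypothetical sketch of a FYDP-style record; field names and
# sample values are illustrative, not the actual FYDP schema.
from dataclasses import dataclass

@dataclass
class FydpEntry:
    program_element: str         # a DOD program, project, or activity
    budget_year: int             # fiscal year of the current budget request
    estimates: dict[int, float]  # fiscal year -> projected resources ($ millions)

def covers_required_years(entry: FydpEntry) -> bool:
    """Check that the estimates cover the budget year plus at least
    4 subsequent fiscal years, the horizon the FYDP carries."""
    required = {entry.budget_year + i for i in range(5)}
    return required <= entry.estimates.keys()

entry = FydpEntry(
    program_element="Prompt global strike (hypothetical)",
    budget_year=2008,
    estimates={fy: 100.0 for fy in range(2008, 2013)},  # FY2008-FY2012
)
assert covers_required_years(entry)
```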
The Joint Staff is responsible for providing oversight of the Chairman of the Joint Chiefs of Staff’s Joint Capabilities Integration and Development System process to identify improvements to existing capabilities and guide development of new capabilities from a joint perspective that recognizes the need for trade-off analysis. The various global strike analyses conducted as part of this process are intended to result in a set of potential solutions, including additional resources or changes to doctrine and training, designed to correct capability shortcomings. The Joint Staff, along with the Commander of the U.S. Joint Forces Command, has responsibilities for overseeing development of joint doctrine and managing and providing support for joint exercises of the combatant commands. Additionally, the military services and defense agencies have important roles in identifying and acquiring potential technologies and weapons systems development programs that could provide global strike capabilities. The U.S. Strategic Command has a significant role in implementing the global strike concept and supporting global strike operations. For its global strike mission, the command is responsible for providing integrated planning and command and control support to deliver rapid, extended-range, and precision kinetic and nonkinetic effects in support of theater and national objectives and, in some situations, for executing command and control of selected global strike missions. The command also advocates for global strike capabilities on behalf of the combatant commands, services, and defense agencies through such means as preparing and reviewing global-strike-related documentation and analyzing needed capabilities. The command supports other combatant commands during day-to-day operations by integrating their capabilities for potential global strike missions through training, exercises, and planning activities. During a crisis, the command, in close coordination with other combatant commands, would develop plans and courses of action for executing global strike missions on very tight timelines for the Secretary of Defense or the President. The command also has responsibilities for other mission areas that support global strike, including oversight of intelligence, surveillance, and reconnaissance and global command and control; DOD information operations; space operations; and integrating and synchronizing DOD’s efforts in combating weapons of mass destruction. DOD has taken a number of steps to implement its global strike concept and has generally assigned responsibilities for the planning, execution, and support of global strike operations. However, key stakeholders, particularly the geographic combatant commanders, have different interpretations of the scope, range, and potential use of capabilities needed to implement global strike and of the conditions under which global strike would be used in U.S. military operations. Several factors affect understanding and communication of the global strike concept among key stakeholders, including the extent to which DOD has (1) defined global strike, (2) incorporated global strike into joint doctrine, (3) conducted outreach and communication activities with key stakeholders, and (4) involved stakeholders in joint exercises and other training involving global strike.
Without a complete and clearly articulated concept that is well communicated and practiced with key stakeholders, DOD could encounter difficulties in fully implementing its concept and building the necessary relationships for carrying out global strike operations. DOD has taken a number of steps to implement its global strike concept since completing its 2001 Nuclear Posture Review, which provided the rationale for global strike. Specifically, the U.S. Strategic Command has played a major role in DOD’s implementation of global strike by helping to shape the concept, developing new processes and procedures, and providing inputs in identifying and developing new capabilities. Since issuing its 2001 Nuclear Posture Review, DOD has conducted several analyses to evaluate desired capabilities and identify capability gaps. In January 2005, DOD completed the Global Strike Joint Integrating Concept, which describes how a global strike joint task force would operate, the time frame and environment in which it must operate, its required capabilities, and its defining physical and operating characteristics. The concept was used as input for analyses conducted under the Joint Staff’s Joint Capabilities Integration and Development System requirements process to identify the desired capabilities and shortfalls in current global strike capabilities. The first two of the three analyses, the functional area analysis and the functional needs analysis, were completed in 2005. The functional area analysis synthesized existing guidance to specify the military problem to be studied. The analysis identified the specific military tasks the force is expected to perform, the conditions under which these tasks are to be performed, and the required standards of performance. The functional needs analysis examined that problem; assessed how well DOD can address the problem given its current program; identified capability gaps and shortfalls, risk areas, and redundancies; and identified the capabilities DOD should develop. The last of the analyses, the Global Strike Raid Evaluation of Alternatives, will make recommendations on potential approaches to overcome capability gaps by way of doctrine, organization, training, materiel, leadership, personnel, and facilities actions. The Joint Staff plans to complete this analysis in May 2008. DOD also has several similar analytical efforts underway or completed, such as the Air Force-led Prompt Global Strike Analysis of Alternatives, to identify potential weapons systems solutions for global strike. Moreover, the U.S. Strategic Command has been implementing its assigned planning and command and control support responsibilities for the global strike mission. In addition to the support its headquarters provides for DOD efforts to implement and develop global strike capabilities, the command established a joint functional component command for global strike and integration to provide day-to-day management for its global strike mission. The command has also initiated several activities, including improving processes and procedures for command and control, communications, and decision making; improving the management of intelligence, surveillance, and reconnaissance assets; and incorporating global strike operations into its concept plan.
For example, development of adaptive planning systems such as the theater integrated planning subsystem and the integrated strategic planning and analysis network will allow Strategic Command planners to collaborate with and support the geographic combatant commands. While key stakeholders have been involved in various global strike development efforts, global strike is interpreted differently among combatant command and service officials, who have significant roles and responsibilities in planning, coordinating, and executing global strike operations. DOD, Joint Staff, combatant command, and service officials we spoke with generally believe that global strike is a broad and unbounded concept that could include a wide range of forces and other capabilities and involve different scenarios. As a result, the concept can be difficult to understand and is open to different interpretations among stakeholders. For example, officials from the services offered a range of different interpretations of global strike operations: At a roundtable discussion we held with a number of officials at the U.S. Pacific Air Force Command, which supports the U.S. Pacific Command, the consensus reached was that global strike was a mission associated with the U.S. Strategic Command and that the strikes conducted would originate from the continental United States. Some of the officials said that global strike was a special capability reserved only for the President, the Secretary of Defense, and a Joint Force Commander. U.S. Pacific Fleet headquarters representatives told us that global strike implied that the capability would originate from outside the geographic command’s region and would not include maritime-based targets. Air Force Air Combat Command representatives told us that they viewed global strike as encompassing a mission that was an autonomous event; had a global element; occurred in days rather than months; and involved no buildup of forces in the area of the strike prior to the mission. Additionally, U.S. Pacific Command and U.S. Central Command officials we spoke with had difficulty distinguishing between global strike and theater strike operations, which are strike operations conducted by a geographic command against potential targets within its area of responsibility. U.S. Pacific Command headquarters officials told us that they did not see a clear distinction between the characteristics and objectives of global strike and a theater strike. The officials said that operations in theater conducted by their command would address all potential targets, including high-value ones that are also identified as potential targets for global strike. Some Pacific Command officials viewed global strike as a unique capability that is requested by the theater commander when it is considered a better option. Other Pacific Command officials said the only difference between the two types of strike operations is whether the U.S. Strategic Command or the affected combatant command is assigned lead responsibility for the planning and/or execution of the operation. U.S. Central Command officials also agreed that global strike is currently a broad and unbounded concept that can, depending upon interpretation, take in much of current theater operations. We identified four factors that have led to stakeholders’ varying perceptions of the global strike concept.
These factors include the extent to which DOD has (1) defined global strike, (2) incorporated global strike into joint doctrine, (3) conducted outreach and communication activities with key stakeholders, and (4) involved stakeholders in joint exercises and other training involving global strike. However, while DOD has taken some actions to address each of these factors, further management actions are needed to foster better understanding and communication with key stakeholders for global strike. DOD uses several definitions to describe global strike in its key studies, reports, and other documents. However, various officials from a number of DOD organizations do not believe these definitions provide a clear and consistent description of global strike. According to officials in DOD’s Program Analysis and Evaluation Office, global strike is not well defined, and the term can mean different things among the combatant commands, services, and DOD organizations. DOD Program Analysis and Evaluation Office officials said that while a Senior Warfighter Forum in August 2006, which was led by the U.S. Strategic Command and included participants from the services, combatant commands, and defense agencies, was able to reach a consensus on a list of attributes for global strike capabilities, the forum was unable to agree on a common definition for global strike. A senior official in DOD’s Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics said that DOD does not have a common definition for global strike or for prompt global strike. Acquisition, Technology, and Logistics officials told us, however, that DOD intends to reach agreement with stakeholders on a common global strike definition through the series of ongoing studies on potential global strike weapons systems that are nearing completion. Table 1 provides some examples of the definitions used within DOD to describe global strike. The lack of a universally accepted definition has hindered some of the services from incorporating global strike into their documents. U.S. Pacific Fleet representatives told us that because DOD has not provided a common definition or bounded the global strike concept and mission very well, the Pacific Fleet has not incorporated global strike into its planning and training activities and documents. Additionally, Department of the Army headquarters officials told us that, due to the lack of an approved joint definition of global strike, the Army has yet to incorporate global strike into its documents and publications. The officials also said the roles, responsibilities, and contributions of the Army for global strike operations have not been clearly defined in global strike documents. The Army would likely play a role in global strike operations by deploying follow-on forces after a global strike attack to assess battle damage and provide security and civil operations, according to the officials. Officials at the U.S. Special Operations Command told us that the lack of a universally accepted common definition would not affect the successful planning and execution of a global strike operation. The officials said that should a decision be made to conduct a global strike operation, the specific details would be provided in various orders prior to the operation. However, the officials stated that an agreed-upon definition that gave a specific description of global strike would provide everyone with a common point of departure and a clearer understanding of the term. U.S.
Central Command officials similarly agreed that a clear, accepted joint definition would help to promote a more consistent interpretation of global strike and what it entails. According to Air Force headquarters officials, while the Air Force has developed a definition that focuses on its own forces’ contributions and support for global strike, a joint definition that is generally accepted and used throughout DOD would provide common ground among the services and DOD organizations for discussing and interpreting global strike. Officials in the Joint Staff’s Force Structure, Resources and Assessment Directorate likewise agreed that a universally accepted global strike definition would promote greater acceptance and understanding among DOD organizations. While the Joint Chiefs of Staff has included a short description of global strike and the responsibilities of the U.S. Strategic Command for the global strike mission in two joint doctrine publications for homeland security and homeland defense, it has not included a more detailed discussion of global strike operations in any other existing or proposed doctrine publication. Joint doctrine consists of the fundamental principles that guide the employment of military forces in a coordinated action toward a common objective and is meant to enhance the operational effectiveness of U.S. forces. According to officials in the Joint Doctrine Group at the U.S. Joint Forces Command, global strike and the mission responsibilities of the U.S. Strategic Command were included in the homeland security and homeland defense joint publications to cite an example of a possible preemptive and/or offensive action that could be taken in defense of the United States. The officials said the publications were not intended to provide a comprehensive and specific discussion of global strike operations but rather to discuss how global strike would contribute to homeland security and defense objectives. Although a proposed joint publication on strategic attack was to include a more detailed discussion of global strike, the publication was canceled, and there have been no other proposals for incorporating such a discussion in any new or existing joint publication. Officials in the Joint Forces Command’s Joint Doctrine Group said that a detailed discussion of global strike was to be included in a proposed joint publication on strategic attack, which would have focused on the strategic effects achieved at the theater operational and/or tactical levels of war. In June 2005, U.S. Strategic Command, the lead sponsor for the new publication, submitted a draft for review, but the publication was subsequently canceled after it was determined to be inconsistent with the approved Joint Staff guidance for preparing it. According to Joint Doctrine Group officials, the proposed publication on strategic attack would have overlapped with other publications and did not provide any unique doctrinal fundamentals that were not already covered in existing doctrine. According to officials in Joint Forces Command’s Joint Doctrine Group, a proposal to include a more comprehensive discussion of global strike in joint publications could be made to the Joint Staff, and their group would be responsible for conducting an analysis to determine the proposal’s validity. However, the officials said they were not aware of any action by the U.S.
Strategic Command or another organization to propose that global strike be considered for a new joint publication or incorporated into an existing one. The Joint Doctrine Group officials told us they believe that a proposal has not been made because the joint community may not consider global strike to be mature enough and may therefore be reluctant to incorporate it into joint doctrine until the concept is better defined and demonstrated in joint exercises and actual crises. U.S. Strategic Command officials told us that their command had no current plans to resurrect the strategic attack publication or propose one for global strike. Although some officials in the joint community say that incorporating global strike into joint doctrine is premature, several DOD officials said that developing joint doctrine would promote understanding and implementation of the concept. The Air Force’s Air Combat Command and U.S. Central Command officials told us that there is sufficient reason to begin developing global strike doctrine, or incorporating global strike into existing doctrine, for those forces and capabilities that are currently available to conduct a global strike operation. The Air Combat Command officials said that because of the 2-year process to develop doctrine, it makes sense to begin creating joint doctrine now for current capabilities. The officials added that the resulting doctrine would be revised as additional global strike capabilities, such as advanced prompt global strike systems, become available. Additionally, a U.S. Central Command official stated that the development of joint doctrine would help clarify the global strike concept because it could assist operational planners in explaining the situations in which global strike would be a good option and the responsibilities and expectations of the U.S. Strategic Command and the geographic commands. Central Command officials said that doctrine also could distinguish global strike from other types of strike operations conducted by geographic combatant commands. According to the Joint Chiefs of Staff’s instruction on the development of joint doctrine, joint doctrine standardizes the terminology, training, relationships, responsibilities, and processes among all U.S. forces to free joint force commanders and their staffs to focus efforts on solving the strategic, operational, and tactical problems confronting them. Without a more comprehensive inclusion of global strike within joint doctrine for current capabilities, the combatant commands and services will not have complete guidance to further their understanding and effectively plan, prepare, and deploy forces for global strike operations. Although the U.S. Strategic Command has taken steps to explain and promote understanding of global strike operations and its mission responsibilities, various geographic combatant command and service officials we spoke with generally said that the Strategic Command should increase its global strike outreach activities (e.g., visits, briefings, and education) to reach more staff throughout the commands and services. These officials also said that these activities should be provided on a continuous and consistent basis to reach command and service staffs that experience frequent turnover. As part of its responsibilities for the global strike mission, the Strategic Command supports other combatant commanders and integrates the capabilities of all affected combatant commands through training, exercises, and planning for both theater interests and potential global strike missions.
In our prior work in identifying key practices adopted by organizations undergoing successful transformations, we found that it is essential for organizations to adopt a communication strategy that provides a common framework for conducting consistent and coordinated outreach within and outside the organization and seeks to genuinely engage all stakeholders in the organization’s transformation. U.S. Strategic Command officials have conducted visits, provided briefings, and assigned liaison staff to the geographic combatant commands to promote understanding of the command’s global strike mission and responsibilities. The Strategic Command, according to the command’s liaison to U.S. Central Command, initiated a visit to the Central Command in October 2006 to provide a briefing on all of the command’s missions and activities, including global strike. The liaison said that the visit provided an opportunity for Central Command’s staff to gain perspectives on global strike and the Strategic Command’s mission responsibilities. Similarly, U.S. Special Operations Command officials told us that the commander of the Strategic Command’s joint functional component command for global strike and integration provided a global strike mission briefing to U.S. Special Operations Command’s senior leadership in August 2006. However, while Strategic Command officials are generally satisfied with the existing communications, a number of other combatant commands are looking for additional support. U.S. Pacific Command officials told us that while the Pacific Command has established a close relationship with the Strategic Command over the past few years, the command is still learning about Strategic Command’s mission responsibilities, particularly for global strike. According to Pacific Command officials, the U.S. Strategic Command’s liaison officer provided an outreach briefing to their organization in early 2007 that included information on the global strike concept. The officials said that similar briefings should be given regularly throughout the command because of the constant turnover of staff. According to the U.S. Strategic Command’s liaison at the U.S. Pacific Command, it does not appear that information on global strike is getting out to all of the Pacific Command staff. The liaison based his statement on comments made by Pacific Command staff to GAO during a March 2007 visit to the command. This indicates, according to the liaison, that the Strategic Command should provide briefings and hold discussion sessions with more of the Pacific Command organizations, particularly on how global strike operations fit into the theater commander’s plans and differ from other types of theater operations. Air Force Space Command officials told us that the Strategic Command should provide thorough and up-to-date education and communication on its prompt global strike mission to the geographic combatant commands to increase understanding and mitigate any misconceptions the commands may have about the conduct of global strike operations in their regions. The officials said that it is important for the Strategic Command and other combatant commands to establish a consistent dialogue on their roles and responsibilities and the use of global strike weapons before any new prompt global strike weapon is deployed. Similarly, a U.S.
Central Command official said that the Strategic Command should conduct more outreach activities for global strike with combatant command staffs to explain the global strike concept and the relationships with other commands. Additionally, U.S. Special Operations Command officials told us that while they found the Strategic Command’s Web site beneficial, it was not widely known among the command’s staff. While the U.S. Strategic Command has taken several positive actions to promote global strike and its mission, without a consistent and comprehensive outreach strategy the command may not reach the combatant commands and services to the extent needed to foster acceptance and understanding of global strike. As a result, the command may encounter difficulty in future global strike implementation efforts. Joint exercises and other training activities can provide opportunities for service and combatant command staffs to practice operational procedures and processes to increase their understanding of global strike. However, global strike has been included in only a few major joint exercises, largely those sponsored by the U.S. Strategic Command, over the past 2 years. The U.S. Strategic Command has incorporated global strike and its other missions into its annual joint command exercises. Beginning with the command’s Global Lightning exercise in November 2005, the Strategic Command has included global strike objectives in its annual Global Lightning, Global Storm, and Global Thunder exercises. According to Strategic Command officials, representatives from some of the other combatant commands have participated in portions of these exercises, while other combatant commands, such as the U.S. Central Command, may not always participate because of other commitments. A Strategic Command joint exercise division official said, however, that some global strike objectives have been incorporated into recent exercises sponsored by the U.S. Pacific, European, and Special Operations Commands. For instance, global strike time-sensitive planning has been included in Special Operations Command’s Able Warrior exercises. U.S. Strategic Command officials told us that while global strike needs to be incorporated to a greater extent in joint exercises, doing so is often difficult because of differing exercise objectives. For example, a senior official in the Strategic Command’s Joint Functional Component Command for Global Strike and Integration said that including global strike objectives in joint exercises other than those of Strategic Command can be challenging because it is often difficult to create scenarios that make sense for executing a global strike mission considering other primary exercise objectives. U.S. Central Command, for example, has not included global strike in the joint exercises it sponsors. Additionally, officials in U.S. Strategic Command’s exercise branch told us that other combatant commands are hesitant to add objectives that could lessen the focus on the primary exercise objectives. As a result, Strategic Command officials said that it can also be difficult for the command to overlap its exercises with those of another command. For example, U.S. Strategic Command proposed linking its Global Lightning 2007 exercise, which had a global strike focus, with U.S. Pacific Command’s Terminal Fury 2007 exercise. Both were scheduled for late 2006. Global Lightning and Terminal Fury are annual command post exercises sponsored by U.S. Strategic Command and U.S.
Pacific Command, respectively, and involve the commanders and their staffs in testing and validating the communications within and between headquarters and simulated forces in deterring a military attack and employing forces as directed. Terminal Fury is partly intended to train the command’s staff in exercising its theater warfighting concept plan and is considered by the commander of the Pacific Command to be the command’s number one priority exercise. The Pacific Command agreed to overlap the two exercises after the command determined there would be only minimal impact on its objectives. However, U.S. Pacific Fleet officials told us that Pacific Command, reluctant to have another command operate forces in its theater, insisted on having control of the forces executing the global strike operation in the exercise. U.S. Strategic Command makes some training on global strike available to its staff and those of other commands and organizations. An official in U.S. Strategic Command’s joint exercise division, who was designated to speak for the command, told us that staffs from U.S. Special Operations, Pacific, and European Commands have attended basic courses on global strike during visits to Strategic Command. The official said that the global strike courses are also available on the command’s Web site on DOD’s classified computer network. Additionally, during the preparation for joint exercises, participating staffs are made aware of and encouraged to take the online courses to come up to speed in various areas. However, the command is considering sending staff to other combatant commands to help provide more consistent training. DOD has completed or has underway several global strike assessments to identify potential conventional offensive strike weapons systems it may need in the near, mid, and long term, particularly those for prompt global strike. However, DOD has not fully assessed the requirements for various enabling capabilities it needs for global strike or coordinated its efforts to improve these capabilities with the potential offensive systems it intends to develop. Enabling capabilities DOD considers critical include intelligence collection and dissemination, surveillance and reconnaissance, command and control, communications, and battlefield damage assessment. Without a full assessment of enabling capabilities, DOD may not make the best decisions regarding which enabling capability improvements to pursue to meet global strike operational requirements. While DOD has several analyses underway to determine desired capabilities and identify capability gaps and shortcomings, recent efforts for global strike have largely focused on developing new offensive strike systems that provide improved prompt and long-distance response capabilities. DOD has two major efforts underway to develop potential offensive systems that would provide a sea- and land-based prompt global strike capability in the near- and midterm time frames. For the long term, DOD has four key studies underway or completed that are examining potential offensive strike systems to provide global strike capabilities beginning sometime after 2018. To provide a near-term prompt global strike capability, DOD has requested funds to develop the Navy’s conventional Trident modification proposal, which would place conventional warheads on some Trident II ballistic missiles aboard strategic Trident submarines.
However, while Navy plans could have the modified missile available around 2011, the proposal has not been fully funded in recent budgets because of congressional concerns over placing conventional missiles on submarines that would also carry missiles equipped with nuclear warheads. Because of these concerns, Congress has also mandated a study by the National Academy of Sciences to review alternative prompt global strike options. The Academy provided the Senate Appropriations Subcommittee on Defense with an interim report in May 2007, which concluded that a single system for prompt global strike was not the best way to proceed in the long term, given the uncertainties in the strategic environment and the range of systems that need to be developed. The report also concluded that while the conventional Trident missile is not the optimal solution, it offers the only viable prompt global strike capability within the next 6 years. The Academy plans to issue a final report in the spring of 2008. Additionally, in the conference report on the fiscal year 2008 defense appropriations bill, the conferees agreed to provide no funding for testing, fabrication, or deployment of the new conventional Trident missile. The Air Force Space Command is examining a midterm land-based ballistic missile system that would provide a prompt global strike capability and could be available as early as 2015. The proposed conventional strike missile would carry off-the-shelf conventional weapons and may incorporate a new maneuverable weapons delivery system. The Air Force’s preliminary plans would station the conventional strike missile first at Vandenberg Air Force Base in California, which has some preexisting infrastructure that can support the system, and possibly later at Cape Canaveral, Florida. However, several technical, security, and policy issues would need to be resolved before the missile could be fielded, including technological advances in thermal protection systems and resolution of Strategic Arms Reduction Treaty implications. Beginning in fiscal year 2008, the Air Force transferred its funding for prompt global strike to a defensewide account to fund a consolidated, multiservice approach managed by the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. To provide global strike capabilities sometime after 2018, DOD has undertaken four global strike capability assessment studies: (1) Next Generation Long-Range Strike Analysis of Alternatives, (2) Nuclear and Conventional Global Strike Missile Study, (3) Prompt Global Strike Analysis of Alternatives, and (4) Global Strike Raid Evaluation of Alternatives. Each is shown in table 2. DOD has completed two of these four long-term studies examining potential offensive strike systems to provide global strike capabilities sometime after 2018. Three of the four studies assess possible offensive strike weapons systems that would provide a prompt and long-range capability for global strike, while the fourth study, the Next Generation Long-Range Strike Analysis of Alternatives, examines potential strike systems that could travel great distances to penetrate and loiter deep within an enemy’s territory and deliver high-volume strikes against time-critical targets. Enabling capabilities that DOD considers critical in supporting global strike operations include intelligence collection and dissemination, surveillance and reconnaissance, command and control, communications, and battlefield damage assessment.
Planning, executing, and assessing the success of global strike operations may place greater demands on enabling capabilities as new offensive capabilities are acquired. Although the successful conduct of all strike operations depends on enabling capabilities, the nature of global strike operations—such as the potentially long distances over which strike systems may be required to operate, compressed time frames for execution, improved accuracy, the fleeting nature of some global strike targets, and the high-level decision authority required—creates potential operational challenges for these capabilities. Figure 1 shows the role of enabling capabilities in supporting sequential key events in the conduct of strike operations, from prior monitoring of the area; to initially finding, locating, and identifying a target; to executing a strike; to conducting battlefield damage assessment to determine the success of the strike and whether further actions are required. According to the Defense Science Board’s Report on Future Strategic Strike Forces, current enabling capabilities are not sufficient to fully support the requirements for global strike operations. Current intelligence, surveillance, and reconnaissance and command and control capabilities generally do not provide the persistent coverage, processing and sharing of information, and rapid planning required for compressed global strike time frames, according to U.S. Strategic Command officials. Additionally, Air Force Space Command officials told us that they are concerned about whether current capabilities of intelligence, surveillance, and reconnaissance assets would be able to recognize and assess the damage caused by future global strike systems. For example, future systems may use flechette warheads, which would disperse metal darts upon impact that do not create large craters like traditional explosive devices; therefore, the damage may not be readily visible to intelligence, surveillance, and reconnaissance assets. Further, according to U.S. Air Force officials, current enabling capabilities lack the ability to reliably produce up-to-date, accurate, and responsive information to strike fleeting targets that can change locations unexpectedly, particularly in areas where U.S. forces may be denied access. Fleeting targets may be difficult to detect or identify with current intelligence, surveillance, and reconnaissance sensors because of the adversary’s use of techniques such as mobility and/or camouflage, concealment, and deception. Therefore, the target must be rapidly engaged before the adversary can employ these techniques and disrupt effective targeting efforts. According to Air Force, Defense Intelligence Agency, and RAND Corporation officials we spoke with, striking mobile and fleeting targets—the most difficult types of targets to strike—requires greater intelligence capabilities than many other types of strike operations to positively identify the target and provide persistent surveillance to track and engage the target. DOD is pursuing several independent efforts to assess and improve enabling capabilities that are critical elements in the pre- and poststrike phases of global strike operations. For example, U.S. Strategic Command has a number of initiatives underway to improve command and control with the goal of providing military planners with a clear understanding of the threat, fast and accurate planning, and tools for timely and efficient decision making. Additionally, U.S.
Strategic Command and defense agencies, such as the Defense Intelligence Agency and the Defense Threat Reduction Agency, are exploring initiatives to reduce the time needed to gather information for strike planning and assessments by increasing available intelligence, surveillance, and reconnaissance capabilities. For example, to be able to quickly assess battle damage, the Defense Threat Reduction Agency and the Defense Advanced Research Projects Agency are exploring the idea of dispensing intelligence, surveillance, and reconnaissance sensors from future prompt global strike platforms, such as the proposed conventional strike missile, around target areas shortly before the release of their weapons. Recent DOD studies to identify potential offensive strike systems for global strike provide only limited assessments of the enabling capabilities needed for a particular focus of global strike or a particular weapons system and do not collectively provide a complete assessment of enabling capabilities needed to support global strike operations. Joint Staff officials who are conducting the Global Strike Raid Evaluation of Alternatives study said they plan to assess the enabling capabilities as an important step in understanding all of the capabilities needed to support global strike operations. However, the global strike raid study will only analyze the use of global strike as a limited strike capability against time-critical targets and will not examine its use in all aspects of major combat operations. Similarly, the Nuclear and Conventional Global Strike Missile Study only examined enabling capabilities needed for the future conventional and nuclear land-based ballistic missile options considered in its assessment. However, the National Academy of Sciences, recognizing the importance and greater demand that global strike would place on enabling capabilities, plans to include an assessment of global strike capabilities in its congressionally mandated spring 2008 final report on conventional prompt global strike. Global strike operations can increase the demand for enabling capabilities depending on the threat and the target to be attacked. For example, mobile delivery systems for weapons of mass destruction, one of the most dangerous and elusive threats, are among the most difficult targets for global strike operations. Defense Threat Reduction Agency officials told us that they rely on enabling capabilities to provide the information needed to locate the target and guide the weapons system to strike with accuracy within compressed time frames, while minimizing any potential collateral effects. Moreover, the intelligence needed for planning and executing strikes against mobile delivery systems for weapons of mass destruction is currently limited or incomplete, according to Defense Threat Reduction Agency officials. Several DOD and Air Force officials we spoke with said that enabling capabilities were not being fully considered to the extent needed in global strike system studies. According to a DOD Program Analysis and Evaluation official, who has responsibility for global strike issues, both of the Air Force’s analyses of alternatives studies—i.e., prompt global strike and next generation long-range strike—had methodological weaknesses because neither assessed the enabling capabilities required for conducting global strike operations.
Instead, the teams conducting the two studies assumed that certain needed improvements in enabling capabilities, such as intelligence, surveillance, and reconnaissance, would be available when any future system is fielded. The scope and range of enabling capabilities that could be assessed in the studies were limited because of the need to obtain special security clearances, according to U.S. Strategic Command and Air Force Space Command officials. Similarly, the Global Strike Raid Evaluation of Alternatives study was delayed for several months because of difficulties obtaining special access clearances needed to review enabling capability development efforts across DOD. Air Force officials responsible for conducting the Prompt Global Strike Analysis of Alternatives stated that an assessment of needed enabling capabilities should be done to complement their study. However, the officials did not know of any such assessment of enabling capabilities being conducted. The Air Force officials said that their analysis does not completely address enabling capabilities because (1) an assessment of enabling capabilities was not the focus of their analysis, (2) the analysis work required to assess offensive systems for their study alone is expected to take 2 years, (3) the study staff lacks the special access clearances required to obtain information on all DOD efforts for improving enabling capabilities, and (4) the services submitting proposals for potential prompt global strike systems wanted to limit their cost estimates to the weapon system only. Furthermore, the analyses conducted for the conventional Trident missile and conventional strike missile proposals have not fully included assessments of required enabling capabilities. According to Joint Staff officials we spoke with, the analyses conducted for the Navy’s conventional Trident missile proposal did not fully consider intelligence capabilities and requirements. As a result, the intelligence, surveillance, and reconnaissance capabilities needed to support this potential global strike system, which are currently available only in limited supply, may not be in place, since an analysis of enabling capabilities has not yet been performed for it. Air Force Space Command officials developing the conventional strike missile told us that they have yet to perform an analysis of the enabling capabilities that potential strike systems would require. Additionally, DOD has not coordinated all of its efforts to improve enabling capabilities with its assessments for new offensive global strike systems. Because DOD has not fully assessed the enabling capabilities required or coordinated various department efforts to improve enabling capabilities alongside its plans for future strike systems, it may not have all of the key enabling capabilities in place when needed to support new offensive capabilities if and when they are funded. For example, Defense Advanced Research Projects Agency officials told us that the agency recognizes that such efforts as its Rapid Eye program, which is examining concepts for an aircraft that would arrive within hours in an emerging area of interest to provide a limited persistent intelligence, surveillance, and reconnaissance capability, could potentially fill gaps in enabling capabilities needed for global strike.
Nevertheless, the officials said that DOD has not yet recognized the importance of coordinating these efforts with ongoing offensive global strike system assessments to better understand the range of enabling capabilities being developed and their estimated availability. DOD has taken some important first steps to formulate a strategy for improving the integration of future intelligence, surveillance, and reconnaissance requirements through the development of its Intelligence, Surveillance, and Reconnaissance Integration Roadmap. However, as we previously reported in 2007, the roadmap does not define requirements for global persistent surveillance; clarify what intelligence, surveillance, and reconnaissance requirements are already filled; identify critical gaps as areas for future focus; or otherwise represent an enterprise-level architecture of what the intelligence, surveillance, and reconnaissance enterprise is to be for future operations, such as global strike. Since DOD has not fully assessed the required enabling capabilities or coordinated various department efforts to improve enabling capabilities, such as intelligence, surveillance, and reconnaissance and command and control, for future strike systems, DOD might not make the best decisions regarding which enabling capabilities to pursue. As a result, the effectiveness of these new offensive capabilities against critical high-value targets may be limited when initially fielded. While DOD plans investments in a range of global-strike-related capabilities, it has not yet begun to develop a prioritized investment strategy that considers the breadth of current efforts and future plans to develop capabilities for global strike, integrates these efforts to assess global strike options, and makes choices among alternatives in light of the department’s long-term fiscal challenges. Such a strategy would initially capture currently planned investments and would be refined and updated as DOD further develops its concept and identifies additional capabilities. Our prior work has shown that a long-term and comprehensive investment approach is an important tool in an organization’s decision-making process to define direction, establish priorities, assist with current and future budgets, and plan the actions needed to achieve goals. DOD studies and officials have identified a need for a broad, holistic view of global strike development that captures and gives visibility to all its efforts—proposed or underway—for increasing both offensive and enabling global strike capabilities. However, DOD has not fully assessed its FYDP to determine the extent to which current development programs, projects, and activities could contribute to global strike capabilities or explained how it plans to link its long-term studies to identify potential offensive weapons systems for global strike that will result in a comprehensive prioritized investment strategy. Ongoing DOD initiatives examining portfolio management approaches to manage selected groupings of investments could help DOD in developing a comprehensive prioritized investment strategy for global strike. Our prior work has shown that developing a long-term, comprehensive investment strategy provides an organization with an important tool in its decision-making process to define direction, establish priorities, assist with current and future budgets, and plan the actions needed to achieve goals.
This strategy is intended to be a dynamic document, which would be refined and updated to adapt to changing circumstances. Such a strategy addresses needs, capabilities gaps, alternatives, and affordability, and includes information on future investment requirements, projected resources, investment priorities and trade-offs, milestones, and funding timelines. It allows an organization to address requirements on an enterprisewide, or departmentwide, basis and provides a means to evaluate the efficacy and severity of capability gaps or, alternatively, areas of redundancy. Without a long-term, comprehensive prioritized investment strategy, it is difficult to fully account for and assess real and potential contributions from other current and future weapons and supporting systems providing similar capabilities, mitigate capability shortfalls and eliminate duplication, and allocate scarce funds among a range of priorities. Various DOD officials we spoke with recognize the need for DOD to have a broad, holistic view of global strike development that captures and gives visibility to all its efforts—proposed or underway—for increasing both offensive and enabling global strike capabilities. DOD, however, has yet to perform a comprehensive assessment to identify and track all potential global-strike-related efforts in its FYDP. An official in DOD’s Office of Program Analysis and Evaluation, who has responsibility for global strike issues, told us that his office tracks several significant FYDP programs that have specific global strike application, such as the Conventional Trident Modification, Common Aero Vehicle, and Falcon programs. The U.S. Strategic Command, according to command officials, informally tracks global-strike-related programs through DOD-wide conferences and periodic meetings with various contractors that are working on global-strike-related technology efforts. Additionally, in February 2007, the U.S. Strategic Command sponsored a prompt global strike technology conference to identify ongoing research, development, test, and evaluation efforts being conducted by the services, DOD laboratories, and defense agencies that would support development of prompt global strike capabilities. While DOD organizations have conducted some assessments of global strike capabilities in the FYDP, they have not conducted a comprehensive assessment of the FYDP to manage and track DOD’s global-strike-related investments in conventional offensive and enabling capabilities. For example, according to an office official who has responsibility for global strike issues, DOD’s Office of Program Analysis and Evaluation has not determined the full range and status of science and technology development efforts with potential global strike application in the FYDP. As we reported in 2005, DOD’s Program Analysis and Evaluation office conducted a limited analysis of the FYDP and related budget documents and internal reviews to identify the range of New Triad spending, including spending for global strike. However, Program Analysis and Evaluation officials told us that their analysis, which has not been updated, did not attempt to capture all of the potential global-strike-related development efforts in the FYDP.
One Program Analysis and Evaluation official said that if a comprehensive assessment of all global-strike-related development efforts were conducted, it might show that existing systems could provide the high volume and compressed time required for prompt global strike with only limited investments in enabling and offensive capabilities. This lack of complete knowledge about how existing systems could be adapted to meet global strike requirements underscores the need for a more holistic assessment of DOD’s efforts related to global strike. The U.S. Strategic Command also has not conducted a comprehensive assessment of global strike investments that included DOD’s FYDP. For example, the Strategic Command’s 2007 prompt global strike technology summit did not fully capture development of offensive global strike technology or enabling capabilities, such as command and control, intelligence, and surveillance and reconnaissance. One of the summit’s purposes was to inform and raise the awareness of prompt global strike technology development at the service laboratories and defense agencies. According to a Strategic Command official, however, the summit focused only on those efforts that could improve offensive kinetic global strike capabilities. Given that DOD has not conducted a comprehensive assessment of its FYDP for global-strike-related investments, we performed an analysis of FYDP program elements in the President’s fiscal year 2008 budget submission to Congress to identify the range of potential global-strike-related research and development efforts. We established criteria and a list of key terms to use in our assessment from a review of descriptions, terms, and characteristics used by DOD in its principal global strike documents, including the Global Strike Joint Capabilities Document and Deterrence Operations Joint Operating Concept, and information obtained in discussions with DOD officials. Such an analysis would need to be conducted in developing a comprehensive prioritized investment strategy for global strike. Other global strike assessments of the FYDP programs, projects, and activities may determine different criteria and methodologies to use and thus may yield different results. In our analysis, we identified 94 FYDP program elements in the fiscal year 2008 budget request that would provide funding for 135 programs, projects, and activities to develop conventional offensive and enabling capabilities having possible application for global strike. Of the 135 programs, projects, and activities we identified in our analysis: 85 would improve offensive capabilities, including efforts to improve kinetic weapons, nonkinetic weapons, and propulsion systems; 41 would improve enabling capabilities such as command, control, communications, and computers and surveillance and reconnaissance systems; and 9 would improve both offensive and enabling capabilities such as Predator development. Also, we determined that 13 of the 135 programs, projects, and activities, such as the Air Force’s Common Aero Vehicle program, were exclusively for research and development of global strike capabilities. The remaining 122 programs, projects, and activities support research and development of offensive and enabling capabilities with potential application for global strike operations.
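The arithmetic of this breakdown is easy to verify. The following minimal sketch is offered purely as an illustration; it encodes only the counts reported above (no program names or budget data beyond what the report states) and checks that the capability categories account for all 135 efforts:

```python
# Tally the 135 global-strike-related programs, projects, and activities
# identified in the FYDP analysis, using only the counts reported above.
category_counts = {
    "offensive (kinetic weapons, nonkinetic weapons, propulsion)": 85,
    "enabling (C4 and surveillance and reconnaissance)": 41,
    "both offensive and enabling (e.g., Predator development)": 9,
}

exclusively_global_strike = 13   # e.g., the Common Aero Vehicle program
broader_application = 122        # potential, but not exclusive, application

total = sum(category_counts.values())
assert total == 135, "capability categories should account for all 135 efforts"
assert exclusively_global_strike + broader_application == total

for category, count in category_counts.items():
    print(f"{count:3d}  {category}")
print(f"{total:3d}  total (funded through 94 FYDP program elements)")
```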
While the programs, projects, and activities we identified in our analysis are largely directed at developing capabilities for a wide range of military needs other than just global strike, these efforts reflect substantial near-term investments of several billion dollars in capabilities that could potentially be used in conducting future global strike operations. Appendix II summarizes the results of our analysis to identify global strike and related development in DOD’s FYDP. DOD officials also have not clearly explained whether DOD plans to integrate the results of its four global strike studies to identify potential weapons systems into a comprehensive prioritized investment strategy. Additionally, none of the four studies would provide a roadmap that shows DOD’s plans and schedules for developing and acquiring the full range of strike and enabling capabilities identified for global strike. For example, both of the Air Force’s analyses of alternatives for prompt global strike and next generation long-range strike will provide investment information as a part of their final products, but that information will be limited to life-cycle costs for the preferred weapons system solution and will not address any needed investments required for enabling capabilities. Similarly, DOD also plans to provide investment information in its Nuclear and Conventional Global Strike Missile Study and the Global Strike Raid Evaluation of Alternatives. However, DOD intends to prepare cost estimates only for capabilities required for the future ballistic missile solutions identified in the Nuclear and Conventional Global Strike Missile Study. Additionally, while DOD plans to review the full range of global-strike-related offensive and enabling capabilities in its Global Strike Raid Evaluation of Alternatives study, it only intends to provide possible investment options for offensive strike capabilities. The use of portfolio management, a best business practice, could help DOD in developing a prioritized investment strategy for global strike. Portfolio management is used to manage selected groupings of investments, or portfolios, at the enterprise level to collectively align investments with strategic goals and performance measures and provide a sound basis to justify the commitment of resources. In our March 2007 report examining the use of the portfolio management approach to improve DOD’s ability to make weapon system investment decisions, we determined that although the military services fight together on the battlefield as a joint force, they identify needs and allocate resources separately, using fragmented decision-making processes that do not allow for an integrated portfolio management approach like that used by successful commercial companies. Through portfolio management, an organization can explicitly assess the trade-offs among competing investment opportunities in terms of their benefits, costs, and risks. Investment decisions can then be made based on a better understanding of what will be gained or lost through the inclusion or exclusion of certain investments. Use of portfolios in investment planning, according to DOD, could improve its efforts to increase interoperability, minimize redundancies and gaps, and maximize capability effectiveness.
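To make the portfolio idea concrete, the following minimal sketch ranks competing investments by a single weighted score of benefit, cost, and risk. The investment names, scores, and weights are entirely hypothetical and are not drawn from any DOD analysis; a real portfolio review would substitute validated requirements and cost estimates, but the structure is the same: one common scoring rule applied across previously stove-piped investment areas.

```python
# Illustrative portfolio trade-off: rank competing investments by a
# weighted score of benefit, cost, and risk. All names and values are
# hypothetical, invented solely to show the mechanics.
investments = [
    # (name, benefit 0-10, cost in $ billions, risk 0-10)
    ("offensive strike system A", 9, 4.0, 7),
    ("ISR enabling upgrade B", 7, 1.5, 3),
    ("command-and-control upgrade C", 6, 1.0, 2),
]

def score(benefit: float, cost: float, risk: float) -> float:
    """Higher is better: reward benefit, penalize cost and risk."""
    return 1.0 * benefit - 0.5 * cost - 0.5 * risk

for name, benefit, cost, risk in sorted(
        investments, key=lambda inv: score(*inv[1:]), reverse=True):
    print(f"{name:30s} score = {score(benefit, cost, risk):5.2f}")
```

With these invented numbers, the two enabling investments outrank the offensive system, which is the kind of cross-portfolio insight the report argues a stove-piped, system-by-system review would never surface.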
As part of its Defense acquisition transformation, DOD is examining the use of portfolio management and has begun two initiatives—concept decision and capability portfolio management—within the past year that focus on the use of portfolio management approaches to manage capability investments in a mission area. If either is successful, these approaches could benefit DOD’s management and tracking of its global strike investments. The concept decision initiative is using four pilot studies that apply portfolio management techniques and other tools to merge information on requirements, technology maturity, and available resources to improve the range of choices for strategic investment decision making. If successful, the pilots would ensure that DOD is making investment choices that balance operational and programmatic risks, are affordable, and can be successfully developed, produced, fielded, and maintained within planned funding levels. DOD plans to complete each of the four pilots by May 2008. The other initiative—capability portfolio management—is to investigate approaches to consider investment trades across previously stove-piped areas, and to better understand the implications of investment decisions across competing priorities. For example, senior decision makers, if the approach is successful, would be able to weigh the implications of additional investments in prompt global strike against investments for joint command and control. Viewing capabilities across the entire portfolio of assets, according to the 2006 Quadrennial Defense Review Report, enables decision makers to make informed choices about how to reallocate resources among previously stove-piped programs and hence to deliver needed capabilities to the joint force more rapidly and efficiently. DOD and U.S. Strategic Command officials involved with the Global Strike Raid Evaluation of Alternatives said that formulating portfolio options and making investment trade-offs for global strike will be difficult, because few of the capabilities are uniquely for global strike. However, DOD officials in the Office of the Secretary of Defense we spoke with stated that managing future global strike development as a portfolio of capabilities could result in more effective development of this mission area. Officials who are involved with the DOD concept decision pilot studies stated that a broader look at all related capabilities would likely increase the extent of improvements that could be made for the mission area when compared with a more limited look at solutions available from a single service or functional area.

While DOD has taken a number of steps to advance its global strike concept and assign responsibilities, its ability to implement the concept will be limited among key stakeholders until it more clearly defines global strike, begins incorporating global strike into joint doctrine, increases outreach and communication activities, and involves stakeholders to a greater extent in joint exercises and other training. Without a complete and clearly articulated concept that is well communicated and practiced with key stakeholders, DOD could encounter difficulties in fully implementing its concept and building the necessary relationships for carrying out global strike operations. DOD has begun to identify a range of potential conventional offensive weapons systems to provide global strike capabilities.
However, without fully assessing the requirements for various enabling capabilities that DOD considers critical to the success of global strike operations and coordinating its efforts to improve these capabilities with potential offensive systems it intends to develop, DOD may not have the enabling capabilities it needs to support new offensive capabilities, if and when they are funded. Similarly, without fully assessing the breadth of capabilities and technologies being developed within its FYDP that potentially contribute to global strike, DOD does not have the complete information it needs to track and manage its capability development efforts and develop a prioritized long-term investment strategy for global strike. We recommend that the Secretary of Defense take the following four actions to strengthen DOD’s efforts to implement its global strike concept and improve communications and mutual understanding within DOD of the scope, range and use of capabilities, and the incidence of global strike operations: Direct the Under Secretary of Defense for Policy, in consultation with the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Chairman of the Joint Chiefs of Staff, and the Commander, U.S. Strategic Command, to develop and approve a common, universally accepted joint definition for “global strike,” and consistently incorporate this definition in global strike documents and joint doctrine. Direct the Chairman of the Joint Chiefs of Staff and the Commander, U.S. Joint Forces Command, in consultation with the Under Secretaries of Defense for Acquisition, Technology, and Logistics and Policy and the Commander, U.S. Strategic Command, to determine possible changes to existing joint doctrine or development of new joint doctrine that may be required to incorporate global strike operations, including the terminology and discussion of training, relationships, responsibilities, and processes for these operations, and initiate any subsequent doctrine development activities. Direct the Commander, U.S. Strategic Command, in consultation with the Chairman of the Joint Chiefs of Staff and the Under Secretaries of Defense for Acquisition, Technology, and Logistics and Policy, to establish an ongoing communications and outreach approach for global strike to help guide DOD’s efforts to promote, educate, and foster acceptance of the concept among the combatant commands, military services, and other DOD organizations. Direct the Commander, U.S. Strategic Command, in consultation with the Commander, U.S. Joint Forces Command, to identify additional opportunities where global strike can be integrated into major joint exercises and other training activities. We further recommend that the Secretary take the following four actions to provide the most complete information on the range of capabilities needed for global strike and to determine an affordable and sustainable balance in its spending for current and future global strike investments. Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in consultation with the Chairman, Joint Chiefs of Staff, the Commander, U.S. Strategic Command, and the Secretaries of the Army, Navy, and Air Force, to conduct a comprehensive assessment of enabling capabilities to identify (1) any specific global strike operational requirements and priorities, (2) when these capabilities are needed to support future offensive strike systems, and (3) what plans DOD has for developing and acquiring these capabilities. 
DOD should link this assessment with other assessments examining potential strike systems for global strike and those being conducted for any specific supporting capability area to ensure that it has the most complete information available when making decisions on future global strike investments. Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in consultation with the Secretaries of the Army, Navy, and Air Force, to provide guidance on how the results of DOD studies to identify potential strike systems for global strike will be integrated into a comprehensive prioritized investment strategy for global strike, including a roadmap that shows the department’s plans and schedules for developing and acquiring offensive strike and enabling capabilities. Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in consultation with the Director, Office of Program Analysis and Evaluation and the Vice Chairman, Joint Chiefs of Staff, to perform a comprehensive review of all capabilities being developed within DOD’s Future Years Defense Program to determine the extent to which these capabilities contribute or can be leveraged for global strike and incorporate the results of this review into the development of a comprehensive prioritized investment strategy for global strike. The investment strategy should be updated, as needed, to adapt to changing circumstances. Direct the Deputy Secretary of Defense, in consultation with the Deputy’s Advisory Working Group, the Under Secretary of Defense for Acquisition, Technology, and Logistics, and the Director, Program Analysis and Evaluation, to determine the appropriateness of using a portfolio management approach for global strike to align its investments with strategic goals and performance measures and provide a sound basis to justify the commitment of resources, develop a prioritized investment strategy, and manage development and acquisition of global strike capabilities. In written comments on a draft of this report, signed by the Director, Joint Advanced Concepts, Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics), DOD agreed with the report and with all eight of its recommendations. The department’s comments are discussed below and are reprinted in appendix III. DOD concurred with our four recommendations intended to strengthen the department’s efforts to implement its global strike concept and improve communications and mutual understanding within DOD of the scope, range, and use of capabilities, and the incidence of global strike operations. Specifically, DOD concurred with our recommendations to (1) develop and approve a common, universally accepted joint definition for “global strike,” and consistently incorporate this definition in global strike documents and joint doctrine; (2) determine possible changes to existing joint doctrine or development of new joint doctrine that may be required to incorporate global strike operations; (3) establish an ongoing communications and outreach approach for global strike; and (4) identify additional opportunities where global strike can be integrated into major joint exercises and other training activities. DOD stated that the Commander, U.S.
Strategic Command, in consultation with the Under Secretary of Defense for Policy, the Under Secretary for Acquisition, Technology, and Logistics, and the Chairman of the Joint Chiefs of Staff, would develop a common, universally accepted concept and definition for “global strike.” DOD also stated that global strike, as a validated and executable concept, had not matured to the point that it is an extant executable capability, which DOD considers a prerequisite for incorporating global strike into joint doctrine. According to the department, when the concept is fully developed and validated, the U.S. Joint Forces Command will prepare the appropriate doctrine or determine possible changes in existing doctrine. While these are positive steps, we continue to believe that DOD can and should take additional steps now to facilitate the development of joint doctrine. For example, DOD should soon establish a timeline for completing and approving its global strike concept and definition and for incorporating the approved concept and definition in department documents. Reaching agreement on the concept and definition is also important as DOD moves ahead with its decisions on new investments in weapons systems and other capabilities for global strike and continues implementation of the concept among key stakeholders. In regard to our recommendations that U.S. Strategic Command establish an ongoing communications and outreach approach for global strike and identify additional opportunities where global strike can be integrated into major joint exercises and other training activities, DOD stated that the socialization of evolving concepts contributes to their maturation and validation and that it is U.S. Strategic Command’s responsibility, with support and assistance from the U.S. Joint Forces Command, to establish its training requirements and objectives for global strike. Considering the different interpretations of global strike we found among combatant command and service officials, we continue to believe that our recommendations, when fully implemented, would strengthen the positive actions currently being taken by the U.S. Strategic Command to conduct outreach and include global strike in major exercises and other training activities; promote greater understanding, involvement, and experience among these key stakeholders; and further DOD’s efforts to implement the global strike concept. In taking actions to implement our recommendations, for example, we believe that the Strategic Command could begin by consulting with combatant command and service stakeholders to identify opportunities to increase and enhance the command’s current outreach activities (e.g., visits, briefings, and education) and include additional global strike segments in major exercises and other training activities. DOD also concurred with our four recommendations intended to provide more complete information on the range of capabilities needed for global strike and to determine an affordable and sustainable balance in its spending for current and future global strike investments.
Specifically, DOD concurred with our recommendations to (1) conduct a comprehensive assessment of enabling capabilities (intelligence collection and dissemination, surveillance and reconnaissance, command and control, communications, and battlefield damage assessment); (2) provide guidance on how the results of its studies to identify potential strike systems for global strike would be integrated into a comprehensive prioritized investment strategy for global strike; (3) perform a comprehensive review of all capabilities being developed within DOD’s FYDP to determine the extent to which these capabilities contribute or can be leveraged for global strike; and (4) determine the appropriateness of using a portfolio management approach for global strike. DOD’s responses to our recommendations largely focus on conventional prompt global strike, which is a subset of the broader global strike mission area. In regard to enabling capabilities, DOD stated that its departmentwide capability portfolio management provides the means to optimize capabilities through the integration, coordination, and synchronization of department investments. Managers of the individual capability portfolios are responsible for identifying those aspects of their portfolios that are connected to more than one portfolio because of the breadth and depth of mission areas such as prompt global strike. According to DOD, as part of its comprehensive assessment for conventional prompt global strike, it intends to include ongoing and follow-on studies, such as the Air Force-led prompt global strike analysis of alternatives, in identifying operational requirements and priorities to determine when they are needed to support development of future offensive strike systems. DOD also stated that it plans to use its fiscal year 2008 Defense-wide Research, Development, Test, and Evaluation account for prompt global strike to provide limited funding for mission-enabling capabilities. In regard to guidance for integrating the results of its long-term global strike studies, DOD stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics will provide guidance for developing a comprehensive prioritized investment strategy and roadmap. It stated that for conventional prompt global strike in fiscal year 2008 the department will pursue an integrated approach in crafting this investment strategy, which will emphasize the application of ongoing and follow-on studies, including the Air Force-led prompt global strike analysis of alternatives and the congressionally mandated National Research Council’s Committee on Conventional Prompt Global Strike Capability report provided by the National Academy of Sciences, and reference the evolving operational requirements and constraints described by U.S. Strategic Command and validated by the Joint Staff. DOD stated that its effort will also emphasize full utilization of, and collaboration with, separately funded programs throughout DOD and the Department of Energy that potentially support conventional prompt global strike, as well as cross-service and agency transparency and collaboration on all technology and experimentation matters. Concerning our recommendation to identify FYDP capabilities that could contribute or be leveraged for global strike, DOD stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics would lead a comprehensive, capability-based review and prioritization of the global strike investment strategy within the FYDP.
According to DOD, the goal of the FYDP for fiscal years 2008 through 2013 is to apply, advance, and demonstrate engineering for the selection and development of materiel solutions for the conventional prompt global strike mission area so that individual service acquisition programs can be funded and executed. DOD stated that it plans to submit a conventional prompt global strike research and development testing plan to Congress in April 2008, as required by the fiscal year 2008 National Defense Authorization Act. This plan will describe the strategy and investment needed over the next 5 years to develop and field full-mission prototypes. And lastly, in regard to our recommendation on portfolio management, DOD stated that with the creation of the Defense-wide Research, Development, Test, and Evaluation program element for prompt global strike in the President’s 2009 budget, a portfolio management approach is being initiated. DOD further stated that the department fully supports using a portfolio management approach for conventional prompt global strike to align its investments with strategic goals and performance measures and provide a sound basis to justify the commitment of resources. The specific actions that DOD described in its comments for these four recommendations are positive steps in providing greater focus, transparency, and accountability for the department’s efforts to increase global strike capabilities.

We are sending electronic copies of this report to interested congressional committees; the Secretary of Defense; the Chairman, Joint Chiefs of Staff; and the Commander, U.S. Strategic Command. We will also make electronic copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-4402 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix IV.

To identify whether the Department of Defense (DOD) has clearly defined and instilled a common understanding and approach to its global strike mission, we reviewed relevant global strike concept documents, studies, reports, briefings, and other pertinent documents to determine the scope, capabilities, range of operations, types of targets, doctrine, and other factors that make up the global strike concept and identify the definitions that are used throughout DOD to define the term “global strike.” For example, we reviewed the April 2006 Global Strike Joint Capabilities Document, a key document that identifies the set of capabilities required across all functional areas to accomplish the global strike mission, to obtain information on current global strike capabilities and shortfalls. Additionally, we reviewed various DOD guidance documents to identify assigned roles and responsibilities for global strike, including concept development, implementation, and operations. We reviewed, for instance, the most recent 2006 Unified Command Plan, which establishes the missions and responsibilities, geographic areas of responsibility, and functions for the commanders of the combatant commands, to identify the roles and responsibilities for the U.S. Strategic Command and the respective geographic combatant commands related to global strike operations. We also met with officials from the Office of the Secretary of Defense; U.S.
Joint Forces Command; U.S. Central Command; U.S. Special Operations Command; U.S. Pacific Command; U.S. Strategic Command; the Air Force, Army, and Navy headquarters and commands; and Defense Threat Reduction Agency to obtain information on various global strike areas such as roles and responsibilities, the global strike concept and its implementation, and joint doctrine. With these officials, particularly the geographic combatant commands, we also discussed their participation and inputs into relevant global strike exercises, training, and related educational activities, as well as the communication strategy used by the U.S. Strategic Command to explain and promote understanding of global strike operations and its mission responsibilities. Additionally, we met with officials from the U.S. Strategic Command to discuss challenges faced by the command and DOD in developing and implementing the global strike concept and communicating the concept to the combatant commands and other relevant entities within DOD. To assess the extent to which DOD has assessed and developed capabilities needed for global strike, we reviewed the study plans, supporting and relevant documentation, and final reports, if available, for DOD’s four principal global strike assessments—Next Generation Long-Range Strike Analysis of Alternatives; Nuclear and Conventional Global Strike Missile Study; Prompt Global Strike Analysis of Alternatives; and Global Strike Raid Evaluation of Alternatives—to identify potential conventional offensive strike weapons systems it may need in the near, mid, and long term. We discussed these assessments with officials at the Air Combat Command, U.S. Strategic Command, U.S. Air Force headquarters, Air Force Space Command, Joint Staff, and other lead and supporting organizations that were participants or had knowledge about the assessments. In discussing the ongoing Prompt Global Strike Analysis of Alternatives, for example, with officials at the Air Force Space Command at Colorado Springs, Colorado, we obtained documentation of the assessment, including its methodology, scope, assumptions, and schedule, as well as the organizations involved and the status of work to date. For each of the four major studies, we also examined the extent to which DOD has considered the requirements for enabling capabilities, such as intelligence and command and control, and their importance in achieving desired mission effectiveness. We reviewed studies and assessments on enabling capabilities from various organizations such as RAND Corporation, the Air Force, the Defense Intelligence Agency, the Defense Threat Reduction Agency, and U.S. Strategic Command, and discussed the information with officials from each of these organizations. We also reviewed our prior work, including our recent report on DOD’s approach to managing requirements for intelligence, surveillance, and reconnaissance capabilities, to determine how DOD has coordinated and integrated its efforts to improve enabling capabilities. Additionally, we reviewed the Defense Science Board’s 2004 report on Future Strategic Strike Forces to obtain its assessment of enabling capabilities requirements and recommendations for future strategic strike systems. In our discussions with officials at various combatant commands—such as U.S. Strategic Command, U.S.
Pacific Command, the Defense Intelligence Agency, the Defense Threat Reduction Agency, and military services—we obtained information on the roles and requirements for enabling capabilities in support of global strike systems and on the availability of and shortfalls in these capabilities. To assess the extent to which DOD has identified the funding requirements and developed an investment strategy for acquiring new global strike capabilities, we obtained and analyzed information and interviewed officials within the Office of the Secretary of Defense (including the Office of Program Analysis and Evaluation, the Defense Science Board, the Hypersonics Joint Technology Office, and the Under Secretary of Defense for Acquisition, Technology, and Logistics), the Joint Chiefs of Staff, and U.S. Strategic Command. We documented DOD’s research and development efforts with possible application to global strike and investment information provided in ongoing and completed studies on potential global strike weapons systems. Additionally, we reviewed reports and studies and interviewed officials at the Joint Chiefs of Staff, the Defense Science Board, the Under Secretary of Defense for Acquisition, Technology, and Logistics, and GAO to determine how DOD initiatives, particularly for portfolio management, could be used to manage global strike investments. We also obtained information on DOD’s efforts to identify funding requirements and develop an investment strategy for global strike. We conducted an analysis of the Future Years Defense Program (FYDP) that supports the President’s fiscal year 2008 budget submission to Congress to determine the range of programs, projects, and activities within various research and development program elements in the FYDP that could have potential application for improved conventional global-strike-related capabilities. To establish criteria and create a list of key terms to use in conducting our assessment, we reviewed the descriptions, terms, and characteristics used by DOD in its principal documents describing global strike characteristics, including the Global Strike Joint Capabilities Document, Global Strike Joint Integrating Concept, and Deterrence Operations Joint Operating Concept, and information obtained in discussions with knowledgeable DOD, combatant command, defense agency, and service officials. We then reviewed supporting research and development budget submission documents from all the military services, the Office of the Secretary of Defense, two defense agencies, and Special Operations Command. We also discussed our analysis with an official from DOD’s Office of Program Analysis and Evaluation, who generally concurred that our methodology and results were sound and reasonable. Other global strike assessments of the FYDP programs, projects, and activities may determine different criteria and methodologies to use and, hence, may yield different results. Our assessment also does not include those programs, projects, and activities in any classified program elements or data from nuclear systems development. It also includes some, but not all, nonkinetic capabilities that could contribute to improving global strike. We conducted this performance audit from November 2006 to February 2008 in accordance with generally accepted government auditing standards. In conducting our work, we contacted officials at several DOD organizations and agencies; joint combatant and service commands; and think-tank organizations.
Table 3 shows the organizations and offices we contacted during our review.

We conducted an analysis of the Future Years Defense Program (FYDP) that supports the President’s fiscal year 2008 budget submission to Congress to determine the range of programs, projects, and activities within various research and development program elements in the FYDP that could have potential application for improved conventional global-strike-related capabilities. We established criteria and a list of key terms to use in our assessment from a review of descriptions, terms, and characteristics used by the Department of Defense (DOD) in its principal global strike documents, including the Global Strike Joint Capabilities Document and Deterrence Operations Joint Operating Concept, and information obtained in discussions with DOD officials. While our methodology and results were discussed with a DOD Office of Program Analysis and Evaluation official and were determined to be reasonable and relevant, other global strike assessments of the FYDP programs, projects, and activities may determine different criteria and methodologies to use and therefore may yield different results. Additionally, our assessment does not include those programs, projects, and activities in any classified program elements or data from nuclear systems development. It also includes some, but not all, nonkinetic capabilities that could contribute to improving global strike. Our analysis of research and development budget submission documents from a number of DOD organizations identified 94 FYDP program elements in the fiscal year 2008 budget request related to global strike. The 94 FYDP program elements provide funding for 135 programs, projects, and activities that are developing conventional offensive strike and enabling capabilities that could contribute to improved global strike capabilities. Of the 135 programs, projects, and activities identified in our analysis: 85 would improve offensive capabilities, including efforts to improve kinetic weapons, nonkinetic weapons, and propulsion systems; 41 would improve enabling capabilities such as (1) command, control, communications, and computers and (2) surveillance and reconnaissance systems; and 9 would improve both offensive and enabling capabilities such as Predator development. Table 4 summarizes the results of our analysis to identify global strike and related development by category and type of offensive, enabling, or multiple capabilities in DOD’s FYDP. Of the 135 programs, projects, and activities, we determined that 13, such as the Air Force’s Common Aero Vehicle, were exclusively for research and development of global strike capabilities. The remaining 122 programs, projects, and activities support research and development of offensive and enabling capabilities that were not specifically for global strike but had potential application for global strike operations. In conducting our analysis, we reviewed the research and development budget submissions from the Departments of the Air Force, Navy, and Army; Office of the Secretary of Defense; Defense Advanced Research Projects Agency; Defense Threat Reduction Agency; and U.S. Special Operations Command. Figure 2 shows that the majority (88) of the 135 research and development programs, projects, and activities we identified were in the budgets of the services, with the Department of the Air Force budget having the largest number (48) among the three services.
The remaining 47 programs, projects, and activities were in the budgets of the Defense Threat Reduction Agency (5); Special Operations Command (6); Office of the Secretary of Defense (17); and Defense Advanced Research Projects Agency (19). The programs, projects, and activities we identified in our analysis are largely directed at developing capabilities for a wide range of military needs other than just global strike, and their associated funding therefore should not be considered DOD’s total spending for global strike. However, these efforts reflect substantial near-term investments of several billion dollars in capabilities that could potentially be used for future global strike operations. For example, DOD plans to spend about $4.8 billion in then-year dollars in fiscal years 2007 through 2009 for the 29 weapon platform programs, projects, and activities we identified, and about $2.6 billion for other offensive capabilities, including kinetic weapons, nonkinetic weapons, and propulsion system programs, projects, and activities, over the same period. Additionally, DOD plans to spend about $3.0 billion in then-year dollars in fiscal years 2007 through 2009 for the 41 programs, projects, and activities we identified to improve enabling capabilities. And lastly, DOD plans to spend about $0.7 billion in then-year dollars for the 9 programs, projects, and activities included in our analysis for multiple capabilities over the period.

In addition to the individual named above, Gwendolyn R. Jaffe, Assistant Director; Lisa M. Canini; Grace A. Coleman; David G. Hubbell; Jason E. Porter, Sr.; and Mark J. Wielgoszynski, Analyst-in-Charge, made key contributions to this report.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Several factors affect the understanding and communication of DOD's global strike concept among key stakeholders, including the extent to which DOD has (1) defined global strike, (2) incorporated global strike into joint doctrine, (3) conducted outreach and communication activities with key stakeholders, and (4) involved stakeholders in joint exercises and other training involving global strike. GAO's prior work examining successful organizational transformations shows the necessity of communicating with stakeholders often and early, with clear and specific objectives on what is to be achieved and what roles are assigned. Without a complete and clearly articulated concept that is well communicated and practiced with key stakeholders, DOD could encounter difficulties in fully implementing its concept and building the necessary relationships for carrying out global strike operations. DOD has under way or has completed several global strike assessments to identify potential conventional offensive strike weapons systems, particularly those for prompt global strike, which would provide capabilities sometime after 2018. However, DOD has not fully assessed the requirements or coordinated improvements for related enabling capabilities that are critical to the planning and execution of successful global strike operations. These critical enabling capabilities include intelligence collection and dissemination; surveillance and reconnaissance; command, control, and communications; and battlefield damage assessment. Furthermore, DOD has not coordinated its efforts to improve these capabilities with the potential offensive systems it intends to develop. Without fully assessing the enabling capabilities required or coordinating with other DOD studies, DOD might not make the best decisions about which enabling capabilities to pursue in meeting global strike requirements. DOD has not yet established a prioritized investment strategy that integrates its efforts to assess global strike options and makes choices among alternatives given the department's long-term fiscal challenges. GAO's prior work has shown that a long-term and comprehensive investment approach is an important tool in an organization's decision-making process to define direction, establish priorities, assist with current and future budgets, and plan the actions needed to achieve goals. While DOD studies and officials recognize a need for a broad, holistic view of global strike development, DOD has not identified and assessed all global-strike-related capabilities and technologies and has not explained how its plans to link long-term studies identifying potential weapons systems will result in a comprehensive, prioritized investment strategy for global strike. |
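To make the key-term screening in the scope and methodology discussion above more concrete, the following is a minimal sketch of how program-element descriptions might be matched against a list of key terms. The terms, program-element numbers, titles, and descriptions are hypothetical illustrations, not DOD's actual criteria or FYDP data.

```python
# Illustrative keyword screening of FYDP-style program-element records.
# All terms and records below are invented for illustration.

GLOBAL_STRIKE_TERMS = [
    "global strike", "prompt global strike", "common aero vehicle",
    "conventional strike", "long-range strike",  # hypothetical key terms
]

program_elements = [
    {"pe": "0603000X", "title": "Example Propulsion Demo",
     "description": "Technology for extended-range conventional strike weapons."},
    {"pe": "0604000Y", "title": "Example C4 Upgrade",
     "description": "Command and control enhancements for theater operations."},
]

def matches_criteria(record, terms=GLOBAL_STRIKE_TERMS):
    """Return the key terms found in a program element's title or description."""
    text = (record["title"] + " " + record["description"]).lower()
    return [t for t in terms if t in text]

for rec in program_elements:
    hits = matches_criteria(rec)
    if hits:
        print(rec["pe"], rec["title"], "-> matched:", hits)
```

In practice, matches like these would only nominate candidates for analyst review; as the methodology notes, different criteria and key terms could yield different results.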
Defense, like the rest of the government and the private sector, is relying on technology to make itself more efficient. The Department is depending more and more on high-performance computers linked together in a vast collection of networks, many of which are themselves connected to the worldwide Internet. Hackers have been exploiting security weaknesses of systems connected to the Internet for years; they have more tools and techniques than ever before, and the number of attacks is growing every day. These attacks, coupled with the rapid growth of and reliance on interconnected computers, have turned cyberspace into a veritable electronic frontier. The need to secure information systems has never been greater, but the task is complex and often difficult to understand. Information systems security is complicated not only by rapid growth in computer use and computer crime, but also by the complexity of computer networks. Most large organizations today, like Defense, have a conglomeration of mainframes, PCs, routers, servers, software, applications, and external connections. In addition, since absolute protection is not feasible, developing effective information systems security involves an often complicated set of trade-offs. Organizations have to consider the (1) type and sensitivity of the information to be protected, (2) vulnerabilities of the computers and networks, (3) various threats, including hackers, thieves, disgruntled employees, competitors, and, in Defense's case, foreign adversaries and spies, (4) countermeasures available to combat the problem, and (5) costs. In managing security risks, organizations must decide how great the risk is to their systems and information, what they are going to do to defend themselves, and what risks they are willing to accept. In most cases, a prudent approach involves selecting an appropriate level of protection and then ensuring that any security breaches that do occur can be effectively detected and countered. This generally means that controls must be established in a number of areas, including, but not limited to: a comprehensive security program with top management commitment, sufficient resources, and clearly assigned roles and responsibilities for those responsible for the program's implementation; clear, consistent, and up-to-date information security policies; vulnerability assessments to identify security weaknesses; awareness training to ensure that computer users understand the security risks associated with networked computers; assurance that systems administrators and information security officials have sufficient time and training to do their jobs properly; cost-effective use of technical and automated security solutions; and a robust incident response capability to detect and react to attacks and to aggressively track and prosecute attackers. The Department of Defense's computer systems are being attacked every day. Although Defense does not know exactly how often hackers try to break into its computers, the Defense Information Systems Agency (DISA) estimates that as many as 250,000 attacks may have occurred last year. According to DISA, the number of attacks has been increasing each year for the past few years, and that trend is expected to continue. Equally worrisome are DISA's internal test results; in assessing vulnerabilities, DISA attacks and successfully penetrates Defense systems 65 percent of the time.
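As a rough illustration of the risk trade-off just described, the sketch below scores hypothetical systems on sensitivity, vulnerability, and threat and flags those whose residual risk exceeds an assumed tolerance. The scales, scoring formula, threshold, and example systems are all invented for illustration; they are not a Defense methodology.

```python
# A minimal sketch of risk-based control selection. All values are hypothetical.

systems = [
    {"name": "research-lab-net", "sensitivity": 5, "vulnerability": 5, "threat": 4,
     "countermeasure_strength": 2},
    {"name": "public-web-server", "sensitivity": 1, "vulnerability": 3, "threat": 5,
     "countermeasure_strength": 4},
]

RISK_TOLERANCE = 40  # hypothetical threshold for acceptable residual risk

def residual_risk(s):
    """Risk grows with sensitivity, vulnerability, and threat (1-5 each)
    and shrinks as countermeasure strength improves."""
    raw = s["sensitivity"] * s["vulnerability"] * s["threat"]
    return raw / s["countermeasure_strength"]

for s in sorted(systems, key=residual_risk, reverse=True):
    risk = residual_risk(s)
    verdict = "needs stronger controls" if risk > RISK_TOLERANCE else "acceptable"
    print(f"{s['name']}: residual risk {risk:.1f} -> {verdict}")
```

The point of such a model is the decision structure rather than the particular numbers: protection is strengthened where residual risk exceeds what the organization is willing to accept, which mirrors the trade-off described above.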
Not all hacker attacks result in actual intrusions into computer systems; some are attempts to obtain information on systems in preparation for future attacks, while others are made by the curious or by those who wish to challenge the Department's computer defenses. For example, Air Force officials at Wright-Patterson Air Force Base told us that, on average, they receive 3,000 to 4,000 attempts to access information each month from countries all around the world. In other cases, however, hackers have intruded into sensitive Defense systems. They have "crashed" entire systems and networks, denying computer service to authorized users and preventing Defense personnel from performing their duties. These are the attacks that warrant the most concern and highlight the need for greater information systems security at Defense. To further demonstrate the seriousness of some of these attacks, I would like to briefly discuss the 1994 hacker attacks, which the Subcommittee asked us to specifically examine, on the Air Force's Rome Laboratory in Rome, New York. This incident demonstrates how easy it is for hackers to gain access to our nation's most important and advanced research. Rome Laboratory is the Air Force's premier command and control research facility; it works on very sensitive research projects such as artificial intelligence and radar guidance. In March and April 1994, a British hacker known as "Datastream Cowboy" and another hacker called "Kuji" (hackers commonly use nicknames or "handles" to conceal their real identities) attacked Rome Laboratory's computer systems over 150 times. To make tracing their attacks more difficult, the hackers wove their way through international phone switches to a computer modem in Manhattan. The two hackers used fairly common hacker techniques, including loading "Trojan horses" and "sniffer" programs, to break into the lab's systems. Trojan horses are programs that, when called by authorized users, perform useful functions but also perform unauthorized functions, often usurping the privileges of the user. They may also add "backdoors" into a system, which hackers can exploit. Sniffer programs surreptitiously collect information passing through networks, including user identifications and passwords. The hackers took control of the lab's network, ultimately taking all 33 subnetworks off-line for several days. The attacks were first detected by a systems administrator at the lab who noticed an unauthorized file on her system. After determining that their systems were under attack, Rome Laboratory officials notified the Air Force Information Warfare Center and the Air Force Office of Special Investigations. Working together, these Air Force officials regained control of the lab's network and systems. They also monitored the hackers by establishing an "electronic fishbowl" in which they limited the intruders' access to one isolated subnetwork. In the course of the attacks, the hackers copied and stole sensitive data from the lab's systems, including air tasking order research data, which bears on battle tactics, such as where the enemy is located and what targets are to be attacked. The hackers also launched other attacks from the lab's computer systems, gaining access to systems at NASA's Goddard Space Flight Center, Wright-Patterson Air Force Base, and Defense contractors around the country. Datastream Cowboy was caught in Great Britain by Scotland Yard authorities, due in large part to the Air Force's monitoring and investigative efforts. Legal proceedings are still pending against the hacker for illegally using and stealing British telephone service; no charges have been brought against him for breaking into U.S. military computer systems. Kuji was never caught.
Consequently, no one knows what happened to the data stolen from Rome Lab. In general, Defense does not assess the damage from computer attacks because doing so can be expensive, time-consuming, and technically difficult. But in the Rome case, Air Force Information Warfare Center staff estimated that the attacks on the Rome Lab cost the government over half a million dollars. This included costs for time spent to take the lab's systems off the networks, verify the integrity of the systems, install security "patches," and restore computer service. It also included costs for the Office of Special Investigations and Warfare Center personnel deployed to the lab. But the estimate did not include the value of the research data that was compromised by the hackers. Information in general is very difficult to value and appraise. In addition, the value of sensitive Defense data may be very different to an adversary than to the military, and may vary a great deal depending on the adversary. Rome Lab officials told us, however, that if their air tasking order research project had been damaged beyond repair, it would have cost about $4 million and 3 years to reconstruct it. In addition, the Air Force could not determine whether any of the attacks were a threat to national security. It is quite possible that at least one of the hackers may have been working for a foreign country interested in obtaining military research data or learning what the Air Force is working on. While this is only one example of the thousands of attacks Defense experiences each year, it demonstrates the damage caused and the costs incurred to verify sensitive data and patch systems. Defense officials and information systems experts believe that computer attacks are capable of disrupting communications, stealing sensitive information, and threatening our ability to execute military operations. The National Security Agency and others have acknowledged that potential adversaries are attempting to obtain such sensitive information by hacking into military computer systems. Countries today do not have to be military superpowers with large standing armies, fleets of battleships, or squadrons of fighters to gain a competitive edge. Instead, all they really need to steal sensitive data or shut down military computers is a $2,000 computer and modem and a connection to the Internet. Defense officials and information systems security experts believe that over 120 foreign countries are developing information warfare techniques. These techniques could allow our enemies to seize control of or harm sensitive Defense information systems or the public networks on which Defense relies for communications. Terrorists or other adversaries now have the ability to launch untraceable attacks from anywhere in the world. They could infect critical systems, including weapons and command and control systems, with sophisticated computer viruses, potentially causing them to malfunction. They could also prevent our military forces from communicating and disrupt our supply and logistics lines by attacking key Defense systems. Several studies document this looming problem. An October 1994 report entitled Information Architecture for the Battlefield, prepared by the Defense Science Board, underscores that a structured information systems attack could be prepared and exercised by a foreign country or terrorist group under the guise of unstructured hacker-like activity and, thus, could "cripple U.S. operational readiness and military effectiveness." The Board added that "the threat . . .
goes well beyond the Department. Every aspect of modern life is tied to a computer system at some point, and most of these systems are relatively unprotected." Given our dependence on these systems, information warfare has the potential to be an inexpensive but highly effective tactic, one that many countries now plan to use as part of their overall security strategy. Defense has taken steps to strengthen its information systems security, but it has not established a comprehensive and effective security program that gives sufficient priority to protecting its information systems. Some elements of a good security program are in place. Most notably, Defense has implemented a formal information warfare program. DISA is in charge of the program and has developed and begun implementing a plan for protecting against, detecting, and reacting to information systems attacks. DISA established its Global Defensive Information Warfare Control Center and its Automated Systems Security Incident Support Team (ASSIST) in Arlington, Virginia. Both the center and ASSIST provide centrally coordinated, around-the-clock response to attacks and assistance to the entire Department. Each of the military services has established computer emergency response capabilities as well. The Air Force is widely recognized as the leader among the services for having developed considerable experience and technical resources to defend its information systems. However, many of Defense's policies relating to computer systems attacks are outdated and inconsistent. They do not set any standards or require actions for what we and many others believe are important security activities, such as periodic vulnerability assessments, internal reporting of attacks, correction of known vulnerabilities, and damage assessments. In addition, many of the Department's system and network administrators are not adequately trained and do not have enough time to do their jobs properly. Computer users throughout the Department are often unaware of fundamental security practices, such as using sound passwords and protecting them. Further, Defense's efforts to develop automated programs and use other technology to help counter information systems attacks need to be much more aggressive and implemented on a departmentwide basis, rather than in the few current locations. In our report, we recommend that Defense take corrective actions, including ensuring that system and network administrators receive enough time and training to do their jobs properly. Further, we recommend that Defense assess its incident response capability to determine its sufficiency in light of the growing threat, and implement more proactive and aggressive measures to detect systems attacks. The fact that these important elements are missing indicates that Defense has not adequately prioritized the need to protect its information resources. Top management at Defense needs to ensure that sufficient resources are devoted to information security and that corrective measures are successfully implemented. We have testified and reported on information systems weaknesses for several years now. In November 1991, I testified before the Subcommittee on Government Information and Regulation on a group of Dutch hackers breaking into Defense systems. Some of the issues and problems we discussed here today existed then; some have worsened, and new challenges arise daily as technology continues to advance. Without increased attention by Defense top management and continued oversight by the Congress, security weaknesses will continue.
Hackers and our adversaries will keep compromising sensitive Defense systems. That completes my testimony. I'll be happy to answer any questions you or Members of the Subcommittee may have. | GAO discussed information security procedures at the Department of Defense (DOD). GAO noted that: (1) as many as 250,000 attacks on DOD computer systems may have occurred in 1995; (2) in internal vulnerability tests, the Defense Information Systems Agency successfully penetrates DOD computer systems 65 percent of the time; (3) hackers attack DOD computer systems to steal and destroy sensitive data and install reentry devices; (4) the 1994 attacks on the Air Force's Rome Laboratory alone cost the government over half a million dollars, including the cost of disconnecting the system, verifying the system's integrity, installing security patches, and restoring computer services; (5) hackers are capable of disrupting communications and threatening U.S. military operations; (6) DOD faces a huge challenge in protecting its computer systems due to the size of its information infrastructure, increasing amounts of sensitive data, Internet use, and skilled hackers; (7) DOD has established a Global Defensive Information Warfare Control Center and an Automated Systems Security Incident Support Team to provide around-the-clock service and respond to computer attacks; (8) many DOD policies pertaining to computer security are outdated and inconsistent, and DOD system and network administrators are inadequately trained to perform their jobs; and (9) DOD needs to be more aggressive in developing automated programs that respond to computer attacks. |
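As a small defensive illustration related to the "sniffer" programs described in this testimony, administrators sometimes check whether a network interface has been placed in promiscuous mode, which most packet sniffers require. The sketch below is a hypothetical, Linux-specific check, not a tool described in the testimony.

```python
# Defensive sketch: flag Linux network interfaces left in promiscuous mode,
# one classic sign that a packet sniffer may be running on the host.

import os

IFF_PROMISC = 0x100  # promiscuous-mode bit in Linux interface flags

def promiscuous_interfaces(sys_net="/sys/class/net"):
    """Return names of interfaces whose kernel flags include IFF_PROMISC."""
    suspicious = []
    for iface in os.listdir(sys_net):
        flags_path = os.path.join(sys_net, iface, "flags")
        try:
            with open(flags_path) as f:
                flags = int(f.read().strip(), 16)  # file holds hex such as 0x1003
        except (OSError, ValueError):
            continue  # interface vanished or flags unreadable; skip it
        if flags & IFF_PROMISC:
            suspicious.append(iface)
    return suspicious

if __name__ == "__main__":
    found = promiscuous_interfaces()
    if found:
        print("Interfaces in promiscuous mode (possible sniffer):", found)
    else:
        print("No interfaces in promiscuous mode.")
```

A check like this is only one indicator; a robust incident response capability of the kind recommended above would combine many such signals with monitoring and reporting.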
In regulating dual-use exports, the Commerce Department's BIS faces the challenge of weighing various U.S. interests, which can be divergent or even competing, so that U.S. companies can compete globally while the risk of controlled dual-use items falling into the wrong hands is minimized. Under the authority granted in the Export Administration Act (EAA), BIS administers the EAR, which require exporters either to obtain a license from BIS or to determine that government authorization is not needed before exporting controlled items. Even when a license is not required, exporters are required to adhere to the provisions of the EAR when exporting controlled dual-use items. Whether an export license is required depends on multiple factors, including the country of ultimate destination, the individual parties involved in the export, those parties' involvement in proliferation activities, and the planned end use of the item. Dual-use items specified in the EAR's Commerce Control List are controlled for a variety of reasons, including restricting exports that could significantly enhance a country's military potential, preventing exports to countries that sponsor terrorism, and limiting the proliferation of chemical, biological, and nuclear weapons and their delivery systems. The U.S. government controls many of these items under its commitments to multilateral export control regimes, which are voluntary agreements among supplier countries that seek to restrict trade in sensitive technologies to peaceful purposes. For those exports requiring a license, Executive Order 12981 governs the dual-use license application review process and establishes time frames for each step in the review process (see fig. 1). One of the first steps in the license application review process is the screening of parties on the application, such as the planned exporter or end user, against BIS's internal watchlist to identify ineligible parties or parties that warrant closer scrutiny. Neither the EAA nor the EAR provide specific criteria as to which parties are to be included on the watchlist. However, under the EAR, BIS may deny export privileges to persons convicted of export violations, and the watchlist serves as a mechanism for identifying parties that have been denied exporting privileges. This screening process can also serve as a tool for identifying proposed end users sanctioned for terrorist activities and, therefore, ineligible to receive certain dual-use items. BIS has the discretion to add other parties to the watchlist. A match between the watchlist and a party on an application does not necessarily mean that the application will be denied, but it can trigger additional scrutiny by BIS officials, including BIS enforcement officials, during the license application review process. While BIS is responsible for administering the dual-use export control system and licensing dual-use exports, other federal agencies play active roles. As provided for under Executive Order 12981, the Departments of Defense, Energy, and State have the authority to review any export license applications submitted to BIS. These departments specify, through delegations of authority to BIS, the categories of applications that they want to review based, for example, on the item to be exported. License applications can also be referred to the Central Intelligence Agency (CIA) for review. After reviewing an application, the agencies are to provide the BIS licensing officer with a recommendation to approve or deny the application.
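The watchlist screening step described above can be illustrated with a minimal sketch. Everything here is hypothetical: the field names, parties, and watchlist entries are invented, and real BIS screening runs inside its licensing system rather than over a simple dictionary. The sketch deliberately screens every party on the application, including any listed in a catch-all additional-information field, a point that matters later in this report.

```python
# Illustrative watchlist screening over a simplified application record.
# All names and fields below are hypothetical.

WATCHLIST = {"example trading co", "j. doe"}  # hypothetical watchlist entries

application = {
    "applicant": "Acme Exports Inc",
    "end_user": "Example Trading Co",
    "intermediate_consignee": "Global Freight Ltd",
    "additional_information": ["J. Doe", "Another Party LLC"],
}

def normalize(name):
    """Lowercase and collapse whitespace so near-identical names compare equal."""
    return " ".join(name.lower().split())

def screen_application(app, watchlist=WATCHLIST):
    """Return (field, party) pairs on the application that match the watchlist."""
    hits = []
    for field, value in app.items():
        parties = value if isinstance(value, list) else [value]
        for party in parties:
            if normalize(party) in watchlist:
                hits.append((field, party))
    return hits

for field, party in screen_application(application):
    print(f"Watchlist match in '{field}': {party} -> flag for closer review")
```

Consistent with the process described above, a match would flag the application for additional scrutiny rather than result in an automatic denial.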
In addition to reviewing license applications, the Defense, Energy, and State Departments are also involved in the regulatory process. Before changes are made to the EAR and the Commerce Control List, such as the addition of an item to the list, proposals are reviewed through an interagency review process. BIS is responsible for issuing the regulatory changes related to dual-use exports. For fiscal year 2005, BIS had a budget of $67.5 million, of which $33.9 million was for the administration of the export control system. Of the 414 positions at BIS in fiscal year 2005, 48 were licensing officers. These officers are responsible for developing the Commerce Department position as to whether an application should be approved and responding to exporter requests for commodity classifications, as well as performing other duties related to administering the dual-use export control system. BIS has not systematically evaluated the overall effectiveness and efficiency of the system to determine whether its stated goal of protecting U.S. national security and economic interests is being achieved. Specifically, it has not comprehensively analyzed key data on actual dual-use exports, including unlicensed exports, which represent the majority of exports subject to its controls. Further, contrary to what is called for under government management standards, BIS has not established performance measures to assess how effectively the system is protecting U.S. interests in the existing security and economic environment. While BIS has established some measures related to the system's efficiency, those measures focus on narrow aspects of the licensing process. BIS officials also rely on intelligence reports and meetings with industry officials to provide insight into how the system is operating. After the events of September 2001, BIS conducted an ad hoc review of the system to determine whether changes were needed. According to BIS officials, no fundamental changes to the system were needed, but they cited the review as the basis for some adjustments, primarily related to controls on chemical and biological agents. However, because BIS did not document its review, we could not assess the sufficiency of the review and the resulting changes. In managing the dual-use export control system, BIS has not conducted comprehensive analyses of available data on items under its control that have been exported. According to BIS officials, they recently began conducting limited analyses of export data to evaluate the potential effects of proposed regulatory changes on U.S. industry. While BIS is cognizant of dual-use exports authorized through the license application review process, it has not analyzed export data to determine the extent to which approved licenses resulted in actual exports. BIS also does not routinely analyze data on the items and destinations for unlicensed exports, which represent the majority of exports subject to BIS's controls. BIS has not established measures to assess whether it is effectively achieving its goal of protecting national security and economic interests. Under the performance management framework established by the Government Performance and Results Act of 1993, federal agencies are to develop objective performance measures for assessing how well they are achieving their goals over time. These measures should focus on an agency's outcomes as opposed to its processes. BIS's lack of effectiveness measures was noted in a 2005 review by the Office of Management and Budget (OMB).
In response to OMB's review, BIS indicated plans for developing measures to assess the system's effects on national security and economic interests in consultation with the other agencies involved in the export control system. BIS officials informed us that their attempt to devise effectiveness measures did not succeed due to a lack of cooperation and that they opted not to independently pursue the development of effectiveness measures. Without measures of effectiveness to assess its performance, BIS relies on measures related to the efficiency of the dual-use export control system. These efficiency-related measures generally focus on the first steps in the license application review process: how long it takes to review a license application internally and refer an application to another agency. Over the last 3 fiscal years, BIS has reported meeting its licensing-related time frames. However, BIS does not have efficiency-related measures for other steps in the license application review process, such as how quickly a license should be issued or denied once other agencies provide their input, or for the review process as a whole. BIS also does not evaluate the efficiency of other aspects of the system. Most notably, it does not measure whether it is meeting the regulatory time frame for the processing of commodity classification requests, of which there were 5,370 in fiscal year 2005, or about 24 percent of licensing officers' workload (see app. I for additional information on BIS's processing times). BIS officials acknowledged that they have not systematically evaluated the dual-use export control system. Instead, BIS officials informed us that they regularly review intelligence reports and meet with industry officials to gauge how well the system is working. A senior BIS official stated there are no anecdotal indications that the system is not effective. The official added that "it stands to reason" that BIS's controls have limited various parties' access to U.S. dual-use technologies but that it is difficult to determine how the controls are affecting U.S. industry. Also, as evidence of how the system is operating, BIS officials referred us to BIS's annual report on its foreign policy-based controls. This report summarizes various regulatory changes from the previous year and what the newly imposed controls were intended to achieve. However, this report does not contain an assessment of the impact these controls have had on U.S. interests. To address its lack of evaluations, BIS officials informed us that they are in the process of establishing an Office of Technology Evaluation. BIS is hiring analysts to evaluate topics including how dual-use items should be controlled and how export controls have affected industry. Absent systematic evaluations, BIS conducted an ad hoc review after the September 2001 attacks to determine what changes, if any, needed to be made to the system in light of the new security environment. However, according to BIS officials, they did not produce a report or other documentation regarding their review. Therefore, we could not assess the validity or sufficiency of BIS's review and the resulting changes. BIS officials told us they determined that, other than some adjustments to its controls, no fundamental changes to the system were needed because they already had controls and procedures in place to deny terrorists access to dual-use technologies.
Of the hundreds of regulatory changes made since September 2001, BIS officials identified the following specific changes as stemming from their ad hoc review: establishing a worldwide licensing requirement for exports of certain biological agents; changing the licensing requirement for biological agent fermenters from fermenters larger than 100 liters to those larger than 20 liters; controlling components that can be used in the manufacture of chemical agents; including additional precursors for the development of chemical agents on the Commerce Control List; revising licensing requirements to further restrict U.S. persons from designing, developing, producing, stockpiling, or using chemical or biological weapons; requiring licenses for exports of equipment related to the production of chemical or biological agents to countries that are not members of the Australia Group; imposing controls on exports of unmanned aerial vehicles capable of dispersing more than 20 liters of chemical or biological agents; and adding amorphous silicon plane arrays, which can be used in night vision or thermal imaging equipment, to the Commerce Control List. According to BIS officials, their review did not result in changes to the license application review process after the events of September 2001. However, decisions by other agencies, namely the Energy Department and the CIA, have resulted in BIS referring more license applications to them. Specifically, in response to Energy's request, BIS began referring applications related to missile technologies and chemical or biological agents, in addition to the nuclear-related applications Energy was already reviewing. Similarly, based on discussions between BIS and the CIA, the decision was made to refer more applications to the CIA for review to determine whether foreign parties of concern may be involved in the proposed export (see app. I for information on BIS referral rates). Additionally, in response to the changing security environment after September 2001, BIS reprioritized its enforcement activities. Specifically, BIS enforcement officials are to give highest priority to dual-use export control violations involving the proliferation of weapons of mass destruction, terrorist organizations, and exports for unauthorized military or government uses. Further, senior BIS officials noted that they have made regulatory changes to reflect the dynamic geopolitical environment, such as changing licensing requirements for exports to India, Iraq, Libya, and Syria. BIS's watchlist is intended to facilitate the identification of license applications involving individuals and companies representing an export control concern. However, BIS's watchlist is incomplete, as numerous export control violators and terrorists are not included on the list. Further, BIS's process for screening applications does not ensure that all parties on all applications are screened against the watchlist. As a result, the watchlist's utility in the license application review process is undermined, which increases the risk of dual-use items falling into the wrong hands. BIS's watchlist does not include certain companies, organizations, and individuals that are known entities of export control concern and, therefore, warrant inclusion on the watchlist. Based on our comparison of the watchlist to publicly available U.S. government documents, including ones available through BIS's Web site, we identified 147 parties that had either violated U.S.
export control requirements, been determined to be suspicious end users, or committed acts of terror but were not on BIS's watchlist. BIS officials confirmed that, at the time of our review, the parties we identified were not on BIS's watchlist. Specifically, we identified: 5 export control violators that have been denied dual-use export privileges; 60 companies and individuals that had committed export control violations and were, therefore, barred by the State Department from being involved in the export of defense items; 52 additional companies and individuals that have been investigated, charged, and, in most cases, convicted of export control violations; 2 overseas companies whose legitimacy as end users could not be established by BIS; and 28 organizations identified by the State Department as committing acts of terror. The individuals and companies we identified as not being on the BIS watchlist include those that have exported or attempted to export weapons to terrorist organizations, night vision technologies to embargoed countries, and materials that can be used in biological and missile programs. The terrorist organizations include one that has staged attacks against U.S. and coalition forces in Afghanistan and another that has attacked and abducted large numbers of civilians, including children. BIS's standard for including a party on its watchlist is that the party represents an export control concern. BIS does not have an official definition or explanation as to what constitutes an export control concern. As a result, the decision as to whether a party should be added to the watchlist is left to the judgment of the BIS personnel responsible for maintaining the watchlist. The only specific guidance BIS provides is that parties under investigation by BIS enforcement officials must be added to the watchlist. BIS officials told us that the reasons a company, organization, or individual should be added to the watchlist include previous violations of U.S. export control regulations, inability to determine a party's legitimacy, possible support of international terrorism, and possible involvement with missile programs of concern. The 147 parties we identified fall within these categories. In addition, BIS officials do not regularly review the watchlist to ensure its completeness. BIS officials said they do not conduct periodic checks as to whether particular parties have been added to the list. They also do not compare the BIS watchlist to other federal agencies' lists or databases used for similar purposes to determine whether the BIS watchlist is missing pertinent parties. BIS officials offered several explanations for why the 147 parties were not on the watchlist. First, they acknowledged it was an oversight on their part not to include several of the parties on the watchlist. For example, at least two parties were not added to the watchlist because the BIS personnel involved thought they had been added by someone else. Second, for some of the parties, BIS did not receive information from another agency about export control-related investigations. However, these parties could have been identified through publicly available reports. Third, BIS relies on limited sources to identify parties involved in terrorist activities. The officials explained that their primary source for identifying terrorist organizations is the Treasury Department's public listing of designated terrorists.
While Treasury maintains a list of terrorists, its list is not exhaustive and, therefore, does not include all known terrorist organizations. Finally, BIS officials noted that many of the parties we identified were individuals and that they do not typically add individuals to the watchlist because applications generally contain names of companies. However, we found numerous individuals included on the watchlist, and individuals can and do appear on license applications. BIS's process for screening applications does not ensure that all parties are screened against the watchlist. To screen parties on applications against the watchlist, BIS relies on a computerized process. The computer system recognizes parties that are identified in one of five specified fields and automatically screens the parties identified in those fields against the watchlist. If there are multiple parties, BIS's regulations direct the applicant to list the additional parties in the "Additional Information" field. However, the computer system does not recognize the parties listed in that field, which means those parties are not automatically screened against the watchlist. While BIS officials told us that they may identify applications involving multiple parties and manually screen them against the watchlist, they do not have a systematic means of identifying applications involving parties listed in the "Additional Information" field. As a result, BIS cannot ensure that all parties on all applications have been screened. Based on our review of licensing data for the past 8 years, we identified at least 1,187 applications involving multiple parties that would not have been automatically screened. BIS officials informed us that they are aware of this limitation but have not conducted reviews to determine the number of applications affected. According to BIS officials, since most applications are reviewed by other agencies, the risk of not screening all parties is lessened. However, a senior BIS official acknowledged that by not screening all applications against the BIS watchlist, applications involving parties that are the subject of BIS enforcement investigations would not be identified, because that information resides only on the BIS watchlist. Defense and State officials, to whom most license applications are referred, stated that they do not maintain watchlists for the screening of dual-use export license applications and expect BIS to have already screened all parties before referring applications to them. BIS officials informed us of their plans to develop a new computerized screening system to ensure that all parties on applications are screened against the watchlist. However, the new system will not be operational for several years. In the years since the September 2001 terror attacks, GAO has issued a number of reports identifying weaknesses in the dual-use export control system. The weaknesses identified in many of the prior reports relate to ensuring that export controls on sensitive items protect U.S. interests and are consistent with U.S. law. Some of our recommendations to correct those weaknesses remain unimplemented (see app. II for more detailed information on these reports and the status of recommendations). Among the weaknesses identified in prior GAO reports is the lack of clarity as to which items are controlled and whether they are controlled by the Commerce Department or the State Department.
A lack of clarity as to whether an item is Commerce-controlled or State-controlled increases the risk that defense-related items will be improperly exported and that U.S. interests will be harmed as a result. In most cases, State's controls over arms exports are more restrictive than Commerce's controls over dual-use items. For example, a State-issued license is generally required for arms exports, whereas many dual-use items do not require licenses for export to most destinations. Further, most arms exports to China are prohibited, while dual-use items may be exported to China. In 2002, we reported that BIS had improperly informed exporters through the commodity classification process that their items were subject to Commerce's export control requirements, when in fact the items were subject to State's requirements. BIS made improper determinations because, during the commodity classification process, it rarely obtained input from the Departments of State or Defense on which department had jurisdiction over the items in question. We recommended that the Commerce Department, together with the Departments of State and Defense, develop agreed-upon criteria for determining which classification requests should be referred to the other departments, which would minimize the risk of improper determinations. However, BIS has not implemented our recommendation and continues to refer only a few commodity classifications to the Departments of State and Defense. In fiscal year 2005, BIS processed 5,370 commodity classification requests and referred only 10 to State and Defense. Additionally, in 2001, we reported that export control jurisdiction between the Departments of State and Commerce had not been clearly established for almost 25 percent of the items the U.S. government has agreed to control as part of its commitments to the multilateral Missile Technology Control Regime. The two departments have yet to take action to clarify which department has jurisdiction over these sensitive missile technology items. As a result, the U.S. government has left the determination of jurisdiction to the exporter, who by default can then determine which national policy interests are to be considered and acted upon when defense-related items are exported. BIS has taken actions to address other weaknesses identified in GAO reports. For example, in response to a 2004 GAO report, BIS expanded its licensing requirements for the export of missile technology items to address missile proliferation by nonstate actors. Similarly, BIS implemented GAO's recommendation to require exporters to inform end users in writing of any conditions placed on licenses to help ensure that the end users abide by those restrictions. Exports of dual-use items are important to a strong U.S. economy, but in the wrong hands, they could pose a threat to U.S. security and foreign policy interests. However, BIS has not demonstrated whether the dual-use export control system is achieving its goal of protecting national security and economic interests in the post-September 2001 environment. Without systematic evaluations, BIS cannot readily identify weaknesses in the system and implement corrective measures that allow U.S. companies to compete in the global marketplace while minimizing the risk to other U.S. interests.
Further, the absence of known parties of concern on the BIS watchlist and limitations in the screening process create vulnerabilities and are illustrative of what can happen when there is not an emphasis on evaluating how well a system is operating and taking corrective action to address known deficiencies. Also, the weaknesses and associated risks identified in prior GAO reports will persist until the remaining recommendations are implemented. Until corrective actions are taken, the United States will continue to rely on BIS's management of the dual-use export control system with known vulnerabilities and little assurance that U.S. interests are being protected. To ensure that the dual-use export control system is effective as well as efficient in protecting U.S. interests, we recommend that the Secretary of Commerce direct the Under Secretary for Industry and Security to take the following four actions: identify and obtain data needed to evaluate the system; review existing measures of efficiency to determine their appropriateness and develop measures that address commodity classifications; develop, in consultation with other agencies that participate in the system, measures of effectiveness that provide an objective basis for assessing whether progress is being made in achieving the goal of protecting U.S. interests; and implement a plan for conducting regular assessments of the dual-use export control system to identify weaknesses in the system and corrective actions. To ensure that BIS has a process that effectively identifies parties of concern during the export license application review process, we recommend that the Secretary of Commerce direct the Under Secretary for Industry and Security to take the following three actions: develop criteria for determining which parties should be on the watchlist; implement regular reviews of the watchlist to help ensure its completeness; and establish interim measures for screening all parties until the planned upgrade of the computerized screening system eliminates current technical limitations. To mitigate the risks identified in prior GAO reports related to the dual-use export control system, we recommend that the Secretary of Commerce direct the Under Secretary for Industry and Security to report to Congress on the status of GAO recommendations, the reasons why recommendations have not been implemented, and what other actions, if any, are being taken to address the identified weaknesses. We provided a draft of this report to the Departments of Commerce, Defense, and State. In its comments on the draft, the Commerce Department did not respond to any of our recommendations and disagreed with our findings and characterizations of the U.S. dual-use export control system following the September 2001 terror attacks. The Departments of Defense and State had no comments on the draft report. The Energy Department declined the opportunity to review and comment on the draft report. In introducing its overall comments, the Commerce Department raises concerns regarding the report's scope. Commerce states that we expanded the initial scope of our audit from narrowly looking at BIS's response to the September 2001 terror attacks to the three issues we address in our report. In fact, the scope of our audit has remained the same. To examine BIS's dual-use export control system and whether changes to the system were made, we focused on three specific issues related to how well the system is operating in the post-September 2001 environment.
Based on our examination of these issues, we concluded that there are vulnerabilities in the dual-use export control system and that BIS can provide few assurances that the system is protecting U.S. interests in the current environment. After considering the Commerce Department’s extensive comments, our report’s findings, conclusions, and resulting recommendations remain unchanged. In commenting on our findings, the Commerce Department states that our report presumes BIS must develop a national security strategy to administer the dual-use export control system. Our report does not presume this as our recommendations address the need for BIS to develop performance measures and conduct systematic evaluations for determining the extent to which the system is meeting its stated goal of protecting both national security and economic interests. The Commerce Department further states that BIS represents the “gold standard” for its rigorous process of defining priorities, implementing plans, and measuring success. To support this statement, Commerce lists several actions that BIS has taken since September 2001 and cites BIS’s “Game Plan” as identifying BIS’s priorities and providing a basis for measuring BIS’s performance. However, BIS has not evaluated what effects these actions have had on U.S. interests. Also, the “Game Plan” provided to us at the end of our review did not contain performance measures for assessing how dual-use export controls affect national security or economic interests. Further, OMB determined in its 2005 Program Assessment Rating Tool that BIS lacked measures related to its fundamental purpose. Absent performance measures and systematic evaluations, it is unclear what the basis was for the various actions taken by BIS, what the impact of these actions has been on national security and economic interests, whether these actions are sufficient to protect U.S. interests in the current environment, or how BIS represents the gold standard. The Commerce Department also comments that our report is misleading and does not provide sufficient context for our findings related to BIS’s watchlist. According to Commerce, the 147 parties we identified as not being on the list should be placed in the context of the approximately 50,000 names that are on BIS’s watchlist, and no licenses were issued to the 147 parties. Commerce’s comment does not address our basic point. It was not our intent to identify every party that should be on BIS’s watchlist. Nor did we seek to determine whether licenses were issued to parties not on the watchlist, in part, because BIS’s regulations permit the approval of license applications involving parties on the watchlist. Instead, the point of our finding and our related recommendations is that BIS does not have mechanisms for ensuring a robust watchlist and screening process. To provide additional context, we adjusted the text to reflect the number of names on the watchlist. The Commerce Department also notes that the watchlist is only one check during the license application review process and that there are multiple layers and agencies involved—a fact we address in our report. According to Commerce, the built-in redundancies in the review process minimize the possibility of a party slipping through the cracks. We agree that having multiple layers of review can create an effective system of checks and balances, but only if each agency is fulfilling its responsibilities at each stage in the review. 
The other agencies involved in the process clearly expect BIS to have a robust watchlist screening process. BIS's stated reliance on others to compensate for weaknesses in its watchlist creates gaps in the review process and, therefore, undermines the ability of the system to effectively protect U.S. interests. While the Commerce Department cites some measures BIS has taken recently to refine the watchlist, these measures do not address the weaknesses created by the lack of criteria and reviews of who should be on the watchlist or the technical limitations that result in some parties not being screened against the watchlist. Regarding its implementation of GAO's prior recommendations, the Commerce Department states that BIS has met most of the recommendations and maintains that none of the outstanding recommendations puts BIS's mission at risk. We disagree since BIS has not implemented recommendations that address the most basic aspects of the export control system. Specifically, BIS's failure to implement recommendations that would provide for clear, transparent decisions about export control jurisdiction increases the risk that sensitive defense-related items will be improperly exported and that some exporters will be placed at a competitive disadvantage, undermining BIS's goal of protecting national security and economic interests. The Commerce Department also provided technical comments, which we incorporated into our report as appropriate. Commerce's comments are reprinted in appendix III, along with our supplemental responses. To assess BIS's evaluations of the dual-use export control system's efficiency and effectiveness after the events of September 2001, we compared BIS's annual reports, performance plans, and budget submissions with performance management and internal control standards. These standards call for federal agencies to develop results-oriented goals, measure progress toward achieving those goals, and have procedures that provide reasonable assurances about the agency's effectiveness and efficiency. We also spoke with senior BIS officials to identify evaluations they conducted of the system, particularly those conducted after the 2001 terror attacks, and discussed how those evaluations were conducted. To identify changes made to the system, we interviewed BIS officials and reviewed BIS regulatory notices issued since September 2001. Additionally, we interviewed officials from the CIA and the Departments of Defense, Energy, and State to determine changes to the system based on their participation in the dual-use licensing and regulatory processes. We also examined existing data on the system. Specifically, we analyzed data from BIS's Export Control Automated Support System on applications and commodity classification requests closed between fiscal years 1998 and 2005. To assess data reliability, we performed electronic testing of relevant data elements, interviewed knowledgeable agency officials, and reviewed system documentation. We determined the data were sufficiently reliable for the purposes of our review. In examining the BIS watchlist, we reviewed BIS's internal guidance for adding parties to the watchlist and discussed with BIS officials the various sources and reasons they use to add parties to the watchlist. Using the reasons they identified, we compared BIS's watchlist, dated January 2006, to documents publicly available through U.S. government Web sites to assess the list's completeness.
These documents included BIS's Denied Persons List, Unverified List, and Major Cases List; the State Department's Debarred Parties List and Patterns of Global Terrorism report; and the Homeland Security Department's fact sheet on arms and strategic technologies investigations. We confirmed with BIS officials that the parties we identified were not on the watchlist and discussed the reasons they were excluded. We also discussed BIS's process for screening applications with BIS officials and reviewed BIS's internal guidance. To determine the status of GAO's prior recommendations to correct weaknesses in the system, we identified reports issued between fiscal years 2001 and 2005 regarding the dual-use export control system and their recommendations. We reviewed BIS's regulatory notices to determine whether BIS made regulatory changes in response to GAO's recommendations. We also followed up on the status of recommendations through interviews with Commerce, Defense, and State officials and reviews of supporting documentation they provided. We requested data for fiscal years 2004 and 2005 on actual exports of dual-use items from the Bureau of the Census. As discussed with your staff, we requested the data in October 2005 but, despite multiple attempts to obtain it, did not receive it in time for inclusion in this report. The delays from Census prevented us from reporting on actual dual-use exports as planned. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to interested congressional committees as well as the Secretaries of Commerce, Defense, Energy, and State; the Director, Central Intelligence Agency; the Director, Office of Management and Budget; and the Assistant to the President for National Security Affairs. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or [email protected] if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The number of dual-use export license applications processed by the Department of Commerce's Bureau of Industry and Security (BIS) has increased over the last several years. These applications were generally for the export of items in the following categories: materials, chemicals, microorganisms, and toxins; nuclear materials, facilities, and equipment, and miscellaneous items; telecommunications and information security; and other items subject to BIS's controls but not specified on the Commerce Control List. As shown in figure 2, from fiscal years 1998 through 2005, the number of applications processed increased by over 50 percent. Additionally, BIS has been referring a larger percentage of applications to other agencies for their review. From fiscal year 1998 to 2005, the total percentage of applications referred to other agencies increased from about 85 percent to about 92 percent. As shown in figure 3, the greatest increases were in the percentage of applications referred to the Department of Energy and the Central Intelligence Agency (CIA). After the license application review process is completed, BIS can approve an application, return it without action, or reject it.
The majority of applications processed since fiscal year 1998 have been approved, as shown in figure 4. Although the number of applications processed by BIS increased over the last several years, the overall median processing times have remained relatively stable and consistent with time frames established by executive order, as shown in figure 5. As shown in table 1, there have been changes over the years in the top countries of destination for approved and rejected license applications. However, applications for dual-use exports to China have consistently represented a significant portion of BIS's licensing workload.

As shown in figure 6, referring applications to other agencies increases the time it takes to process license applications. Between fiscal years 1998 and 2005, referred license applications took about 24 more days to process than applications processed solely by BIS. BIS's workload related to commodity classifications has also increased in recent years. As shown in figure 7, the number of commodity classifications almost doubled from fiscal year 1998 to 2005. BIS continues to exceed the 14-day time frame established in the Export Administration Regulations for processing commodity classifications, as shown in figure 8.

Appendix II: Prior GAO Reports on the Dual-Use Export Control System and the Status of Recommendations (Fiscal Years 2001-2004)

Export Controls: System for Controlling Exports of High Performance Computing Is Ineffective (Dec. 18, 2000, GAO-01-10)

Background: Exports of high performance computers exceeding a defined performance threshold require an export license from the Commerce Department. As technological advances in high performance computing occur, it may become necessary to explore other options to maintain the U.S. lead in defense-related technology. As a step in this direction, the National Defense Authorization Act for Fiscal Year 1998 required the Secretary of Defense to assess the cumulative effect of U.S.-granted licenses for exports of computing technologies to countries and entities of concern. It also required information on measures that may be necessary to counter the use of such technologies by entities of concern.

Main issues: The current system for controlling exports of high performance computers is ineffective because it focuses on the performance level of individual computers and does not address the linking or "clustering" of many lower performance computers that can collectively perform at higher levels than current export controls allow. However, the act does not require an assessment of the cumulative effect of exports of unlicensed computers, such as those that can be clustered. The control system is also ineffective because it uses millions of theoretical operations per second as the measure to classify and control high performance computers meant for export; this measure is not a valid means for controlling computing capabilities.

Recommendations and status: GAO recommended that the Commerce Department, in consultation with other relevant agencies, convene a panel of experts to comprehensively assess and report to Congress on ways of addressing the shortcomings of computer export controls. The Commerce Department has implemented this recommendation. GAO also recommended that the Defense Department determine what countermeasures are necessary, if any, to respond to enhancements of the military or proliferation capabilities of countries of concern derived from both licensed and unlicensed high performance computing. The Defense Department has not implemented this recommendation.

Export Controls: State and Commerce Department License Review Times Are Similar (June 1, 2001, GAO-01-528)

Background: The U.S. defense industry and some U.S. and allied government officials have expressed concerns about the amount of time required to process export license applications.

Main issues: In fiscal year 2000, State's average review time for license applications was 46 days, while Commerce's average was 50 days. Variables identified as affecting application processing times include the commodity to be exported and the extent of interagency coordination. Both departments approved more than 80 percent of license applications during fiscal year 2000.

Recommendations and status: No recommendations; not applicable.

Export Controls: Regulatory Change Needed to Comply with Missile Technology Licensing Requirements (May 31, 2001, GAO-01-530)

Background: Concerned about missile proliferation, the United States and several major trading partners in 1987 created an international voluntary agreement, the Missile Technology Control Regime (MTCR), to control the spread of missiles and their related technologies. Congress passed the National Defense Authorization Act for Fiscal Year 1991 to fulfill the U.S. government's MTCR commitments. This act amended the Export Administration Act of 1979, which regulates the export of dual-use items, by requiring a license for all exports of controlled dual-use missile technologies to all countries. The National Defense Authorization Act also amended the Arms Export Control Act, which regulates the export of military items, by giving the State Department the discretion to require licenses or provide licensing exemptions for missile technology exports.

Main issues: The State Department's regulations require licenses for exports of missile technology items to all countries, including Canada, which is consistent with the National Defense Authorization Act. However, the Commerce Department's export regulations are not consistent with the act, as they do not require licenses for the export of controlled missile equipment and technology to Canada.

Recommendations and status: GAO recommended that the Commerce Department seek authority from Congress to specifically permit MTCR items to be exempted from licensing requirements or, if Commerce seeks such a statutory change, revise the Export Administration Regulations to comply with the current statute until the statutory change occurs. These recommendations have not been implemented; however, the Commerce Department has a regulatory change pending that, once implemented, will require licenses for the export of dual-use missile technologies to Canada.

Export Controls: Clarification of Jurisdiction for Missile Technology Items Needed (Oct. 9, 2001, GAO-02-120)

Background: The United States has committed to work with other countries through the MTCR to control the export of missile-related items. The regime is a voluntary agreement among member countries to limit missile proliferation and consists of common export policy guidelines and a list of items to be controlled. In 1990, Congress amended existing export control statutes to strengthen missile-related export controls consistent with U.S. commitments to the regime. Under the amended statutes, the Commerce Department is required to place regime items that are dual-use on its list of controlled items. All other regime items are to appear on the State Department's list of controlled items.

Recommendations and status: GAO recommended that the Departments of Commerce and State jointly review the listing of items included on the MTCR list, determine the appropriate jurisdiction for those items, and revise their respective export control lists to ensure that proposed exports of regime items are subject to the appropriate review process.
The Departments of Commerce and State have not implemented these recommendations despite initially agreeing to do so.

Main issues: The Departments of Commerce and State have not clearly determined which department has jurisdiction over almost 25 percent of the items that the U.S. government agreed to control as part of its regime commitments. The lack of clarity as to which department has jurisdiction over some regime items may lead an exporter to seek a Commerce license for a militarily sensitive item controlled by the State Department. Conversely, an exporter could seek a State license for a Commerce-controlled item. Either way, exporters are left to decide which department should review their exports of missile items and, by default, which policy interests are to be considered in the license review process.

Export Controls: Issues to Consider in Authorizing a New Export Administration Act (Feb. 28, 2002, GAO-02-468T)

Background: The U.S. government's policy regarding exports of sensitive dual-use technologies seeks to balance economic, national security, and foreign policy interests. The Export Administration Act (EAA) of 1979, as amended, has been extended through executive orders and law. Under the act, the President has the authority to control and require licenses for the export of dual-use items, such as nuclear, chemical, biological, missile, or other technologies that may pose a national security or foreign policy concern. In 2002, there were two different bills before the 107th Congress—H.R. 2581 and S. 149—that would enact a new EAA. Neither H.R. 2581 nor S. 149 was enacted.

Main issues: A new EAA should take into consideration the increased globalization of markets and an increasing number of foreign competitors, rapid advances in technologies and products, a growing dependence by the U.S. military on commercially available dual-use items, and heightened threats from terrorism and the proliferation of weapons of mass destruction.

Recommendations and status: No recommendations; not applicable.

Export Controls: Rapid Advances in China's Semiconductor Industry Underscore Need for Fundamental U.S. Policy Review (April 19, 2002, GAO-02-620)

Background: Semiconductor equipment and materials are critical components in everything from automobiles to weapons systems. The U.S. government controls the export of these dual-use items to sensitive destinations, such as China. Exports of semiconductor equipment and materials require a license from the Commerce Department. Other departments, such as Defense and State, assist Commerce in reviewing license applications. The United States is a member of the multilateral Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies.

Main issues: Since 1986, China has narrowed the gap between U.S. and Chinese semiconductor manufacturing technology from approximately 7 years to 2 years or less. China's success in acquiring manufacturing technology from abroad has improved its semiconductor manufacturing facilities, supporting production for more capable weapons systems and advanced consumer electronics. The multilateral Wassenaar Arrangement has not affected China's ability to obtain semiconductor manufacturing equipment because the United States is the only member of this voluntary arrangement that considers China's acquisition of semiconductor manufacturing equipment a cause for concern. Additionally, U.S. government policies and practices to control the export of semiconductor technology to China are unclear and inconsistent, leading to uncertainty among U.S. industry officials about the rationale for some licensing decisions. Furthermore, U.S. agencies have not done the analyses, such as assessing foreign availability of this technology or the cumulative effects of such exports on U.S. national security interests, necessary to justify U.S. policies and practices.

Recommendations and status: GAO recommended that the responsible agencies conduct the analyses needed to justify U.S. policies and practices; develop new export controls, if appropriate, or alternative means for protecting U.S. security interests; and report these efforts to Congress and U.S. industry. After initially disagreeing with these recommendations, the Commerce Department has cited them as the basis for increased resources so it can conduct the recommended analyses.

Export Controls: More Thorough Analysis Needed to Justify Changes in High Performance Computer Controls (Aug. 2, 2002, GAO-02-892)

Background: High performance computers that operate at or above a defined performance threshold, measured in millions of theoretical operations per second, require a Commerce license for export to particular destinations. The President has periodically changed, on the basis of technological advances, the threshold above which licenses are required. The National Defense Authorization Act of 1998 requires that the President report to Congress the justification for changing the control threshold. The report must, at a minimum, (1) address the extent to which high performance computers with capabilities between the established level and the newly proposed level of performance are available from foreign countries, (2) address all potential uses of military significance to which high performance computers between the established level and the newly proposed level could be applied, and (3) assess the impact of such uses on U.S. national security interests.

Main issues: In January 2002, the President announced that the control threshold above which exports of computers to such countries as China, India, and Russia require a license would increase from 85,000 to 190,000 millions of theoretical operations per second. The report to Congress justifying the changes in control thresholds for high performance computers was issued in December 2001 and focused on the availability of such computers. However, the justification did not fully address the requirements of the National Defense Authorization Act of 1998. The December 2001 report did not address several key issues related to the decision to raise the threshold: (1) the unrestricted export of computers with performance capabilities between the old and new thresholds, which will allow countries of concern to obtain computers they have had difficulty constructing on their own; (2) the U.S. government's inability to monitor the end uses of many of the computers it exports; and (3) the multilateral process used to make earlier changes in high performance computer thresholds.

Recommendations and status: No recommendations; not applicable.

Export Controls: Department of Commerce Controls over Transfers of Technology to Foreign Nationals Need Improvement (Sept. 6, 2002, GAO-02-972)

Background: To work with controlled dual-use technologies in the United States, foreign nationals and the firms that employ them must comply with U.S. export control and visa regulations. U.S. firms may be required to obtain what is known as a deemed export license from the Commerce Department before transferring controlled technologies to foreign nationals in the United States. Commerce issues deemed export licenses after consulting with the Defense, Energy, and State Departments. In addition, foreign nationals who are employed by U.S. firms should have an appropriate visa classification, such as an H-1B specialized employment classification.
H-1B visas to foreign nationals residing outside of the United States are issued by the State Department, while the Immigration and Naturalization Service approves requests from foreign nationals in the United States to change their immigration status to H-1B.

Main issues: In fiscal year 2001, Commerce approved 822 deemed export license applications and rejected 3. Most of the approved deemed export licenses allowed foreign nationals from countries of concern to work with advanced computer, electronic, or telecommunication and information security technologies in the United States. To better direct its efforts to detect possible unlicensed deemed exports, in fiscal year 2001 Commerce screened thousands of applications for H-1B and other types of visas submitted by foreign nationals overseas. From these applications, it developed 160 potential cases for follow-up by enforcement staff in the field. However, Commerce did not screen thousands of H-1B change-of-status applications submitted domestically to the Immigration and Naturalization Service for foreign nationals already in the United States. In addition, Commerce could not readily track the disposition of the 160 cases referred to field offices for follow-up because it lacks a system for doing so. Commerce attaches security conditions to almost all licenses to mitigate the risk of providing foreign nationals with controlled dual-use technologies. However, according to senior Commerce officials, their staff do not regularly visit firms to determine whether these conditions are being implemented because of competing priorities, resource constraints, and inherent difficulties in enforcing several conditions.

Recommendations and status: GAO recommended that Commerce use available Immigration and Naturalization Service data to identify foreign nationals potentially subject to deemed export licensing requirements and establish, with the Defense, Energy, and State Departments, a risk-based program to monitor compliance with deemed export license conditions; if the departments conclude that certain security conditions are impractical to enforce, they should jointly develop conditions or alternatives to ensure that deemed exports do not place U.S. national security interests at risk. These recommendations have been implemented.

Export Controls: Processes for Determining Proper Control of Defense-Related Items Need Improvement (Sept. 20, 2002, GAO-02-996)

Background: Companies seeking to export defense-related items are responsible for determining whether those items are regulated by the Commerce Department or the State Department and what the applicable export requirements are. If in doubt about whether an item is Commerce or State controlled, or when requesting a change in jurisdiction, an exporter may request a commodity jurisdiction determination from State. State, which consults with Commerce and Defense, is the only department authorized to change export control jurisdiction. If an exporter knows an item is Commerce-controlled but is uncertain of the export requirements, the exporter can request a commodity classification from Commerce. Commerce may refer classification requests to State and Defense to confirm that an item is Commerce-controlled.

Main issues: The Commerce Department has improperly classified some State-controlled items as Commerce-controlled because it rarely obtains input from Defense and State before making commodity classification determinations. As a result, the U.S. government faces an increased risk that defense items will be exported without the proper level of government review and control to protect national interests. Also, Commerce has not adhered to regulatory time frames for processing classification requests. The Commerce, Defense, and State Departments have added staff to assist with their respective processes. In its implementation of the commodity jurisdiction process, the State Department has not adhered to established time frames, which may discourage companies from requesting jurisdiction determinations. State has also been unable to issue determinations for some items because of interagency disputes occurring outside the process.

Recommendations and status: GAO recommended that Commerce clarify its guidance and develop criteria, with concurrence from the State and Defense Departments, for referring commodity classification requests to those departments and work with State to develop procedures for referring requests that are returned to companies because the items are controlled by State or because they require a commodity jurisdiction review. GAO also recommended that the departments make jurisdiction recommendations and determinations within established time frames and reallocate resources as appropriate. With a limited exception, these recommendations have not been implemented. In responding to our report, the State Department indicated it partially agreed with our recommendations, while the Departments of Commerce and Defense agreed to implement them.

Nonproliferation: Strategy Needed to Strengthen Multilateral Export Control Regimes (Oct. 25, 2002, GAO-03-43)

Background: Multilateral export control regimes are a key policy instrument in the overall U.S. strategy to combat the proliferation of weapons of mass destruction. They are consensus-based, voluntary arrangements of supplier countries that produce technologies useful in developing weapons of mass destruction or conventional weapons. The regimes aim to restrict trade in these technologies to prevent proliferation. The four principal regimes are the Australia Group, which controls chemical and biological weapons proliferation; the MTCR; the Nuclear Suppliers Group; and the Wassenaar Arrangement, which controls conventional weapons and dual-use items and technologies. All four regimes expect members to report denials of export licenses for controlled dual-use items, which provides members with more complete information for reviewing questionable export license applications. The United States is a member of all four regimes.

Recommendations and status: GAO recommended that the State Department, as the U.S. representative to the multilateral regimes, establish a strategy to strengthen these regimes; this strategy should include ways for regime members to implement regime changes to their export controls more consistently and should identify organizational changes that could help reform regime activities. GAO also recommended that the State Department ensure that the United States reports all license application denials to the regimes and establish criteria to assess the effectiveness of the regimes. The State Department has not implemented these recommendations.

Main issues: Weaknesses impede the ability of the multilateral export control regimes to achieve their nonproliferation goals. Regimes often lack even basic information that would allow them to assess whether their actions are having their intended results. The regimes cannot effectively limit or monitor efforts by countries of concern to acquire sensitive technology without more complete and timely reporting of licensing information and without information on when and how members adopt and implement agreed-upon export controls. For example, GAO confirmed that the U.S. government had not reported its denial of 27 export licenses between 1996 and 2002 for items controlled by the Australia Group.
Several obstacles limit the options available to the U.S. government in strengthening the effectiveness of multilateral export control regimes. The requirement to achieve consensus in each regime allows even one member to block action in adopting needed reforms. Because the regimes are voluntary in nature, they cannot enforce members' compliance with regime commitments. For example, Russia exported nuclear fuel to India in a clear violation of its commitments under the Nuclear Suppliers Group, threatening the viability of this regime. The regimes have adapted to changing threats in the past. Their continued ability to do so will determine whether they remain viable in curbing proliferation in the future.

Nonproliferation: Improvements Needed to Better Control Technology Exports for Cruise Missiles and Unmanned Aerial Vehicles (Jan. 23, 2004, GAO-04-175)

Background: Cruise missiles and unmanned aerial vehicles (UAV) pose a growing threat to U.S. national security interests as accurate, inexpensive delivery systems for conventional, chemical, and biological weapons. Exports of cruise missiles and military UAVs by U.S. companies are licensed by the State Department, while government-to-government sales are administered by the Defense Department. Exports of dual-use technologies related to cruise missiles and UAVs are licensed by the Commerce Department.

Main issues: U.S. export control officials find it increasingly difficult to limit or track dual-use items with cruise missile or UAV-related capabilities that can be exported without a license. A gap in dual-use export control authority enables U.S. companies to export certain dual-use items to recipients that are not associated with missile projects or countries listed in the regulations, even if the exporter knows the items might be used to develop cruise missiles or UAVs. The gap results from current "catch-all" regulations that restrict the sale of unlisted dual-use items to certain national missile proliferation projects or countries of concern, but not to nonstate actors such as certain terrorist organizations or individuals. Catch-all controls authorize the government to require an export license for items that are not on control lists but are known or suspected of being intended for use in a missile or weapons of mass destruction program. In addition, the Departments of Commerce, Defense, and State have seldom used their end use monitoring programs to verify compliance with conditions placed on the use of cruise missile, UAV, or related technology exports. For example, Commerce conducted visits to assess the end use of items for about 1 percent of the 2,490 missile-related licenses issued between fiscal years 1998 and 2002. Thus, the U.S. government cannot be confident that recipients are effectively safeguarding equipment in ways that protect U.S. national security and nonproliferation interests.

Recommendations and status: GAO recommended that the Commerce Department assess and report to the Committee on Government Reform on the adequacy of the Export Administration Regulations' catch-all provision to address missile proliferation by nonstate actors; this assessment should indicate ways the provision should be modified. The Commerce Department has addressed this recommendation by revising its licensing requirement for missile technology exports. GAO also recommended that the Commerce, Defense, and State Departments, as a first step, each complete a comprehensive assessment of cruise missile, UAV, and related dual-use technology transfers to determine whether U.S. exporters and foreign end users are complying with the conditions on the transfers, and that each department conduct additional postshipment verification visits on a sample of cruise missile and UAV licenses. While the Commerce Department has taken some actions to address these recommendations, the other departments have not done so.

Export Controls: Post-Shipment Verification Provides Limited Assurance that Dual-Use Items Are Being Properly Used (Jan. 12, 2004, GAO-04-357)

Background: The Commerce Department conducts post-shipment verification (PSV) checks to ensure that dual-use items arrive at their intended destination and are used for the purposes stated in the export license. To conduct PSV checks, Commerce personnel visit foreign companies to verify the use and location of exported items. PSVs serve as one of the primary means of checking whether end users are complying with conditions imposed by the license. Commerce placed conditions on nearly all approved licenses for exports to countries of concern for fiscal years 2000 to 2002.

Main issues: In fiscal years 2000 to 2002, the Commerce Department approved 7,680 licenses for dual-use exports to countries of concern, such as China, India, and Russia. However, we found that during this time Commerce completed PSV checks on only 428 of the dual-use licenses it approved for countries of concern. We identified three key weaknesses in the PSV process that reduce its effectiveness. First, PSVs do not confirm compliance with license conditions because U.S. officials often lack the technical training needed to assess compliance and end users may not be aware of the license conditions by which they are to abide. Second, some countries of concern, most notably China, limit the U.S. government's access to facilities where dual-use items are shipped, making it difficult to conduct a PSV. Third, PSV results have only a limited impact on future licensing decisions. Companies receiving an unfavorable PSV may receive greater scrutiny in future license applications, but licenses for dual-use exports to these companies can still be approved. In addition, according to Commerce officials, past PSV results play only a minor role in future enforcement actions.

Recommendations and status: GAO recommended that Commerce improve technical training for personnel conducting PSV checks to ensure they are able to verify compliance with license conditions, require that personnel conducting PSV checks assess compliance with license conditions, and require that the exporter inform the end user in writing of the license conditions. These recommendations have been implemented.

The following are GAO's supplemental responses to specific points in the Department of Commerce's comments.

1. The scope of our review has remained unchanged. We examined BIS's dual-use export control system and whether changes were made to the system by focusing on three specific issues related to how well the system is operating in the post-September 2001 environment.

2. Our report is not premised on a need for BIS to develop a national security strategy, which is outside of BIS's mission. BIS's stated goal is the protection of national security and economic interests.
In its comments, BIS appears to define "national security interests" in terms of the administration's National Security Strategy, but BIS has not developed performance measures to evaluate or determine whether the dual-use export control system is supporting and furthering that strategy. Commerce's comments also do not address what effects the dual-use export control system has had on U.S. economic interests.

3. The eight specific measures cited in our report are not "samples" of steps taken by BIS. Rather, they represent all of the changes identified by BIS officials as a result of their ad hoc review to determine what changes, if any, should be made to the system after the September 2001 terror attacks.

4. Our report accurately depicts what BIS officials told us regarding the ad hoc review they conducted in the aftermath of the 2001 terror attacks. Given that BIS officials did not document their review, we can neither confirm what the review consisted of nor determine the sufficiency of this review and the resulting changes.

5. Our report acknowledges that BIS made adjustments to its enforcement efforts in response to the changing security environment. Also, GAO is currently conducting a separate review of export control enforcement efforts.

6. Our report identifies the specific changes BIS officials stated were the result of their post-September 2001 ad hoc review and acknowledges that BIS has reprioritized its enforcement efforts and taken other actions as a result of various geopolitical changes. However, without performance measures and systematic evaluations, BIS is not in a position to readily identify weaknesses in the dual-use export control system, implement corrective measures, and determine whether those measures are having the intended effects of protecting U.S. national security and economic interests.

7. Commerce's characterization of BIS's annual foreign policy report is misleading. BIS's annual report summarizes export control changes and describes what those changes were intended to achieve. BIS's report does not contain an assessment of the actual impact foreign policy-based controls have had on U.S. interests.

8. Our report acknowledges that there have been over 100 amendments to the EAR since September 2001. However, based on our review of those amendments, the specific basis for many of these revisions is not clear, and given BIS's lack of evaluations, the impact of these revisions is unknown. Also, it should be noted that many of the regulatory amendments made since September 2001 consisted of administrative changes and technical corrections as opposed to revisions of export requirements for dual-use items.

9. The quotes from senior BIS officials' speeches do not address whether the dual-use export control system is protecting U.S. interests, nor do they provide other evidence that BIS has developed performance measures or conducted systematic evaluations. While these speeches outline BIS's mission and the role of export controls, the lack of performance measures and systematic evaluations precludes a determination as to whether that mission and role are being successfully fulfilled. It is also unclear how changing the bureau's name is an example of a successful adaptation to the current environment. Further, the increased scrutiny of license applications was not the result of BIS's actions, as one of the quotes implies.
As discussed in our report, increases in the referral of license applications resulted from decisions by other agencies involved in the application review process.

10. Absent any documentation to the contrary, particularly when BIS officials repeatedly acknowledged that BIS had not undertaken systematic evaluations, we stand by our finding that BIS has not systematically evaluated the overall effectiveness and efficiency of the dual-use export control system. Regarding BIS's ad hoc post-September 2001 review, we could not assess the validity and sufficiency of the review and resulting changes due to the lack of documentation.

11. Commerce's description of BIS's Game Plan is misleading and inaccurate. First, BIS's mission and priorities as summarized in the Game Plan are not consistent with the mission and goals stated in Commerce's official performance management documents, such as the annual performance plan. The Game Plan may represent BIS's thoughts on how to align activities and priorities in the future, but it does not depict what has been in place since the September 2001 terror attacks. Second, the Game Plan does not contain measures of effectiveness. When we discussed the Game Plan with BIS officials, they acknowledged that they had not developed measures for evaluating how well the dual-use export control system is protecting national security and economic interests.

12. We agree that the development of measures for determining the effectiveness of the dual-use export control system would be difficult. However, BIS's existing performance measures, which focus on processing times, fall far short of government management standards since they do not provide a basis for determining whether the system is protecting U.S. interests.

13. Our report presents BIS's position that it was unable to obtain assistance from other agencies to develop performance measures for assessing the dual-use export control system's effects on national security and economic interests. The two examples of performance measures provided in Commerce's comments do not relate to BIS's administration of the export control system, which was the focus of our review, but rather to BIS's export enforcement efforts and assistance to other countries. Also, it is not clear how these two measures would provide BIS with a basis for determining the security and economic impact of its controls on dual-use exports. Additionally, Commerce's statement that BIS is assigning staff to develop a methodology for evaluating the system's effectiveness indicates that BIS does not yet have a systematic evaluation process in place.

14. Our report discusses that, in the absence of systematic evaluations, BIS officials obtain information from industry to gauge how the dual-use export control system is operating. However, the collection of data from industry does not constitute a measure or evaluation of how the dual-use export control system is affecting U.S. economic interests. Also, BIS officials repeatedly informed us that they do not have measures for determining the impact of dual-use export controls on economic interests.

15. The Office of Management and Budget determined in its 2005 review that BIS lacked measures related to the fundamental purpose of the dual-use export control system.
Given this determination and our own evaluation, as well as BIS's limited measures of efficiency and its lack of comprehensive analyses of which items under its control have actually been exported, BIS is not meeting government performance management standards and, therefore, does not represent the gold standard.

16. We examined the completeness of the watchlist and the thoroughness of BIS's watchlist screening process and found omissions in the list and weaknesses in the process. Our intent was not to determine whether licenses were approved for parties not on the watchlist. As our report explains, a match between an application and the watchlist does not necessarily mean that the application will be denied but that the application will be more closely scrutinized during the license application review process.

17. Our report places BIS's watchlist in the context of the larger license application review process. A process built on multiple layers and multiple agencies is only as strong as its weakest link. Other agencies that participate in the license application review process expect BIS to thoroughly screen all parties on all applications against the watchlist before referring applications to them. Given the omissions we identified in the watchlist and the weakness in the screening process, BIS's watchlist is not serving its intended purpose of helping identify those license applications that warrant additional scrutiny. We identified many of the 147 parties not on the watchlist by using the lists cited in Commerce's comments. While BIS expects exporters to check these publicly available lists, we found that BIS failed to include all of the publicly listed parties on its watchlist. It is reasonable that BIS would focus its licensing and enforcement efforts on the "truly bad actors." However, given that the watchlist is supposed to help BIS identify parties of export control concern, BIS's ability to focus on "bad actors" is undermined by the omissions we identified in the watchlist.

18. The 147 parties we identified should not be regarded as an exhaustive list of every party of export control concern that should be on BIS's watchlist. Our intent was not to identify all parties but rather to evaluate the process that BIS uses to determine which parties should be on the list. Therefore, the 147 parties represent examples that illustrate weaknesses in BIS's management of the watchlist. However, to provide additional context, we revised the text to include the number of names on the BIS watchlist.

19. The measures listed in Commerce's comments do not address the underlying weaknesses we identified or our corrective recommendations.

20. Our report accurately reflects that several, but not all, of GAO's prior recommendations regarding the dual-use export control system have been implemented. BIS's disagreement with the conclusions of GAO's report on China's semiconductor industry does not change the fact that BIS continues to cite that report and its recommendations as justification for requested increases in resources. However, BIS has not implemented the report's recommendations. The continued failure to address GAO's recommendations regarding the commodity classification process and export control jurisdiction places BIS's mission of protecting national security and economic interests at risk.
Improper decisions regarding jurisdiction and the lack of clear jurisdiction create the risk that defense-related items will be exported without the proper level of government review and control to protect national interests. These weaknesses can also result in companies seeking to export similar items under the different controls of the Departments of State and Commerce, which places some companies at a competitive disadvantage.

21. As discussed in our report's scope and methodology, we reviewed BIS's documents, such as its performance plans, that contain BIS's official performance measures. None of these documents contains performance measures related to the processing of commodity classifications. During meetings with BIS officials, they did not identify additional measures for evaluating the system's effectiveness. Also, Commerce's comment is misleading, as our report does not cite BIS statistics on commodity classifications. Our report contains GAO's analyses of BIS's data on commodity classification processing times and shows that BIS has exceeded regulatory processing time frames.

22. We are not revising the graphic because it depicts what can occur in the license application review process under different circumstances.

23. Text revised to further clarify the CIA's role in the license application review process.

24. The examples provided by Commerce are limited to BIS's analyses of licensing data. However, BIS has not comprehensively analyzed data on actual exports, particularly on unlicensed exports, which represent the majority of exports subject to BIS's control.

25. Our report states that Executive Order 12981 provides time frames for the entire license application review process. However, none of BIS's performance measures addresses the timeliness of the entire process. Also, BIS has not reported overall time frames consistently in its annual reports.

26. Our draft report cited changes in BIS's licensing policy for dual-use exports to Iraq as an illustrative example; however, we have revised our report to include the other countries listed in Commerce's comments.

27. Despite Commerce's comment regarding its sources, some of the 147 parties we identified as not being on the watchlist appear on publicly available documents from the State Department's Directorate of Defense Trade Controls and the Homeland Security Department's Immigration and Customs Enforcement.

28. We are not revising the text based on Commerce's comment because our report accurately reflects how the Treasury Department characterizes the list it maintains on individuals and companies.

29. Despite Commerce's comment that it adds individuals to its watchlist, we identified many individuals who were not on the list but should have been.

30. Our report explains that BIS has a regulatory change pending that, once implemented, will address this recommendation from 2001.

31. Commerce's actions regarding production equipment for missile technology items do not resolve the lack of clear jurisdiction between State and Commerce as to which department controls the export of almost 25 percent of the missile technology items the U.S. government agreed to control as part of its commitments to the Missile Technology Control Regime. As a result, GAO's recommendations regarding this matter remain unimplemented.

32. See comment 20.

33. The memorandum contained in Commerce's comments does not address GAO's recommendations that BIS develop criteria, with the concurrence of the State and Defense Departments, for the referral of commodity classification requests and develop procedures for referring other commodity classification requests to the State Department. As a result, GAO's recommendations regarding this matter remain unimplemented.

34. We revised the report text to more clearly reflect BIS's actions.

In addition to the contact named above, Anne-Marie Lasowski, Assistant Director; Johana R. Ayers; Lily Chin; Arthur James, Jr.; Megan Masengale; Margaret B. McDavid; Bradley Terry; Karen Thornton; and Joseph Zamoyta made key contributions to this report.

Related GAO Products

Defense Trade: Arms Export Control Vulnerabilities and Inefficiencies in the Post-9/11 Security Environment. GAO-05-468R. Washington, D.C.: April 7, 2005.
Defense Trade: Arms Export Control System in the Post-9/11 Environment. GAO-05-234. Washington, D.C.: February 16, 2005.
Nonproliferation: Improvements Needed to Better Control Technology Exports for Cruise Missiles and Unmanned Aerial Vehicles. GAO-04-175. Washington, D.C.: January 23, 2004.
Export Controls: Post-Shipment Verification Provides Limited Assurance That Dual-Use Items Are Being Properly Used. GAO-04-357. Washington, D.C.: January 12, 2004.
Nonproliferation: Strategy Needed to Strengthen Multilateral Export Control Regimes. GAO-03-43. Washington, D.C.: October 25, 2002.
Export Controls: Processes for Determining Proper Control of Defense-Related Items Need Improvement. GAO-02-996. Washington, D.C.: September 20, 2002.
Export Controls: Department of Commerce Controls over Transfers of Technology to Foreign Nationals Need Improvement. GAO-02-972. Washington, D.C.: September 6, 2002.
Export Controls: More Thorough Analysis Needed to Justify Changes in High Performance Computer Controls. GAO-02-892. Washington, D.C.: August 2, 2002.
Export Controls: Rapid Advances in China's Semiconductor Industry Underscore Need for Fundamental U.S. Policy Review. GAO-02-620. Washington, D.C.: April 19, 2002.
Export Controls: Issues to Consider in Authorizing a New Export Administration Act. GAO-02-468T. Washington, D.C.: February 28, 2002.
Export Controls: Clarification of Jurisdiction for Missile Technology Items Needed. GAO-02-120. Washington, D.C.: October 9, 2001.
Export Controls: State and Commerce Department License Review Times Are Similar. GAO-01-528. Washington, D.C.: June 1, 2001.
Export Controls: Regulatory Change Needed to Comply with Missile Technology Licensing Requirements. GAO-01-530. Washington, D.C.: May 31, 2001.
Export Controls: Inadequate Justification for Relaxation of Computer Controls Demonstrates Need for Comprehensive Study. GAO-01-534T. Washington, D.C.: March 15, 2001.
Export Controls: System for Controlling Exports of High Performance Computing Is Ineffective. GAO-01-10. Washington, D.C.: December 18, 2000.
Export Controls: Statutory Reporting Requirements for Computers Not Fully Addressed. NSIAD-00-45. Washington, D.C.: November 5, 1999.
Export Controls: Better Interagency Coordination Needed on Satellite Exports. NSIAD-99-182. Washington, D.C.: September 17, 1999.
Export Controls: Change in Licensing Jurisdiction for Commercial Communications Satellites. T-NSIAD-98-222. Washington, D.C.: September 17, 1998.
Export Controls: National Security Issues and Foreign Availability for High Performance Computer Exports. NSIAD-98-200. Washington, D.C.: September 16, 1998.
Export Controls: Issues Related to Commercial Communications Satellites. T-NSIAD-98-208. Washington, D.C.: June 10, 1998.
China: Military Imports From the United States and the European Union Since the 1989 Embargoes. NSIAD-98-176. Washington, D.C.: June 16, 1998.
Export Controls: Change in Export Licensing Jurisdiction for Two Sensitive Dual-Use Items. NSIAD-97-24. Washington, D.C.: January 14, 1997.
Export Controls: Sensitive Machine Tool Exports to China. NSIAD-97-4. Washington, D.C.: November 19, 1996.
Export Controls: Sale of Telecommunications Equipment to China. NSIAD-97-5. Washington, D.C.: November 13, 1996.

Highlights

In regulating exports of dual-use items, which have both commercial and military applications, the Department of Commerce's Bureau of Industry and Security (BIS) seeks to allow U.S. companies to compete globally while minimizing the risk of items falling into the wrong hands. In so doing, BIS faces the challenge of weighing U.S. national security and economic interests, which at times can be divergent or even competing. In light of the September 2001 terror attacks, GAO was asked to examine BIS's dual-use export control system. In response, GAO is reporting on BIS's (1) evaluations of and changes to the system, (2) screening of export license applications against its watchlist, and (3) actions to correct weaknesses previously identified by GAO.

Lack of systematic evaluations. Although BIS made some regulatory and operational changes to the dual-use export control system, it has not systematically evaluated the system to determine whether it is meeting its stated goal of protecting U.S. national security and economic interests. Specifically, BIS has not comprehensively analyzed available data to determine what dual-use items have actually been exported. Further, contrary to government management standards, BIS has not established performance measures that would provide an objective basis for assessing how well the system is protecting U.S. interests. Instead, BIS relies on limited measures of efficiency that focus only on narrow aspects of the license application review process to assess the system's performance. BIS officials use intelligence reports and meetings with industry to gauge how the system is operating. Absent systematic evaluations, BIS conducted an ad hoc review of the system to determine if changes were needed after the events of September 2001. BIS officials determined that no fundamental changes were needed but opted to make some adjustments primarily related to controls on chemical and biological agents. GAO was unable to assess the sufficiency of the review and resulting changes because BIS officials did not document their review.

Omissions in BIS's watchlist. GAO found omissions in the watchlist BIS uses to screen export license applications. This screening, which is part of the license application review process, is intended to identify ineligible parties or parties warranting more scrutiny. The omissions undermine the list's utility, which increases the risk of dual-use exports falling into the wrong hands. GAO identified 147 parties that had violated U.S. export control requirements, had been determined by BIS to be suspicious end users, or had been reported by the State Department as committing acts of terror, but these parties were not on the watchlist of approximately 50,000 names. Reasons for the omissions include a lack of specific criteria as to who should be on the watchlist and BIS's failure to regularly review the list.
In addition, a technical limitation in BIS's computerized screening system results in some parties on license applications not being automatically screened against the watchlist.

Some prior GAO recommendations left unaddressed. BIS has implemented several but not all of GAO's recommendations for ensuring that export controls on sensitive items protect U.S. interests. Among the weaknesses identified in prior GAO reports is the lack of clarity on whether certain items are under BIS's control, which increases the risk of defense-related items being improperly exported. BIS has yet to take corrective action on this matter.
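The completeness check described in this report is, at bottom, a set comparison: every party named on the relevant publicly available lists should also appear on BIS's watchlist. A minimal sketch of that kind of cross-check in Python follows; the file names, the one-name-per-line format, and the normalization rule are illustrative assumptions for the sketch, not a description of BIS's or GAO's actual systems.

    # Illustrative cross-check of a watchlist against public screening lists.
    # File names, the one-name-per-line format, and the normalization rule
    # are assumptions for this sketch, not BIS's or GAO's actual systems.

    def normalize(name: str) -> str:
        """Reduce trivial formatting differences before comparison."""
        return " ".join(name.upper().replace(",", " ").replace(".", " ").split())

    def load_names(path: str) -> set:
        """Read one party name per line from a plain-text file."""
        with open(path, encoding="utf-8") as f:
            return {normalize(line) for line in f if line.strip()}

    def find_omissions(watchlist: set, public_lists: dict) -> dict:
        """Return, for each source list, the parties absent from the watchlist."""
        return {source: sorted(names - watchlist)
                for source, names in public_lists.items()}

    if __name__ == "__main__":
        watchlist = load_names("watchlist.txt")  # hypothetical input files
        public = {
            "Denied Persons List": load_names("denied_persons.txt"),
            "Debarred Parties List": load_names("debarred_parties.txt"),
        }
        for source, missing in find_omissions(watchlist, public).items():
            print(f"{source}: {len(missing)} parties not on the watchlist")

Exact-match screening of this kind also illustrates the fragility the report notes: a party whose name appears in a different form on an application will not match, which is one reason clear criteria and regular reviews of the list matter.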
The 1988 Justice policy on fugitive apprehension, which is still in effect, (1) designates FBI, DEA, and USMS’ apprehension responsibilities, (2) establishes specific conditions for exceptions to these responsibilities, and (3) identifies the types of fugitives that the agencies are responsible for pursuing. Generally, fugitives are considered persons whose whereabouts are unknown and who are being sought because they have been charged with one or more crimes, have failed to appear for a required court action, or have escaped from custody. (See app. II for details on the 1988 policy.) The Attorney General developed the 1988 policy in response to congressional and Justice concerns over long-standing interagency tensions and jurisdictional disputes, particularly between FBI and USMS. These situations were considered to have been adversely affecting the efficiency and effectiveness of fugitive apprehension efforts by these agencies. For example, FBI claimed that USMS’ apprehension efforts were jeopardizing the safety of FBI agents, adversely affecting FBI investigations, and duplicating work done by FBI. (See app. III for a history of FBI and USMS fugitive apprehension responsibilities.) In general, FBI and DEA, as well as other federal law enforcement agencies such as ATF, can pursue fugitives wanted for federal crimes that fall within their jurisdictions. Pursuant to the 1988 policy, however, DEA, according to DEA and USMS officials, usually transfers its responsibilities for drug crime fugitives not caught within 7 days to USMS. USMS generally is responsible for federal offenders who (1) after their initial arrest, fail to appear as required before federal courts, escape from confinement, or violate their probation or parole; (2) are wanted by federal agencies whose agents do not have arrest authority (e.g., Social Security Administration); or (3) are wanted on federal misdemeanor charges. Also, USMS and FBI are the principal Justice agencies responsible for other countries’ fugitives who are believed to be in the United States. OIAP, which was established in November 1993, is headed by a director (currently the FBI director serving dual roles) who is appointed by the Attorney General and is to be staffed with representatives from FBI, DEA, USMS, Immigration and Naturalization Service (INS), and Justice’s Criminal Division. According to FBI and USMS officials, OIAP replaced the Associate Attorney General as the mechanism provided by the 1988 policy to resolve interagency problems involving the apprehension of fugitives. The federal law enforcement agencies we contacted generally require entry of fugitive data into NCIC. This alerts other law enforcement agencies and facilitates fugitive apprehensions. For example, a fugitive wanted by FBI could be apprehended during a routine stop by local police for a traffic violation. An active entry in NCIC represents an open fugitive investigation by the entering agency. Minimally, the agency must have an arrest warrant or notice of escape for the subject and validate annually the fugitive data it has in NCIC. Appendixes IV, V, and VI provide additional information obtained from our analyses of the NCIC database regarding the percentage of federal fugitive entries or cases by agency, the general types of offenses for which federal fugitives were wanted, and the percentage of dangerous federal fugitives by agency. 
The law enforcement agencies' officials we contacted, our analysis of the NCIC wanted persons database, and our review of FBI and USMS internal inspection reports all indicated that there were not extensive interagency coordination problems in fugitive apprehensions.

Officials of the federal agencies we contacted, some of which had a prior history of coordination problems, all opined that, based on their experience, they did not have extensive interagency coordination problems, such as overlapping or duplicate efforts, jurisdictional disputes, or noncooperation with other agencies in the fugitive apprehension area. These officials generally did not have statistics or studies on the extent to which their respective agencies and others were pursuing the same fugitives or on whether their fugitive cases entailed interagency problems. FBI officials, for example, said that while the fugitive area presented numerous opportunities for overlapping, redundant, and sometimes conflicting interests, interagency coordination was generally effective. Noting that any problems they had were generally with USMS, FBI officials said that the instances of problems between the two agencies had been minimal when compared to the number of fugitive investigations conducted by both agencies. USMS officials made similar comments while adding that they had experienced some coordination problems with Treasury's law enforcement agencies.

The overall data maintained by federal agencies on their fugitive caseloads varied. NCIC represented the best source, according to FBI and USMS officials, for obtaining relative comparisons of the number and types of fugitives sought by federal agencies. Our analysis of the NCIC wanted persons database generally confirmed the agency officials' comments in that the overall number of fugitives wanted by two or more agencies, i.e., cases involving overlapping jurisdictions, was not extensive. We determined that the 29,339 active federal fugitive entries in NCIC as of April 6, 1994, represented a total of 28,438 individual fugitives after adjusting for multiple entries for the same fugitive. Of the 28,438 fugitives, 727, or about 2.6 percent, were wanted by 2 or more federal agencies. Of these 727 fugitives, 705 were wanted by 2 agencies, 21 were wanted by 3 agencies, and 1 was wanted by 4 agencies. USMS and FBI were pursuing the most fugitives wanted by more than one agency. Specifically, USMS wanted 633 (about 87 percent) of the 727 fugitives, FBI wanted 316 (about 43 percent), and both FBI and USMS wanted 227 (about 31 percent). The percentages of overlap for the 22,905 fugitives whose records were removed from NCIC in 1992 and for the 23,928 fugitives whose records were removed in 1993 were 1.3 percent and 1.5 percent, respectively.

While the results of our analyses of NCIC are consistent with the contacted agencies' views that interagency problems are not extensive, we could not determine the significance of the fugitive cases we found on NCIC that involved more than one agency. We could not readily identify what, if any, interagency coordination problems these cases involved, including overlapping or duplicate efforts. Nevertheless, if such problems do exist, they could jeopardize fugitive apprehension efforts, endanger law enforcement officials and the general public, and waste limited law enforcement resources.
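The adjustment from 29,339 NCIC entries to 28,438 individual fugitives, and the count of fugitives wanted by two or more agencies, amounts to grouping entries by fugitive and counting distinct originating agencies. A minimal sketch of that computation in Python, assuming each entry carries a fugitive identifier and an originating agency (the field names are illustrative, not NCIC's actual record layout):

    # Illustrative overlap analysis over NCIC-style wanted-person entries.
    # The fugitive_id and agency fields are assumptions for this sketch,
    # not NCIC's actual record layout.
    from collections import defaultdict

    entries = [
        {"fugitive_id": "F0001", "agency": "USMS"},
        {"fugitive_id": "F0001", "agency": "FBI"},   # one fugitive, two agencies
        {"fugitive_id": "F0002", "agency": "DEA"},
        {"fugitive_id": "F0003", "agency": "USMS"},
    ]

    # Group entries so that multiple entries for the same fugitive count once.
    agencies_by_fugitive = defaultdict(set)
    for entry in entries:
        agencies_by_fugitive[entry["fugitive_id"]].add(entry["agency"])

    total_fugitives = len(agencies_by_fugitive)
    multi = {fid: ags for fid, ags in agencies_by_fugitive.items() if len(ags) >= 2}

    print(f"{len(entries)} entries represent {total_fugitives} fugitives;")
    print(f"{len(multi)} ({len(multi) / total_fugitives:.1%}) wanted by 2 or more agencies")

    # Per-agency share of the multi-agency fugitives; shares overlap by design.
    for agency in ("USMS", "FBI", "DEA"):
        count = sum(1 for ags in multi.values() if agency in ags)
        print(f"{agency}: wanted {count} of the {len(multi)} multi-agency fugitives")

Note that per-agency shares of the multi-agency fugitives can sum to more than 100 percent, as in the figures above, because each such fugitive is counted once for every agency seeking him or her.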
We also found no indication of extensive interagency problems in the fugitive apprehension area through our review of FBI and USMS internal inspection reports. Both agencies require periodic internal reviews of their field offices to determine if the offices are effectively, efficiently, and adequately performing their program and administrative responsibilities. While there are some differences between the agencies' approaches, each agency's reviews are to include efforts to determine whether relations with other federal law enforcement agencies are good. For example, FBI reviewers are required to interview local representatives of other federal law enforcement agencies. According to FBI and USMS officials, the resulting reports should identify significant problems and recommendations, if there are any. Documents provided by FBI showed that 19 of 52 inspections of FBI offices during fiscal years 1992 and 1993 had findings in the fugitive area. We reviewed those findings and found none that dealt with interagency problems. Documents provided by USMS showed that there were no findings on fugitive matters in the 12 inspection reports issued on USMS offices in fiscal year 1993.

While the agencies we contacted did not reveal extensive interagency problems, we did identify some problems that have or could have adversely affected efforts to apprehend federal fugitives. The problem areas primarily involved FBI's and USMS' (1) failure to participate on each other's task forces, (2) disagreements over responsibility for prison escapes when a conspiracy may have been involved, and (3) unwillingness at times to cooperate or withdraw from cases where both had separately been asked to assist in finding other countries' fugitives who were suspected of being in the United States. A fourth problem area mentioned by some agencies' officials involved subjects who became USMS fugitives after their initial arrest for violations under the jurisdiction of other agencies. FBI and USMS provided examples claiming that the other's involvement in specific operations and failure to share information jeopardized investigative efforts or required that investigative steps or information be replicated (e.g., records of telephone calls made by known associates of the involved fugitive).

FBI and USMS officials told us that problems of overlapping efforts, disputes, and noncooperation will be corrected through additional interagency agreements and through the interagency planning and coordination that is to occur in each federal judicial district in conjunction with the Justice Department's National Anti-Violent Crime Initiative. Further, they told us in January 1995 that, contrary to when we started our review (July 1993), there recently has been a high state of cooperation and coordination between the two agencies, including the establishment of an interagency working group to address coordination problems. They attributed these changes to the (1) Attorney General and the new heads of FBI and USMS, who have made it clear that interagency duplication, disputes, and noncooperation will not be tolerated; (2) Department of Justice's emphasis on ensuring sound use of its limited law enforcement resources; and (3) Attorney General's establishment of OIAP. Consistent with its charter, OIAP plans to stay abreast of the agencies' efforts to address interagency coordination problems and to intervene if necessary.
Generally, over the last several years, FBI and USMS did not participate in each other’s task forces, which at times, according to FBI and USMS officials, targeted the same cities and fugitives and competed for local police participation. FBI told us that USMS had generally declined to participate in FBI-sponsored “safe streets task forces,” citing insufficient staff resources to participate on a long-term basis. For example, in November 1993, USMS staff were participating in 8 of 107 FBI-sponsored task forces. In contrast, FBI officials said that USMS had not invited FBI to participate in USMS’ fugitive investigative strike teams. These teams operated for short periods and usually involved efforts in several U.S. cities. In commenting on one such USMS effort involving violence-prone fugitives in 58 U.S. cities (“Operation Trident,” 1993), FBI officials said that Trident “created redundancy in fugitive apprehension efforts, presented problems of safety for ‘Trident’ and ‘safe streets task force’ personnel, jeopardized ongoing FBI investigations related to substantive FBI violations and gang investigations....” FBI officials also noted that the 1988 policy does not require that such projects be coordinated and discussed with FBI before they are implemented.

In addition to their strike teams, USMS operated ongoing fugitive task forces jointly with local police in several cities. USMS officials acknowledged that FBI generally had not been invited to participate in the strike teams or task forces, given the general atmosphere of distrust and noncooperation that had existed between the two agencies. They noted, however, that in some locations there was FBI participation due to good relations between the local USMS and FBI offices. The officials cited their “gulf coast task force” (in the Houston, TX, area) to illustrate the problems they experienced with FBI over task forces. According to USMS, FBI (1) was invited to participate as an equal partner but declined to do so and (2) sought unlawful flight warrants for some local fugitives who were already targeted by the task force. If accurate, FBI efforts to obtain such flight warrants would have been inconsistent with the 1988 fugitive policy, which provides that FBI will not seek these types of warrants if USMS is already pursuing the fugitives.

FBI and USMS officials told us that they will discontinue operating independent, redundant fugitive task forces in the same geographic area. USMS officials said that USMS will not conduct any fugitive investigative strike teams unless the Attorney General requests them, and then not without first seeking the participation of FBI and other agencies. The officials also noted that the interagency working group they and other Justice law enforcement agencies established is addressing duplication in the task force area as well as interagency coordination problems in other areas. For example, according to FBI officials, following the working group’s review of apparent overlap between FBI and USMS in the Houston, TX, area, both agencies instructed their respective field offices to work toward consolidating their efforts. Also, they said the interagency planning and coordination that is to occur in each federal district under the Anti-Violent Crime Initiative should provide a basis for determining what fugitive or other law enforcement task forces are needed, given the nature of the violent crime problem in each geographic area and the availability of federal and local law enforcement resources.
In commenting on a draft of this report, the Department of Justice reaffirmed that the issues involving task forces are being addressed by FBI and USMS through the interagency working group. Justice stated that in cities where each agency has an operating task force, efforts are under way to combine resources, and further stated that the implementation of USMS “strike teams” has been discontinued.

FBI and USMS officials noted a disagreement over responsibilities involving prison escapees. Specifically, this disagreement concerned which agency had responsibility when the escape involved a conspiracy charge, i.e., involved persons who helped plan the escape. Our analysis of NCIC data showed that, as of April 6, 1994, USMS wanted 1,680 fugitives for prison escapes. Information was unavailable, however, on how many involved possible conspiracy charges. The 1988 policy did not specifically address the conspiracy aspect of an escape case. USMS officials said that under their interpretation of the 1988 policy, USMS was generally responsible for prison escapes and related conspiracy matters. USMS officials did not view the conspiracy charge as falling within the 1988 policy provision that gave FBI the option of taking responsibility for the escape case if new charges were involved. USMS officials believed that it would be unnecessary and impractical for both agencies to be involved in the same escape case. However, FBI officials believed that FBI was better suited to address conspiracies than USMS and was, under the 1988 policy, responsible for escape conspiracies. The disagreement persisted despite a December 1991 decision by the Deputy Attorney General that FBI would be responsible for escape conspiracies. USMS officials believed that the Deputy Attorney General’s decision was based on a misunderstanding that the 1988 policy limited USMS’ role to actual escapes instead of also including conspiracy matters. Consequently, USMS did not consider the issue resolved and therefore did not change its operation to accommodate the 1991 decision.

Responding to their directors’ mandates to improve cooperation, FBI and USMS officials agreed, in June 1994, on a memorandum of understanding that gave FBI responsibility to investigate conspiracies associated with escapes or escape attempts from federal facilities. USMS would have responsibility for the actual escapee unless the person had not been sentenced and was being investigated by FBI for an additional crime or in connection with an organized crime, terrorism, or national security matter. Also, the new agreement specified that USMS was to be responsible for conspiracies involving escapes or attempted escapes of sentenced federal prisoners housed under contract in state prisons or local jails, unless the situation involved a riot, hostage taking, or loss of life. USMS and FBI also agreed to “fully share information and the fruits of their respective investigations....” The Federal Bureau of Prisons also signed the memorandum of understanding since it would ordinarily be the agency to first discover the attempted or actual escape.

In commenting on a draft of this report, the Department of Justice said that disagreements between FBI and USMS over responsibilities for prison escapees have been mutually resolved through the June 1994 agreement and that the agreement has been successful to date.
The interagency coordination problems experienced by FBI and USMS with foreign countries’ fugitives stemmed in part from each agency’s desire to be responsive to other countries’ requests for assistance in locating their fugitives who were suspected of being in the United States, according to officials from both agencies. The 1988 policy generally assigned responsibility for these fugitives to USMS. Exceptions were when an FBI foreign office (legal attache) was directly contacted by the host country or if the case involved various other special circumstances, e.g., FBI was also seeking a foreign fugitive on an arrest warrant for a U.S. crime. Usually, USMS was to receive cases when countries requested aid through the U.S. National Central Bureau (USNCB). FBI and USMS officials said that problems generally arose when countries made requests or contacts through either USNCB or OIA and also through an FBI legal attache. In these instances, neither USMS nor FBI was willing to let the other take exclusive responsibility once they discovered that the other was involved.

Officials at USMS, FBI, OIA, and USNCB all mentioned the following case to illustrate the types of problems that can occur between FBI and USMS. USMS developed a lead on the possible location (California) of a Swiss national wanted in connection with a robbery in Geneva. Responding to USMS inquiries through USNCB, the Swiss authorities advised USMS that they were not requesting an arrest warrant at that time but would later be requesting that the subject be interviewed and his residence searched, with Swiss police involvement. According to USMS officials, they were subsequently advised by OIA and USNCB that an arrest warrant had been requested and advised by FBI that FBI was working on the case pursuant to a Swiss request made through the FBI legal attache in Switzerland. A difference of opinion existed between USMS and FBI concerning the case and who had jurisdiction; representatives of both agencies met to discuss the matter. According to USMS and USNCB, (1) FBI wanted sole jurisdiction because of the direct contact made by Swiss authorities, even though USMS had been working on the case for about a year; (2) although it was mutually agreed that the two agencies would work on the case jointly, FBI continued its efforts, including interviewing the subject, without coordinating with USMS; (3) FBI did not give timely notice to USMS when FBI and Swiss police subsequently went to the subject’s location in California with an arrest warrant; (4) the arrest warrant could not be served because the subject had apparently fled the state after being earlier interviewed by FBI; and (5) FBI later arrested the subject in Las Vegas, NV, without informing USMS. According to FBI, (1) it maintained liaison with USMS; (2) USMS officials were notified of, but arrived late for, the initial arrest effort by FBI and Swiss police in California; and (3) USMS demonstrated little interest once it was determined that the subject’s location was unknown. Further, according to USMS and OIA, FBI’s aforementioned interview of the subject occurred unexpectedly while FBI was independently conducting a preliminary search for the subject without an arrest warrant.

USMS and FBI officials said problems similar to this example involved only a few cases a year. USMS and FBI officials believed that, given the overall emphasis by their directors and OIAP on improved cooperation, they would avoid further problems in the future.
In addition, OIA and USMS officials told us that OIA in August 1994 established a fugitive unit to coordinate and monitor activities involving other countries’ fugitives or fugitives who had fled the United States. USMS also planned to assign its foreign fugitive coordinator to OIA to further improve coordination. In commenting on a draft of this report, the Department of Justice stated that OIA, in conjunction with FBI and USMS, will ensure that no duplicative efforts are pursued. The Department also stated that FBI and USMS will continue working together to avoid needless overlap and to ensure effective use of resources. The Treasury Department, in commenting on a draft of this report, said that Customs Service and the Internal Revenue Service are supporting the OIA effort with respect to high profile fugitives. Treasury also stated that the Financial Crimes Enforcement Network will assist OIA by searching its databases for leads on fugitives.

According to USMS officials and our analysis of NCIC data, overlapping efforts often involved USMS because a previously arrested offender subsequently became a fugitive based on an obstruction of court charge. These fugitives initially were the responsibility of the law enforcement agencies having jurisdiction over the crimes for which the offenders were earlier arrested. Our analysis of NCIC data showed that 418 fugitives, or about 57 percent of the 727 NCIC fugitives wanted by more than one federal agency on April 6, 1994, involved USMS court obstruction charges. These 418 fugitives made up about 5 percent of the 8,814 fugitives wanted by USMS for court obstruction. USMS officials said that interagency problems in this area often involved offenders wanted by agencies not covered by the 1988 Justice fugitive policy. In particular, problems involved fugitives wanted by USMS for a court obstruction charge who also were wanted by Treasury law enforcement agencies. Our analyses of the 418 NCIC fugitives showed that 194 of them, or about 46 percent, were also wanted by Treasury law enforcement agencies. Of these 194, 179 fugitives were wanted by Treasury agencies for offenses other than court obstruction. However, we could not determine how many of the 179 fugitives were being pursued by Treasury agencies on the basis of their original responsibility for the fugitives and the subsequent court obstruction charge or on the basis of the fugitives being wanted for additional crimes. A USMS official believed that such cases generally did not involve additional crimes, whereas a Customs Service official believed they did.

Treasury and USMS officials said that they usually coordinated their investigations on overlapping cases. However, according to USMS officials, some duplication of effort still occurred because each agency generally conducted its own separate investigation and contacted the other agency only after a lead had proven to be successful. The USMS spokesperson said that the duplicated efforts between USMS and another agency to apprehend the same fugitives were a waste of resources and could have impeded both agencies’ fugitive investigations. Furthermore, problems in this area may be increasing. In December 1993, a Customs Service official told us Customs considered court obstruction fugitives to be primarily USMS’ responsibility. However, in September 1994, USMS officials informed us that Customs Service had recently begun pursuing more of these fugitives.
In November 1994, a Customs Service official told us that Customs was updating its policy guidance on fugitive apprehension and that it would address court obstruction fugitives. USMS officials told us that they planned to resolve their interagency coordination problems in this area through discussions with the involved agencies and, if possible, by securing interagency agreements. If unsuccessful, they planned to seek OIAP’s assistance. ATF, Customs Service, and Secret Service officials subsequently told us that they were cooperating with USMS to resolve interagency coordination problems.

In commenting on a draft of this report, the Treasury Department stated that Customs Service will retain responsibility for court obstruction fugitives, given the general complexity and international nature of Customs’ investigations. We discussed Treasury’s comment with Customs and USMS officials. USMS officials stated that they had met with the Treasury agencies and believed they had reached agreement that court obstruction fugitives in general would be USMS’ primary responsibility. They said that they would be meeting again with the agencies to ensure that there is no disagreement or unresolved issue. A Customs Service official told us that many of the fugitives in question are integral parts of ongoing Customs’ investigations and should continue to be pursued by Customs. He noted that Customs and USMS have a history of good relations and cooperation and that he expects that they will resolve in future discussions any differences they might still have.

We also noted from our NCIC data analysis that, as of April 6, 1994, FBI wanted 140, or about 33 percent, of the 418 fugitives wanted by USMS for obstruction of court charges. Neither USMS nor FBI officials knew precisely why they had such overlapping cases since the 1988 policy defined the circumstances under which either FBI or USMS would assume responsibility for each case. An FBI official said that this overlap was caused, in part, by conflicts between one of their field offices and the local USMS office. However, FBI and USMS officials believed that any problems experienced in this area would be corrected, given the mandates they were under to improve coordination. Although the officials were not specific as to how these problems would be corrected, they did not believe that any systemic or procedural changes would be needed.

The Attorney General established OIAP as a partial response to the Vice President’s National Performance Review task force’s recommendations for improving the coordination and structure of federal law enforcement agencies. OIAP’s overall mission is to (1) improve coordination among Justice’s criminal investigative agencies, (2) reduce interagency duplication, (3) resolve issues where there is overlapping jurisdiction, (4) facilitate better use of investigative resources, and (5) advise the Attorney General on administrative, budgetary, and personnel matters involving these agencies. Among other things, OIAP’s charter specifically called for it to establish procedures for coordinating fugitive apprehensions and to perform other functions in the fugitive area, as necessary, for effective policy coordination and elimination of waste and duplication. An OIAP spokesperson told us that when OIAP was considering how to address the matter of fugitive coordination problems, it agreed to a request by senior FBI and USMS officials to defer OIAP involvement and allow the two agencies to first address these problems.
He also said that fugitive matters were being addressed to some extent as part of the interagency cooperation required in connection with Justice’s Anti-Violent Crime Initiative, particularly the interagency working group. Although the official did not have any specific timeframe, he noted that OIAP’s deference would not continue indefinitely in the absence of positive results and that the agencies are expected to resolve the problems in a reasonable time. He also said that OIAP has stayed abreast of the agencies’ efforts in a variety of ways and will continue to do so.

In addition to its role in ensuring that fugitive matters are properly coordinated among the various Justice law enforcement agencies, OIAP represents a means for Justice to ensure that the current alignment of fugitive apprehension responsibilities among its agencies is the most efficient and effective use of federal law enforcement resources. The alignment of responsibilities has evolved over the years, in part, as a result of efforts to resolve intermittent interagency coordination problems. Consequently, this alignment has led to a division of responsibilities that could cause interagency problems and may or may not represent the most efficient and effective use of resources. This matter has not been systematically examined. Given its charter and with representatives from all Justice criminal investigative agencies, OIAP is positioned to help ensure that the current division of fugitive responsibilities among Justice agencies is well founded and results in the most efficient and effective use of resources. Also, by involving representatives from key non-Justice agencies, OIAP could help to ensure efficient and effective use of limited law enforcement resources and fewer interagency coordination problems across the federal government.

Although the division of fugitive responsibilities as it has evolved may be appropriate, no systematic assessment of the current alignment of these responsibilities has been conducted to determine whether the differences or inconsistencies are well founded and represent the most efficient and effective use of law enforcement resources. For example, under Justice’s 1988 fugitive policy (see app. II), the FBI is responsible for fugitives wanted on arrest warrants for crimes that are within FBI’s jurisdiction and for any of these fugitives who, after arrest, flee prior to adjudication of guilt. On the other hand, DEA may, and usually does, delegate its arrest warrant fugitives not caught within a short period to USMS. However, unlike the FBI, DEA does not have responsibility for any of its postarrest fugitives who flee prior to adjudication of guilt, unless new charges are involved and DEA elects to take responsibility. Other federal agencies generally have retained responsibility for fugitives wanted on their arrest warrants, but, to varying degrees, they have deferred responsibility to USMS for postarrest fugitives. Also, Customs Service agents may, under Customs’ policy, refer any fugitive cases to USMS after passage of a reasonable time.

The law enforcement agencies we contacted generally believed that investigative work and fugitive apprehension are distinct functions. ATF and Customs Service officials acknowledged that fugitive apprehension was secondary to their primary responsibility of conducting investigations.
However, none of the representatives of the agencies, other than DEA, expressed or indicated any interest in giving up their basic responsibility for fugitives prior to the initial apprehension. They believed that apprehension was a logical part of their investigative case preparation responsibilities. They believed that they were most informed about possible locations or known associates that could lead to quick apprehensions. FBI officials noted that their agents were as capable as USMS’ deputy marshals in pursuing fugitives. A Customs Service official stated that, due to the complexity of Customs’ investigations and the sophistication of many of their fugitives, Customs case agents are the best people to pursue Customs’ fugitives, who are often an integral part of a larger Customs’ investigation. The Treasury agencies had mixed views, however, on their responsibilities for arrested offenders who subsequently become fugitives. As noted earlier, USMS is seeking agreements with other agencies on what responsibilities they and USMS will have regarding postarrest fugitives.

DEA and USMS officials noted that assigning DEA fugitive responsibilities to USMS makes sense given that USMS’ deputy marshals are trained and experienced at fugitive apprehension. DEA staff are then available to work exclusively on drug investigations. Under the 1988 policy, DEA may delegate responsibility to USMS if the fugitive is not caught within 7 days after issuance of the arrest warrant. DEA officials noted that the 7-day requirement gave them time to follow up on any “hot leads” as to the possible location of the fugitive and helped to ensure that the delegation process did not hinder the apprehension of fugitives. For example, according to USMS, DEA arrested 2,601, or about 44 percent, of all DEA fugitives caught in fiscal year 1993. According to a USMS official, most of these arrests were of fugitives who had not been delegated to USMS and who were caught shortly after the issuance of their arrest warrants.

The basis for these policy differences among USMS, FBI, and other agencies and their relative efficiency and effectiveness are issues that could be considered in any examination of federal fugitive apprehension responsibilities. Besides looking at the agencies’ specific responsibilities for arrest warrant and postarrest fugitives, such an examination also might include determining whether a single agency, such as USMS or FBI, should have responsibility for fugitives in general who have remained in fugitive status for a specified time, i.e., where all leads have been exhausted and no active apprehension efforts exist. In this regard, we noted that many fugitives go unapprehended for long periods. For example, about 61 percent of the 29,339 federal fugitive entries in the NCIC database as of April 6, 1994, were for fugitives who had been wanted for 2 years or longer. In addition, consideration also could be given to how changes in fugitive apprehension responsibilities among the agencies would affect their other responsibilities or federal law enforcement in general.

An OIAP spokesperson acknowledged the differences in the division of responsibilities for fugitive apprehension and told us that it might be appropriate for OIAP to address the overall issue of these responsibilities at some future time. He said that such an effort would be consistent with OIAP’s charter.
He noted, however, that while OIAP has had several successful initiatives, it is just beginning to develop credibility and has to work through the distrust that has built up among the various agencies over the years. He said that to successfully review, and perhaps recommend changing, the current alignment of fugitive responsibilities, OIAP must first have a high level of credibility with the affected law enforcement agencies. He also noted that whether OIAP would conduct such an examination would depend upon the facts, other priorities facing OIAP, and the availability of resources at the time. Although OIAP has no jurisdiction over the fugitive responsibilities of non-Justice agencies, the OIAP spokesperson said that agencies, such as the Treasury Department’s law enforcement agencies, might formally participate at the OIAP executive level at some future time. He noted that non-Justice agencies already have been involved in some OIAP initiatives at the working group level. For example, ATF was participating in an OIAP working group on the Anti-Violent Crime Initiative. He also said that OIAP can and would encourage the Treasury agencies to work out any problems they have with the Justice agencies.

Indications are that the percentage of fugitive cases involving interagency coordination problems, such as interagency duplication, jurisdictional disputes, and noncooperation, is not large. Nevertheless, there have been instances that agency officials said have or potentially could have adversely affected their efforts to apprehend federal fugitives. Officials from the principal agencies involved—FBI and USMS—believe that the problems will be sufficiently addressed as a result of (1) specific efforts they have made or will make to resolve problems, (2) the planning and coordination that will be done under Justice’s Anti-Violent Crime Initiative, (3) mandates from the Attorney General and their agency heads that interagency squabbles and noncooperation will not be tolerated, and (4) the establishment of OIAP. In addition, USMS officials are taking steps to resolve problems involving non-Justice agencies through direct negotiations and, if unsuccessful, plan to request assistance from OIAP. OIAP was established, in part, to improve interagency coordination and eliminate waste and duplication in the fugitive area. In this regard, OIAP plans to continue staying abreast of the agencies’ efforts to address interagency coordination problems and expects the agencies to resolve them in a reasonable amount of time. In view of the actions being taken by FBI, USMS, and OIAP, we are not making any recommendations.

OIAP also represents a unique opportunity to determine if the alignment of fugitive responsibilities among Justice and non-Justice agencies represents efficient and effective use of limited law enforcement resources. The current alignment of responsibilities has evolved over the years, in part, as a result of efforts to resolve intermittent interagency coordination problems. Consequently, this has led to differences in responsibilities that may or may not represent the best use of resources. OIAP has acknowledged differences in the division of fugitive responsibilities and may, once it has established itself as a credible interagency management group, look into the issue of fugitive responsibilities among agencies. Such an examination would then depend upon the facts existing at that time and OIAP’s other priorities.
We believe that this is a reasonable approach and consequently are not making any recommendation on this matter.

The Justice Department and the Treasury Department provided written comments on a draft of this report. These comments are presented in appendixes VII and VIII. Overall, the agencies agreed that there are not extensive interagency conflicts or coordination problems. Justice also reiterated that appropriate corrective actions have been or will be taken to address the interagency coordination problems that have occurred. Justice specifically mentioned actions relating to task forces, foreign fugitives, and prison escapes. Treasury specifically referred to assistance being provided to Justice’s OIA and to Customs Service’s responsibility for persons who flee after their initial arrest. These comments are noted earlier in this report. Justice said that interagency disputes will not be allowed to affect its efforts to pursue federal fugitives and that any disputes that arise will be handled through interagency discussion, cooperation, and departmental oversight. Justice stated that it remained vigilant in its efforts to reduce or minimize instances that could jeopardize fugitive apprehension efforts, endanger law enforcement officials and the general public, or waste limited law enforcement resources. In this regard, Justice noted that its fugitive programs are continually reviewed by the responsible agencies to minimize any inefficiencies or duplication.

With regard to working with non-Justice agencies, Justice reiterated that OIAP does not have any jurisdiction over these agencies. Justice noted that consequently any discussion of Treasury Department participation in the OIAP process would require Treasury’s consent. We recognize that participation by non-Justice agencies with OIAP would be voluntary and note that ATF is already cooperating with OIAP in connection with Justice’s Anti-Violent Crime Initiative. Moreover, in its comments, Treasury reiterated that it had revitalized the Treasury Enforcement Council as a means, similar to OIAP, for providing enforcement agency coordination and for addressing specific enforcement issues. We believe that, through OIAP and the Enforcement Council, Justice and Treasury should be able to enhance interdepartmental cooperation in the fugitive area, as well as other areas, and surface and resolve any coordination problems such as those discussed in this report. Furthermore, OIAP and the Enforcement Council represent the means for Justice and Treasury to ensure the interagency cooperation that would be needed for any future review of whether the alignment of fugitive apprehension responsibilities among the involved agencies is the most effective and efficient use of their law enforcement resources.

We are sending copies of this report to the Attorney General; the Secretary of the Treasury; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. The major contributors are listed in appendix IX. Should you need additional information on the contents of this report, please contact me on (202) 512-8720.

As a result of its study of two specific fugitives, the former House Government Operations Committee’s Subcommittee on Government Information, Justice, and Agriculture was concerned about the overall effectiveness of the Department of Justice’s 1988 policy in resolving interagency rivalries and problems in fugitive apprehensions.
This policy identifies the fugitive responsibilities of FBI, DEA, and USMS and establishes conditions for exceptions to these responsibilities. The chairmen of the Committee and Subcommittee requested that we review the policy. We agreed with the requesters to focus on determining (1) the extent and nature of any interagency coordination problems among FBI, DEA, USMS, and other federal agencies involved in fugitive investigations and (2) if such problems existed, what actions had been or could be taken to address them. Coordination problems could include unnecessary duplicate or overlapping efforts, jurisdictional disputes, and noncooperation that could adversely affect the efficiency or effectiveness of efforts to apprehend fugitives.

To accomplish our objectives, we interviewed officials and reviewed various documents obtained from FBI, DEA, and USMS; the Treasury Department’s ATF, Customs Service, and Secret Service; Justice’s USNCB, Criminal Division, Executive Office of United States Attorneys, and OIAP; and the State Department. The Treasury agencies were not part of the 1988 Justice policy. USMS officials identified them as the agencies outside the Justice Department with which USMS was likely to have overlapping efforts or interagency disputes in the fugitive apprehension area. We contacted officials of Justice’s USNCB and the Criminal Division’s OIA, and the State Department for their perspectives on interagency problems involving international fugitives. We contacted officials of the Executive Office of United States Attorneys for any overall perspectives U.S. attorneys might have on interagency problems. We contacted OIAP officials to identify ongoing actions and plans for addressing interagency fugitive matters. We asked the designated spokesperson(s) of each agency, among other things, about the nature and extent of any interagency problems their agencies may have experienced with other agencies in the fugitive area. We requested any studies or reports they had on interagency fugitive activities and related problems.

Documentation obtained from FBI and USMS included (1) policy guidance and descriptive information on their fugitive activities; (2) brief descriptions of fugitive cases they selected, at our request, to illustrate both good and poor interagency interactions; (3) statistics on their fugitive caseloads and accomplishments; (4) sections of reports dealing with internal reviews, inspections, or other studies of fugitive matters; (5) memorandums of understanding, agreements, and policies on the coordination and division of fugitive apprehension responsibilities among federal agencies; and (6) various other documents illustrating interagency relations and problems in the fugitive area. Documentation obtained from DEA and the other contacted agencies generally was limited to policy guidance on their fugitive roles and operations and related statistics. These agencies had far smaller fugitive caseloads than FBI and USMS.

It is possible that the representatives of the agencies we contacted might not have been inclined to point out problems their agencies had with other agencies. They were, however, sometimes critical of another agency in one or more specific areas. Further, they were generally consistent in noting that interagency relations were good overall and in identifying the areas where problems occurred.
In view of their consistent views and the establishment of OIAP to address interagency problems, we did not attempt to specifically identify the extent of problems. Instead, we performed two limited analyses to provide some assurance that the perspectives provided by the agencies’ representatives were reasonable.

First, we reviewed the NCIC wanted persons database for indications of the overall extent and types of federal fugitives wanted by more than one federal agency. We did not determine the extent to which such fugitive cases involved any interagency disputes, noncooperation, or duplication beyond the minimum work needed by an agency to keep a fugitive record on NCIC and to maintain an open case file. We analyzed NCIC entries for (1) persons wanted as of April 6, 1994, and (2) wanted persons whose records were removed from NCIC during calendar years 1992 and 1993 to determine if two or more agencies had entered the same fugitive in NCIC. While we have no assurance that NCIC included all of the agencies’ fugitives, every agency we contacted had a policy requiring entry into NCIC. Based on what the agencies told us, we determined that NCIC was the best source for identifying their fugitives as well as fugitives wanted by more than one agency. However, according to FBI and USMS officials, some federal fugitives generally are not entered into the NCIC wanted persons file and would not be identified in our analysis to determine overlapping fugitive efforts. For example, according to USMS officials, other countries’ fugitives suspected of being in the United States are not entered in the NCIC wanted persons database unless there is an arrest warrant authorized by OIA.

To identify the extent to which different federal agencies entered the same fugitive data into NCIC, i.e., fugitive matters involving overlapping jurisdictions, we conducted a four-stage computer analysis. We excluded all test entries and temporary warrant entries from our analyses. Entries made by two or more offices within the same agency were not counted as duplicate entries. We used the same analysis scheme for fugitives wanted as of April 6, 1994, as we used for fugitives removed from NCIC in 1992 and 1993. We discussed our methodology with USMS and FBI officials, who generally agreed that it was a reasonable approach to identifying overlapping fugitive cases.

NCIC contains various identifying data on each fugitive. In the first stage of matching, we identified multiple entries using FBI numbers. An FBI number is unique to an individual and is assigned to all persons for whom FBI receives fingerprint cards. Consequently, the FBI number was the most reliable identifier of an individual in the NCIC system. However, not all fugitives on NCIC had an FBI number. In the second stage, for entries without such numbers but with social security numbers, we identified multiple entries with identical social security numbers. In the third stage, we compared nonmatching entries from the first stage that had a social security number with nonmatching entries from the second stage. The fourth stage involved entries without an FBI or social security number. We matched these entries using name and birth date.
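Read procedurally, the four stages amount to a cascade of bucketing passes, each keyed on a progressively weaker identifier. The Python sketch below is an illustrative reading of that cascade, not the program actually used in the analysis; the field names (fbi_number, ssn, name, birth_date) are hypothetical stand-ins for the NCIC identifying data, and the stage 3 logic reflects one plausible interpretation of the comparison described above.

```python
from collections import defaultdict

def group_entries(entries):
    """Group wanted-person entries that appear to describe one individual."""
    # Stage 1: bucket on FBI number, the most reliable identifier (unique
    # to each person for whom FBI has received fingerprint cards).
    by_fbi = defaultdict(list)
    no_fbi = []
    for e in entries:
        if e.get("fbi_number"):
            by_fbi[e["fbi_number"]].append(e)
        else:
            no_fbi.append(e)

    # Stage 2: among entries with no FBI number, bucket on SSN.
    by_ssn = defaultdict(list)
    no_ids = []
    for e in no_fbi:
        if e.get("ssn"):
            by_ssn[e["ssn"]].append(e)
        else:
            no_ids.append(e)

    # Stage 3: compare stage-1 entries that matched nothing but carry an
    # SSN against the stage-2 buckets, so an entry filed under an FBI
    # number can still pair with one filed only under an SSN.
    lone_keys = [k for k, grp in by_fbi.items()
                 if len(grp) == 1 and grp[0].get("ssn") in by_ssn]
    for k in lone_keys:
        entry = by_fbi.pop(k)[0]
        by_ssn[entry["ssn"]].append(entry)

    # Stage 4: entries with neither identifier match on name + birth date.
    by_name_dob = defaultdict(list)
    for e in no_ids:
        by_name_dob[(e.get("name"), e.get("birth_date"))].append(e)

    # Each group stands for one individual; a group whose entries were
    # made by two or more agencies indicates overlapping jurisdiction.
    return (list(by_fbi.values()) + list(by_ssn.values())
            + list(by_name_dob.values()))
```

As described above, test and temporary warrant entries would be excluded before such a pass, and entries made by two or more offices of the same agency would not be counted as duplicates when tallying overlap.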
In addition to analyzing NCIC data, we reviewed policy guidance and various parts of reports on FBI and USMS internal inspections of their offices to determine what, if any, interagency problems were found in the fugitive area. In this regard, USMS officials provided us with copies of sections on fugitives from the 12 inspection reports they said were issued in fiscal year 1993; each involved a district office headed by a U.S. marshal. FBI officials provided us with inspection information from fiscal years 1992 and 1993. According to this information, 19 of 52 reports issued on FBI headquarters, field offices, and overseas offices during fiscal years 1992 and 1993 contained findings on fugitive matters. They provided us with copies of the findings sections of those reports. Given the nature and size of the FBI and USMS fugitive programs, we did not examine similar reviews conducted by the other federal law enforcement agencies we contacted. Any major problems they had would likely have involved FBI or USMS and be reflected in those agencies’ reports.

We further analyzed the information obtained from NCIC, interviews, and documents provided by the agencies to better identify the types of problems that occurred, their causes, actual or potential effects, and needed corrective actions. We queried FBI, OIAP, and USMS officials about plans to implement needed corrective actions and sought documentation of those plans.

We also relied on NCIC data to obtain general comparisons between federal agencies on the number and type of their fugitive caseloads. These comparisons could not be made using the data regularly maintained and provided by the agencies on these caseloads because the level of information varied significantly among them. USMS regularly maintained a database of its warrants from which it could provide information on the number, type (e.g., DEA, bond default), and disposition (e.g., USMS arrested or other agency arrested) of the fugitives for whom it had apprehension responsibility. However, the same level of information on fugitive caseloads was not available from FBI and other law enforcement agencies, such as Customs Service and ATF. The focus of these agencies’ efforts and management systems is on investigating crimes that fall within their jurisdiction. These investigations do not always involve pursuits of fugitives. Thus, their databases generally could provide information on the number and type of their investigations (e.g., fraud, organized crime, and smuggling) but did not specifically track the number, type, and disposition of their fugitive efforts. Although FBI could provide some information on the number and type of escaped federal prisoners and military deserters it wanted and state and local fugitives it wanted under the unlawful flight program, FBI, ATF, and Customs Service generally relied on NCIC data to obtain current information on the number and type of fugitives they were pursuing. Since the fugitive data we sought was unavailable from FBI, ATF, and Customs Service, we did not determine the level of information available from Secret Service.

Justice and Treasury provided written comments on a draft of this report. These comments are reprinted in appendixes VII and VIII and are incorporated in the report as appropriate. Our work was performed from July 1993 to January 1995 in accordance with generally accepted government auditing standards.

Responsible for cases involving FBI investigations.
May delegate those cases from DEA investigations to USMS if the fugitive is not caught in 7 days.
May get delegation of authority from DEA or lead agency in task force investigation.
For joint FBI/DEA and multiagency task force investigations, the lead agency decides whether to keep or give the case to USMS.
May take back these cases if new charges are involved.
Responsible for FBI cases. May elect responsibility for DEA case if new charges are involved.
Responsible for DEA cases unless new charges are involved and DEA elects responsibility.
If electing responsibility, DEA must provide written notice to USMS. Responsibility becomes effective 7 days after notification is received, with interim efforts to be coordinated between DEA and USMS.
Responsible when FBI case involves counter-intelligence, organized crime, terrorism, or new charges. May elect responsibility for DEA case if new charges are involved.
Responsible after judgment of guilt with noted exceptions (see FBI and DEA).
If electing responsibility, FBI must provide written notice to USMS. Responsibility becomes effective 7 days after notification is received, with interim efforts to be coordinated between FBI and USMS.
If electing responsibility, DEA must provide written notice to USMS. Responsibility becomes effective 7 days after notification is received, with interim efforts to be coordinated between DEA and USMS.
Must notify original agency of the violation. May ask to be involved after a 7-day period. If denied, may appeal within the 7 days to Associate Attorney General, who is to decide within 48 hours. Agencies are to coordinate in the interim.
Responsible when FBI case involves counter-intelligence, organized crime, terrorism, or new charges. May elect responsibility for DEA case if new charges are involved.
Responsible with noted exceptions (see FBI and DEA). Must notify original agency of the escape.
If electing responsibility, FBI must provide written notice to USMS. Responsibility becomes effective 7 days after notification is received, with interim efforts to be coordinated between FBI and USMS.
If electing responsibility, DEA must provide written notice to USMS. Responsibility becomes effective 7 days after notification is received, with interim efforts to be coordinated between DEA and USMS.
Responsible for pursuing these types of fugitives, but is not to seek unlawful flight warrant if USMS is already pursuing the fugitive because of an escape, a bond default, or a violation of probation, parole, or mandatory release conditions.
Is to be told by USMS of state or local interest in DEA-pursued fugitive. May provide information to state and local governments about their fugitives.
Formal pursuit is to be done by FBI under unlawful flight statutes, except for special programs such as USMS task forces that are to be approved by the Associate Attorney General.
Is to notify USMS of state/local government requests for unlawful flight aid when USMS special programs are involved (see USMS) and notify state/local government authorities if USMS is already pursuing the fugitive. Is to notify state/local governments if fugitive is apprehended.
Is to be told by USMS of state or local interest in FBI-pursued fugitive.
If state/local governments ask for USMS aid for fugitive being pursued by FBI or DEA, USMS is to refer the requester to FBI/DEA and notify FBI/DEA of the state/local request.
Responsible if the case (1) involves counter-intelligence, organized crime, or terrorism; (2) is an investigation currently being conducted at the request of the concerned foreign government; (3) involves a fugitive FBI is seeking on an arrest warrant for a federal offense; or (4) involves a referral made exclusively to FBI via an FBI country attache.
Responsible if the case involves a fugitive who is the subject of a DEA investigation that is currently being conducted at a foreign government request or when it exclusively is referred to DEA via a DEA country attache.
Responsible for all cases except those that are the responsibility of FBI or DEA, and cases that USNCB refers to other agencies, such as the Immigration and Naturalization Service and state/local governments, as appropriate.
If a request is received directly from a foreign government, DEA is to notify USNCB to determine if other requests have been made and the case is being worked on by other agencies.
If the request is received directly from a foreign government, FBI is to notify USNCB to determine if other requests have been made and the case is being worked on by other agencies.
If a request is received directly from a foreign government, USMS is to notify USNCB to determine if other requests have been made and the case is being worked on by other agencies.

Note 1: USMS is to advise any federal agency seeking its help on a fugitive if FBI or DEA are already involved. If an agency insists on USMS aid, USMS is to notify FBI or DEA, which is to defer to USMS or assert the need for their continued work. If the other federal agency does not accept deferral to FBI or DEA, then all parties are to confer and go to the Associate Attorney General, if not resolved.
Note 2: This policy does not preclude an agency from delegating any case(s) to USMS or vice versa.

These cases involve persons for whom federal agencies hold arrest warrants but cannot find.
These cases involve persons who default on bond or fail to appear in court.
These cases involve probation, parole, and conditional or mandatory release violators.
These cases involve violations which are, as a group, referred to as the federal Escape and Rescue Statutes.
These cases involve state/local fugitives who have been charged with the federal crime of unlawful flight.
These cases involve other countries’ fugitives sought in the United States.

Prior to 1979, USMS’ fugitive apprehension efforts were limited to those cases referred specifically by the courts or undertaken as thought appropriate by individual U.S. marshals. In 1979, at the request of FBI, the Attorney General transferred primary responsibility to USMS for fugitive cases involving federal prison escapes, bond defaulters, and parole and probation violators. The intention was to free FBI resources for higher priority work, such as organized crime investigations. These changes were agreed to by FBI and USMS and presented in a memorandum of understanding.

In 1982, FBI sought to regain responsibility for any such USMS fugitives who had originally been the subject of an FBI investigation or who had committed additional crimes that fell under FBI’s responsibility. USMS, in return, asked that FBI transfer responsibility for the unlawful flight fugitive program to USMS. Neither agency agreed to the other’s proposal, and the division of responsibilities between the two remained as defined in the 1979 agreement. However, in 1982, an agreement between FBI and DEA gave DEA the option of delegating to FBI responsibility for DEA’s “significant” fugitives (from DEA class 1 and class 2 drug cases). This was one result of a debate over whether FBI should take over DEA and assume responsibility for federal drug law enforcement.
Although there was no formal agreement, DEA also turned over to USMS responsibility for some of its lower priority fugitives (from DEA class 3 and 4 drug cases).

In 1986, we reviewed the feasibility of transferring responsibility for FBI’s unlawful flight program to USMS. This review was in response to congressional concerns over jurisdictional disputes between FBI and USMS, whether USMS could perform the responsibility more cheaply than FBI, and whether FBI resources could be better used on higher priority matters. Given the general lack of data, e.g., cost of individual fugitive investigations, we reported that there were no clear-cut answers about whether the program should be transferred. We said that the matter appeared to be a policy decision for the administration or Congress.

In 1987 and early 1988, disputes between USMS and FBI over fugitive apprehension responsibilities again received congressional attention. FBI claimed that USMS’ fugitive efforts were jeopardizing the safety of FBI agents, adversely affecting FBI investigations, and duplicating work done by FBI. USMS responded that these claims were unsupported and that FBI wanted USMS to be subservient to FBI. The Attorney General told the interested congressional committees that he would correct the problems, and the result was the August 1988 Department of Justice policy on fugitive apprehensions (see app. II). The 1988 policy and its effectiveness in eliminating interagency problems came into question during the House Government Operations Subcommittee hearings on two high profile fugitives. These hearings led to the request for our review and this report.

Note 1: Defense Department includes entries by the U.S. Army, U.S. Navy, U.S. Marines, U.S. Air Force, and their investigative agencies.
Note 2: Treasury Department includes entries by ATF, Customs Service, Internal Revenue Service, and Secret Service.
Note 3: Other agencies include entries by 12 different federal agencies/departments.
Note 4: Does not add to 100 percent due to rounding.
Note 1: Dangerous fugitives include those entries with caution notations on their records. According to FBI and USMS officials, a caution notation generally means that the fugitive should be considered dangerous. About 30 percent of all NCIC entries contained caution notations.
Note 2: Other agencies include entries by 22 different agencies/departments.

Major contributors to this report: Carl Trisler, Evaluator-in-Charge; Amy Lyon, Evaluator; David Alexander, Senior Social Science Analyst.
Pursuant to a congressional request, GAO reviewed the Department of Justice's 1988 federal fugitive apprehension policy, focusing on interagency coordination problems and agencies' efforts to address those problems. GAO found that: (1) officials from all federal agencies involved in fugitive apprehension stated that they did not have extensive interagency coordination problems, overlapping or duplicate efforts, or jurisdictional disputes; (2) none of the agencies had empirical data on the 727 fugitives who were wanted by more than one agency; (3) interagency coordination problems could jeopardize fugitive apprehension efforts, endanger law enforcement officials and the general public, and waste limited law enforcement resources; (4) some interagency coordination problems, such as the Federal Bureau of Investigation's (FBI) and the U.S. Marshals Service's (USMS) failure to participate in each other's fugitive task forces, disagreements over responsibility for prison escapes involving possible conspiracy charges, and agencies' failure to cooperate in the apprehension of other countries' fugitives, adversely affected the effectiveness of federal fugitive apprehension efforts; (5) FBI and USMS have taken actions to deal with interagency problems to improve coordination and eliminate duplication; and (6) Justice established the Office of Investigative Agency Policies to resolve coordination problems, ensure efficiencies in overlapping efforts, and determine whether fugitive responsibilities are properly aligned among agencies.
Technology development partnerships are key elements of the technology transfer program of each NNSA laboratory and production facility. NNSA laboratory and facility managers told us that they have primarily used the following types of partnerships:

CRADAs: An NNSA laboratory or production facility and private partner(s) agree to collaborate on a research project that is consistent with DOE’s mission and has a potential impact on U.S. economic competitiveness. The NNSA laboratory or production facility and its private partner(s) contribute personnel, services, facilities, equipment, intellectual property, and/or other resources to the CRADA project. The private partner(s) may also provide funding, in-kind (noncash) contributions, and other resources directly beneficial and specifically identifiable and necessary in the performance of the project. However, NNSA and its laboratory or production facility are not allowed to transfer funds to the private partner(s). At a minimum, DOE retains a nonexclusive, nontransferable, irrevocable license to use any invention developed under the CRADA on behalf of the U.S. government. The private partner has the option to choose an exclusive license for a pre-negotiated field of use for any inventions developed by the NNSA laboratory or production facility under the CRADA.

Technical assistance for small businesses: In response to section 3135(b) of the National Defense Authorization Act for Fiscal Year 1993, NNSA’s laboratories and production facilities have provided technical assistance to small businesses.

Work-for-other agreements: An NNSA laboratory or production facility agrees to conduct a defined scope of work or list of tasks, and the private partner pays for the entire cost of the project. While intellectual property rights are negotiable, the private sponsor typically retains title rights to any inventions.

Cost-shared procurement contracts: An NNSA laboratory or production facility and private partner(s) agree to collaborate to develop technologies or computer codes for Defense Program mission requirements. Lawrence Livermore National Laboratory has used these contracts for the Accelerated Strategic Computing Initiative.

Technology licensing agreements: An NNSA laboratory or production facility grants a business an exclusive or nonexclusive license to use its intellectual property in return for a licensing fee and/or royalties.

User facility agreements: An NNSA laboratory or production facility permits outside organizations to use its unique research equipment and/or facilities to conduct research. The private organization pays the full cost of using research equipment or facilities and retains title rights to any intellectual property.

In response to the phasing out of dedicated funding for partnerships, NNSA’s laboratories and production facilities have reduced their CRADAs and technical assistance to small businesses while entering into more agreements that are fully funded by the business partners. The total number of CRADAs at NNSA laboratories and production facilities has declined by more than 60 percent, from a high of 639 in fiscal year 1995 to 244—including only 21 new CRADAs—in the first 6 months of fiscal year 2001. During this period, DOE’s funding for CRADAs dropped even more—from $222 million to $19 million.
Similarly, technical assistance for small businesses dropped from about 1,700 actions that assisted small businesses in fiscal year 1995 to 136—including only 59 new assistance agreements—in the first 6 months of fiscal year 2001. While these types of partnerships have declined, work-for-other agreements and technology licenses, which require no DOE funds, grew substantially. (Table 4 in app. II provides partnership data by fiscal year for each NNSA facility.)

Table 1 shows that the number of active CRADAs at the NNSA laboratories and production facilities grew rapidly in the early 1990s and then dropped by more than half through the first 6 months of fiscal year 2001. This trend reflects a similar pattern in the growth and decline of DOE’s dedicated funding for technology partnerships. Sandia National Laboratories has entered into more CRADAs than any other NNSA laboratory. (See table 5 in app. II.) In fiscal year 1995, when CRADA activity peaked, Sandia had 254 active CRADAs—40 percent of all NNSA CRADAs. Sandia participated in 153 CRADAs (44 percent of all CRADAs) in fiscal year 2000 and 120 CRADAs (about 50 percent of all CRADAs) in the first half of fiscal year 2001. The number of CRADAs at Lawrence Livermore National Laboratory has dropped even more—from 159 in fiscal year 1995 to 26 in the first two quarters of fiscal year 2001. Lawrence Livermore has shifted its emphasis from using CRADAs with private partners to using procurement contracts with its contractors to develop new technologies important for its mission, according to laboratory officials.

Figure 1 shows funding sources for CRADAs at NNSA laboratories and production facilities for fiscal years 1991 through 2000. As figure 1 and table 1 show, CRADA expenditures at NNSA’s laboratories and production facilities peaked in fiscal year 1995. In that year, DOE contributed $222 million, including $205 million in Technology Partnership Program funding, and private partners contributed $167 million in direct and in-kind support for CRADA activities. As DOE’s dedicated funding for technology partnerships declined, the proportion of private partners’ direct and in-kind contributions increased and has constituted more than half of all CRADA funding since fiscal year 1997. In the first two quarters of fiscal year 2001, DOE contributed $19 million and private partners contributed $61 million in direct and in-kind support for CRADA activities. (See table 6 in app. II for CRADA funding at individual NNSA facilities.)

Table 2 shows the extent to which NNSA’s laboratories and production facilities used the other primary types of technology development partnerships. Generally, partnerships that relied on DOE funds have decreased, while those predominantly funded by businesses have grown. For example, technical assistance for small businesses, which was primarily funded by DOE’s Technology Partnership Program, dropped sharply—from about 1,700 actions that assisted small businesses in fiscal year 1995 to about 500 in fiscal year 2000. In contrast, work-for-other agreements, which are wholly funded by businesses, grew substantially from 209 agreements in fiscal year 1995 to 987 agreements in fiscal year 2000. Similarly, technology licensing agreements have greatly increased during this period. (See tables 7, 8, and 9 in app. II for each NNSA facility’s participation in each of these partnerships.)
User facility agreements, which provide access to unique NNSA experimental research equipment and facilities, increased from 103 in fiscal year 1995 to 165 in fiscal year 1998 and then decreased to 96 agreements in fiscal year 2000. Businesses have provided more direct funding for work-for-other agreements than for any of the other types of partnerships. (See table 10 in app. II.)

NNSA officials and laboratory managers identified various advantages and disadvantages of collaborative research under a CRADA. (See table 3.) An advantage of collaborative research under a CRADA is often accompanied by a disadvantage. For example, the ability to leverage research funding, staff, and equipment can be offset by concerns over a CRADA's relevance to mission objectives and the risk inherent in sharing control over the scope of the research, project time frames, and intellectual property.

Each of the NNSA laboratories we visited provided examples of CRADAs that were successful for both the laboratory and the CRADA partner(s). For example, in 1997, Sandia, Lawrence Livermore, and Lawrence Berkeley National Laboratory (a DOE energy science laboratory) entered into a CRADA with a consortium of microelectronics manufacturers to develop extreme ultraviolet lithography equipment for making next-generation computer chips with enhanced speed and memory. Consortium members are providing $250 million to develop this technology, which is also important for developing the advanced computational capabilities that NNSA needs for its nuclear stockpile stewardship program. Technology transfer officials at the NNSA laboratories noted that CRADAs have enhanced their laboratories' research by, for example, bringing together a wide range of scientific disciplines to address technical problems or providing NNSA scientists with access to advanced technology or manufacturing processes. Sandia officials generally preferred a CRADA to a work-for-others agreement because CRADA partners actively participate in the research. Sandia officials told us that the Technology Partnership Program had been an important catalyst for initiating CRADAs because it was the laboratories' primary source of financial support in the early stages of a CRADA project, before researchers could demonstrate that the CRADA would directly benefit a specific DOE program.

However, some DOE managers have questioned the value of certain CRADAs—particularly some related to the Technology Transfer Initiative in the mid-1990s—stating that those CRADAs had used scarce resources for projects not closely tied to NNSA's mission. Furthermore, in the early 1990s, negotiating and approving the terms of a CRADA could take more than 1 year. According to Sandia National Laboratories' data, this time has been substantially reduced—in fiscal year 2000, CRADAs were processed from initiation to final approval in 86 days, on average, including an average of 4 days for DOE's review and approval. Laboratory officials attributed this improved efficiency to the use of a standardized format for these agreements and the common practice of amending existing CRADAs to broaden the scope of work in lieu of negotiating a new agreement. In several cases, Sandia used blanket or "umbrella" CRADAs to combine a number of different projects with the same partner into a single agreement.
NNSA laboratory managers identified three primary options for providing financial and management support for CRADAs:

Continue to rely primarily on laboratory research managers to determine whether participating in a CRADA effectively supports their mission research. In addition to research funds, NNSA's laboratories have used other DOE funds, including their "laboratory-directed research and development" funds and Accelerated Strategic Computing Initiative funds, to support certain CRADAs. DOE has contributed $19.4 million for active CRADAs at NNSA laboratories and production facilities in fiscal year 2001.

Set aside a small portion of research funding specifically to provide initial support for mission-related CRADAs until they show sufficient potential benefits that program managers would be willing to provide financial support.

Establish an advocate within NNSA responsible for facilitating funding for CRADAs. The laboratory managers noted that the advocate's office could be combined with one of the two funding options.

A senior official at NNSA headquarters stated that the two funding options were reasonable. However, the senior official preferred to assign responsibility for facilitating CRADAs to a senior office within NNSA without giving it responsibility for advocacy.

We provided DOE with a draft of this report for its review and comment. NNSA's Institutional and Joint Programs Division generally agreed with the draft report. NNSA also provided comments to improve the report's technical accuracy, which we incorporated as appropriate.

To obtain trend data on technology development partnerships, we asked officials at NNSA and its laboratories to identify the primary types of technology partnerships that they have used with private entities. We then developed a data collection instrument to obtain participation and funding data from NNSA's three nuclear weapons laboratories and three of its production facilities from fiscal year 1991 through the second quarter of fiscal year 2001. To help ensure consistency across locations, we worked with officials from these laboratories and facilities to establish uniform definitions and resolve any discrepancies. In addition, we (1) interviewed NNSA officials at DOE headquarters and DOE's Albuquerque and Oakland Operations Offices and (2) visited Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories to obtain the views of administrators and scientists about their laboratories' participation in and funding of technology development partnerships. To identify the advantages and disadvantages of CRADAs, we interviewed NNSA officials at DOE headquarters and obtained the views of laboratory administrators and scientists at Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories. We also interviewed executives of four businesses that had participated in at least one CRADA with an NNSA laboratory to obtain their perspectives on CRADAs.

We conducted our review from January 2001 through May 2001 in accordance with generally accepted government auditing standards. We did not independently verify the data provided by NNSA's laboratories and production facilities. We are sending copies of this report to the Secretary of Energy, the Director of the Office of Management and Budget, and other interested parties. We will make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-3841.
Key contributors to this report were Richard Cheston, Sandra Davis, and Timothy Minelli.

NNSA's report entitled Report to Congress on Technology Partnerships With Non-federal Entities Within the National Nuclear Security Administration During Fiscal Year 2000 primarily examined CRADA activities at its laboratories and production facilities. The report stated that with the termination of the Technology Partnership Program's dedicated funding, CRADA partnerships will obtain either financial support from individual DOE research programs—ensuring that the project is more clearly linked to DOE's mission—or full funding from the private sector partner. NNSA stated that more than 200 of its 348 CRADAs supported its core missions in fiscal year 2000 and pointed to CRADA-developed technologies that benefited both NNSA and its private partners. For example, a CRADA used NNSA advanced laser technology to develop an improved laser shot peening process to make indentations that reduce fatigue in critical metal parts, such as jet engine fan blades and nuclear waste disposal containers. According to NNSA, the absence of dedicated funding could also result in fewer CRADAs that provide only secondary, or spinoff, benefits for its core mission. A separate NNSA report discussed technical assistance for small businesses, which also was cut back as the Technology Partnership Program was phased out.

NNSA reported that CRADAs are advantageous because they can leverage its laboratories' resources and bring to bear the expertise of several partners to address technical challenges. CRADAs also allow for more flexibility in the treatment of intellectual property than do other types of partnership agreements. NNSA noted that some laboratory personnel and private sector partners are skeptical about using CRADAs because they believe that negotiations take longer than necessary. Although the congressional mandate directed NNSA to recommend actions that would make CRADAs more effective in supporting its mission, NNSA made no recommendations.

In response to section 3135(b) of the National Defense Authorization Act for Fiscal Year 1993, DOE's Defense Programs established the Small Business Initiative to facilitate and encourage the transfer of technology to small businesses.

(The facility-level participation and funding figures, with their accompanying notes—for example, that DOE contributions primarily include research funds, that some CRADAs at NNSA laboratories have used laboratory-directed research and development funds, and that partner contributions include planned in-kind amounts—appear in the tables in app. II; for some facilities and years, data were not readily available.)

Congress enacted the National Competitiveness Technology Transfer Act to encourage federal laboratories operated by contractors to enter into cooperative research and development agreements (CRADA) with businesses, universities, and other private partners. This act was designed to improve the United States' competitive position in the world economy by facilitating the transfer of technology from federal laboratories to U.S. businesses. This report reviews the National Nuclear Security Administration's (NNSA) (1) use of CRADAs and (2) views on the advantages and disadvantages of CRADAs.
GAO found that NNSA has reduced its use of CRADAs while entering into more agreements fully funded by private partners. Dedicated funding for CRADAs was gradually phased out, and program managers at the laboratories were expected to rely on regular research funding to make up the shortfall. However, NNSA laboratory managers have stated that because the funding has not been replaced with research funds, their laboratories have either prematurely terminated many CRADAs or required the private partners to fully fund the work. According to NNSA officials, CRADAs offer both advantages and disadvantages. CRADAs have enabled laboratories to recruit and retain experienced staff and have improved U.S. businesses' position in the global economy. However, CRADAs also compete for limited funding and generally take longer to execute because of the complexity of the agreements.
Although the Cold War has ended, the threat of foreign espionage to the nation still exists from a variety of countries, and recent revelations of intelligence activities against the United States involving Russia, China, and South Korea have raised concerns that such activities are increasing. The Department of Energy (DOE) and its facilities, especially its nuclear weapons laboratories, are key targets of foreign intelligence interest. Not only do these laboratories conduct activities related to the design, construction, and maintenance of nuclear weapons—a long-standing target of foreign espionage—but they also conduct research into many areas of high technology, such as laser fusion, high-performance computers, and microelectronics. Their research is often done in collaboration with industry, and sometimes foreign countries, to develop new technologies for commercial applications. Accordingly, their work is of interest to other countries, and thousands of foreign nationals visit these laboratories each year to participate in such research. The high number of foreign visitors, as well as some recent investigative cases involving foreign nationals at DOE's laboratories, has increased concerns that these laboratories are targets of foreign espionage efforts.

DOE's nuclear weapons laboratories—the Lawrence Livermore National Laboratory in California and the Los Alamos National Laboratory and Sandia National Laboratories in New Mexico—have been the cornerstones of the U.S. weapons program for over 40 years. In this regard, they are unique among DOE's laboratories. Government-owned and contractor-operated, these three laboratories have been assigned specific missions for nuclear weapons development as well as other programmatic responsibilities. Over time, the laboratories have increasingly expanded their responsibilities in nondefense research areas.

The Lawrence Livermore National Laboratory is operated by the University of California for DOE. Established in 1952, the laboratory occupies 1 square mile in Livermore, California. The laboratory's major missions include nuclear weapons research and development to ensure the safety, security, and reliability of the U.S. nuclear weapons stockpile; other weapons and defense-related activities for DOE and the Department of Defense; inertial confinement fusion (a technology that has both energy and nuclear weapons testing applications); and nuclear nonproliferation.

The Los Alamos National Laboratory, also operated by the University of California for DOE, was established in 1943 as part of the Manhattan Project that developed the first nuclear weapons. Located approximately 35 miles from Santa Fe, New Mexico, the laboratory covers an area of approximately 43 square miles. The laboratory conducts an array of classified and unclassified activities, including all phases of nuclear weapons research, design, and testing; other weapons-related research for DOE; and management of special nuclear materials, such as plutonium. Recently, Los Alamos was given responsibility for the production of certain weapons components.

The Sandia National Laboratories are operated for DOE by the Lockheed Martin Corporation. Sandia, established in 1949, is located in Albuquerque, New Mexico, and works in conjunction with Livermore and Los Alamos to design and develop nuclear weapons. Sandia conducts research, development, and engineering on all facets of weapons design and development except the nuclear explosive components.
Sandia also produces some of the nonnuclear components, such as neutron generators, that are needed for nuclear weapons. Although the Livermore, Los Alamos, and Sandia laboratories are involved in research and development activities related to nuclear weapons, in recent years many of their efforts have expanded beyond issues strictly related to defense or national security. The laboratories are now involved in such areas as high-performance computers, lasers, and microelectronics. Furthermore, they perform research in such diverse areas as biomedicine, environmental restoration, and global climate change. In addition, the laboratories are working with industry to develop new technologies and products for the commercial market. Such activities include work on advanced automobile propulsion systems, medical applications, and waste management. Furthermore, each laboratory conducts basic scientific research in areas of its own choosing—termed Laboratory Directed Research and Development. This research involves such subjects as astrophysics and space science, particle physics, materials science, and chemistry.

Because the Livermore, Los Alamos, and Sandia laboratories are world-leading centers of research in many technologies and scientific disciplines, many foreign scientists are attracted to them and invited to come there to exchange information or participate in research activities. DOE's policy supports an active program of unclassified visits to these laboratories for the benefit of its programs. In fact, DOE and the laboratories have cooperative activities with certain countries to exchange scientists and information and to collaborate on research in selected scientific areas. With the easing of global tensions since the breakup of the Soviet Union and the changing missions of the weapons laboratories, the number of unclassified foreign visits to the laboratories has increased significantly. The average annual number of visits by foreign nationals to the laboratories increased by over 50 percent from the late 1980s to the mid-1990s. Furthermore, this increase in foreign visitors is continuing. As shown in figure 1.4, the number of unclassified foreign visits to the laboratories has increased in each of the last 3 years, to a level of about 7,000 visits in 1996. This represents a significant portion of the 20,000 or more unclassified foreign visits estimated by DOE to have occurred at all of its laboratories during 1996.

Allowing foreign nationals to visit the weapons laboratories and participate in their unclassified activities provides valuable benefits to the laboratories and the country, such as using the visitors' skills to increase the chances of making significant scientific advancements. However, because such visits are not without risk, DOE Order 1240.2b—Unclassified Visits and Assignments by Foreign Nationals, September 3, 1992—establishes responsibilities and policies and prescribes administrative procedures for controlling unclassified visits and assignments to DOE's facilities. Until recently, the foreign visitor program was principally administered by the Office of Policy and International Affairs, but in March 1997 this responsibility was transferred to the Office of Resource Management in the Office of Nonproliferation and National Security.
Other principal organizations involved in administering and controlling unclassified foreign visits include the Nuclear Transfer and Supplier Policy Division, Office of Arms Control and Nonproliferation; the Office of Safeguards and Security, Office of Security Affairs; the Counterintelligence Division, Office of Energy Intelligence; the appropriate headquarters program office that is sponsoring the visit; DOE field offices; and laboratory management.

As defined by the order, visits are short-term stays of 30 days or less for the purposes of orientation, technical discussions, observation of projects or experiments, training, or discussion of collaboration on topics of mutual interest. Assignments are long-term stays of more than 30 days (within a 12-month period) to actively participate in the work of a facility or contribute to its projects. Assignments are limited to 2 years but may be extended. Assignees may include foreign nationals who are employees, as well as those who are guests or consultants. According to DOE's estimates, over 25 percent of the foreign visitors to its weapons laboratories are assignees.

DOE's foreign visit and assignment order identifies several requirements for reviewing, approving, and documenting foreign nationals' access to its nuclear weapons laboratories. Although the order, in general, allows most foreign nationals access with little oversight by DOE, the Department views some visits and assignments as being of potential concern. These include visits from countries DOE considers sensitive for reasons of national security, nuclear nonproliferation, regional instability, or terrorism support (see app. I for a list of these countries). Data from DOE and the laboratories show that almost 30 percent of the visitors to its weapons laboratories are from sensitive countries. DOE is also concerned about visits involving subjects that, although unclassified, are considered sensitive because they have the potential to enhance nuclear weapons capability, lead to nuclear proliferation, divulge militarily critical technologies, or reveal other advanced technologies (see app. II for a list of these subjects). DOE is similarly concerned about visits to areas within the laboratories where special nuclear material and/or classified information and equipment are located.

Certain requirements must be met if a foreign visit or assignment involves a sensitive country, a sensitive subject, or a security facility where classified work is conducted. According to the foreign visits and assignments order, all assignments and visits involving sensitive subjects or security facilities where classified work is conducted must be reviewed and approved by DOE. Furthermore, before an assignment involving a visitor from a sensitive country begins, a national security background check must be completed to determine whether appropriate U.S. government agencies have derogatory information, such as an intelligence affiliation, about that individual.

DOE also has security procedures that control the access of foreign visitors to the weapons laboratories. All foreign visitors—whether on a visit or an assignment—must wear an appropriate badge to obtain entry to various parts of a weapons laboratory. Furthermore, depending upon the facility involved, the days of the week and the hours during which the foreign national can actually be on site are restricted. Finally, guards and other security countermeasures are used to control access to those parts of the laboratories where classified work is conducted.
Security forces and other countermeasures are also used to monitor and control access to the less protected, controlled areas known as property protection areas—which are not open to the general public and which may contain unclassified sensitive information—to ensure that this information is not compromised. As an added line of defense, DOE and its laboratories operate counterintelligence programs to identify and mitigate the risk that sensitive information could be divulged to foreign countries. Among other things, the counterintelligence personnel conduct awareness programs to keep employees alert to the risk of foreign intelligence-gathering activities, brief and debrief employees who host foreign visitors, conduct assessments of foreign visitor activity, and disseminate relevant information throughout the DOE community. However, they have no approval authority for foreign visitors. The laboratories' counterintelligence programs do not conduct counterintelligence "operations," such as surveillance activities. Situations of concern are referred to the Federal Bureau of Investigation (FBI), which performs counterintelligence operations or investigations as necessary.

The risk that classified or sensitive information may be compromised through foreign espionage is real and has been long-standing. Espionage against the weapons laboratories occurred as long ago as the 1940s, when the Manhattan Project was developing the nation's first nuclear weapons. As documented in a 1996 Central Intelligence Agency (CIA) report that detailed recently declassified documents, key information on nuclear weapons was obtained from Los Alamos by the Soviet Union. There were other espionage activities against DOE's laboratories in the 1980s and 1990s, but information on these incidents remains classified. DOE, laboratory, and other agency counterintelligence professionals briefed us on these incidents, which included recent cases involving the possible theft or compromise of sensitive information in which foreign nationals at DOE's laboratories played a prominent role.

The large and increasing number of foreign nationals visiting DOE's laboratories has raised concerns about the potential compromise of classified information or other sensitive or proprietary information at these facilities. Counterintelligence professionals point out that (1) the laboratories have desirable assets in the form of classified information and unclassified but sensitive information; (2) access by foreign nationals, even for a short time, can provide the opportunity to identify and target laboratory information; and (3) repeated and long-term contact between laboratory personnel and foreign nationals can create relationships that foreign countries can use to obtain information. They add that the threat has become more complex: not only is information on nuclear weapons desirable to some foreign countries, but information and technology of economic benefit is of great importance to all countries. Consequently, the laboratories face the risk of economic espionage by enemies and allies alike. Past unclassified work done by GAO and classified work by others have shown the risks of foreign visits and DOE's problems in controlling foreign visitors' presence at its laboratories.
In 1988, we reported that major weaknesses existed in DOE's foreign visitor program and that, as a result, suspected foreign intelligence agents and individuals from facilities suspected of conducting nuclear weapons activities had obtained access to the laboratories without DOE's prior knowledge. More recently, classified reports—in 1992 by an intelligence community interagency working group and in 1997 by the FBI—have pointed out basic problems with DOE's counterintelligence efforts regarding the presence of foreign nationals at DOE's laboratories. DOE itself is concerned about the number of foreign visitors to its facilities and the potential threat of espionage they pose and has obtained additional funding to help its counterintelligence programs respond to this potential threat. Counterintelligence funding for headquarters program direction and field activities in fiscal year 1996 totaled about $3.2 million. DOE received an additional $5 million in appropriations in fiscal year 1997 to expand counterintelligence programs at its nuclear weapons laboratories and other high-risk facilities.

A May 7, 1996, report of the House Committee on National Security directed GAO to determine how well DOE is controlling foreign visits to its three weapons laboratories and whether these visits raise security or nuclear proliferation concerns. Since that time, we have issued to the Committee a Statement for the Record describing the number of foreign visitors to these laboratories and a report discussing the distribution of the fiscal year 1997 counterintelligence funds provided to DOE. This report completes our work on DOE's controls over foreign visitors and, as agreed with Committee staff, addresses DOE's (1) procedures for reviewing the backgrounds of foreign visitors and for controlling the dissemination of sensitive information to them, (2) security controls for limiting foreign visitors' access to areas and information within its laboratories, and (3) counterintelligence programs for mitigating the potential threat posed by foreign visitors.

To obtain an overall perspective on DOE's foreign visitor procedures, security controls, and counterintelligence efforts, we obtained and reviewed pertinent DOE and laboratory orders, documents, and other materials. We also met with and interviewed DOE headquarters, field office, and contractor officials, including officials from DOE's Offices of Defense Programs, Nonproliferation and National Security, and Policy and International Affairs in Washington, D.C., and in Germantown, Maryland, as well as officials at DOE's field locations in Albuquerque and Los Alamos, New Mexico, and in Livermore, California. We also met with contractor officials at the Lawrence Livermore National Laboratory in Livermore, California; the Los Alamos National Laboratory in Los Alamos, New Mexico; and the Sandia National Laboratories in Albuquerque, New Mexico. Furthermore, we met with officials from the FBI to obtain their views on the risk of, and control over, foreign visitors to DOE's laboratories.

In reviewing procedures on background checks for foreign visitors, we reviewed data on visits that occurred between January 1994 and December 1996. We examined records on visits and background checks contained in (1) DOE's centralized computer database on foreign visitors, (2) the laboratories' badging office and local foreign visitor databases, and (3) DOE's centralized counterintelligence database.
We did not independently verify the accuracy of the information in these databases; however, we did obtain additional verification of visit information as necessary to complete our review. In particular, our analysis focused on the adequacy of DOE's controls related to high-risk visitors (i.e., visitors from sensitive countries who potentially could have derogatory national security information on file). In this regard, we tracked information on such visitors by examining DOE's records on background checks and by independently obtaining some background checks from the FBI.

To examine the process used for identifying sensitive subjects and controlling the dissemination of such information to foreign visitors, we obtained and analyzed pertinent guidance on sensitive subjects and discussed with DOE and contractor officials (including some who had hosted foreign visitors) the methods by which visits involving sensitive subjects are identified. We examined records on several hundred visits that occurred from January 1994 through December 1996. We judgmentally selected for further analysis over 150 visits that were not identified as involving sensitive subjects and compared the visits' purpose and/or subject with those identified on DOE's sensitive subject list. We discussed these visits with DOE officials in the Nuclear Transfer and Supplier Policy Division, which is responsible for reviewing visits that involve sensitive subjects, to obtain their perspectives on the accuracy of the identification of sensitive subjects. Additionally, we followed up with researchers and managers at these laboratories who frequently host foreign visitors to determine whether individual research projects involving foreign nationals had involved sensitive subjects.

To assess the security controls associated with foreign visitors' access to certain areas and information within DOE's laboratories, we obtained and examined security procedures, plans, surveys, and statements of threat. Our work included a review of laboratory security infractions, violations, and occurrences, as well as laboratory counterintelligence contact/incident reports. Additionally, we obtained unclassified program and building security assessments that identified problems and vulnerabilities. While touring laboratory facilities with security personnel, we observed the security controls in place for both classified and unclassified sensitive research.

To review the counterintelligence programs, we interviewed DOE, laboratory, and FBI counterintelligence officials and obtained pertinent documentation regarding the potential threat posed by foreign visitors to the weapons laboratories and DOE's activities to counter this threat. In particular, we attended a classified counterintelligence briefing, held for staff of DOE's Albuquerque Operations Office, that discussed the foreign visitor threat. We also examined the laboratories' counterintelligence contact/incident reports and observed the capabilities of DOE's centralized counterintelligence database. In addition, we obtained and reviewed assessments of DOE's counterintelligence programs that had been conducted by other organizations in the U.S. intelligence community.

We encountered two limitations in our attempts to examine DOE's controls over foreign visitors to its laboratories. First, our request to the CIA for access to data on the backgrounds of foreign visitors was denied on grounds of the sensitivity of the data.
As a result, we were unable to review background information from the CIA that was on file at DOE or to independently obtain background data from the CIA. Second, we requested from the FBI specific information on possible espionage or other illegal activities at the laboratories. However, FBI officials told us that disclosure of such information is contrary to FBI policy; consequently, the requested information was not provided to us.

We provided a draft of this report to DOE for its review and comment. DOE's comments and our response are included at the end of chapter 5; the full text of DOE's comments is included in appendix IV. Our work was conducted from July 1996 through September 1997 in accordance with generally accepted government auditing standards. Major contributors to this report are listed in appendix V.

Although foreign visitors provide many benefits to DOE's programs, every visit to a nuclear weapons laboratory poses a risk that sensitive information might be inadvertently or intentionally compromised. To minimize this risk, DOE's foreign visitor order specifies several procedures that should be conducted before foreign nationals are allowed access to its laboratories. DOE has not effectively implemented two of the key procedures at the three laboratories we reviewed. More specifically:

Few national security background checks are being performed on visitors from sensitive countries. As a result, foreign nationals suspected by the U.S. counterintelligence community of having foreign intelligence affiliations have been permitted access to the laboratories without the advance knowledge of appropriate officials.

Because of unclear criteria regarding what constitutes a sensitive subject and the lack of an independent review process to examine the subjects to be discussed during visits, foreign visits involving potentially sensitive subjects—such as inertial confinement fusion, hydrodynamics codes, and the detection of nuclear weapons tests—are occurring without DOE's knowledge.

Without adequate knowledge about the foreign nationals who plan to visit its laboratories and the subjects to be discussed during those visits, DOE cannot take appropriate action to ensure that their visits are properly controlled. This heightens the risk that such visitors may obtain, either directly through active intelligence efforts or indirectly through involvement in laboratory activities, information whose disclosure to certain countries would be detrimental to the United States.

Background checks can provide DOE and its weapons laboratories advance warning of possible problems or concerns with a foreign visitor, and DOE's foreign visitor order contains requirements for obtaining background checks for visitors from sensitive countries. However, DOE granted two laboratories—Los Alamos and Sandia—a partial exception from complying with its requirements. As a result, few background checks have been initiated for foreign visitors to those two facilities.

As part of its process to approve foreign visitors, DOE requires that national security background checks (termed indices checks by DOE) be conducted on certain foreign visitors to its laboratories. Under DOE's order, background checks are required for all sensitive-country assignees (those whose visits will exceed 30 days).
Additionally, background checks must be proposed by the laboratories to DOE's Counterintelligence Division for short-term visitors from sensitive countries, but the division has the discretion to determine whether the background check should be done. For example, the Counterintelligence Division may choose to request background checks on sensitive-country visitors who will be entering security areas or discussing sensitive subjects. The checks are obtained from government intelligence and investigative agencies, such as the CIA and the FBI. At DOE's request, these agencies review their files and report to DOE whether intelligence information of a derogatory nature exists about a particular visitor (e.g., that the visitor is suspected of having ties to a foreign intelligence service or terrorist group). DOE's order also requires that some background checks—those considered necessary to approve a visit or assignment—be completed before the visit or assignment begins. Many other checks—those considered necessary for counterintelligence research purposes only—need not be completed before the visit begins.

Although DOE uses the results of these background checks to approve proposed visits and to help mitigate any risks related to them, the existence of derogatory information about a foreign visitor does not preclude a visit from occurring. According to DOE officials, if a background check reveals derogatory information about a foreign visitor, the visit is rarely denied. Instead, DOE allows the visit to occur but, depending on the results of the check and other factors, may increase the stringency of escort requirements or may restrict the length of the visit, the buildings to be accessed, or the subjects to be discussed. Thus, the background check serves as a means to forewarn DOE and laboratory officials of possible national security concerns so that they may devise appropriate countermeasures where needed.
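The check-triggering rules just described, together with the visit and assignment definitions given earlier in this chapter, can be restated compactly. The following sketch is illustrative only: the field and function names are ours, it simplifies the order's requirements, and it covers background checks only (DOE review and approval of sensitive-subject visits is a separate requirement):

    from dataclasses import dataclass

    @dataclass
    class VisitRequest:
        sensitive_country: bool   # visitor's country is on DOE's sensitive list (app. I)
        duration_days: int        # stays of more than 30 days are assignments
        security_area: bool       # visit would enter an area where classified work occurs
        sensitive_subject: bool   # visit involves a subject on the sensitive list (app. II)

    def indices_check_disposition(visit: VisitRequest) -> str:
        """Summarize how DOE's order treats a national security background check."""
        if visit.sensitive_country and visit.duration_days > 30:
            # Sensitive-country assignees: a check is mandatory and must be
            # completed before the assignment begins.
            return "required; complete before the assignment begins"
        if visit.sensitive_country:
            # Short-term sensitive-country visitors: the laboratory proposes a
            # check, and the Counterintelligence Division decides; entry to a
            # security area or discussion of a sensitive subject makes a
            # request more likely.
            if visit.security_area or visit.sensitive_subject:
                return "discretionary; likely to be requested"
            return "discretionary"
        # The order imposes no background check requirement for visitors
        # from nonsensitive countries.
        return "not required by the order"

    # Example: a 90-day assignment from a sensitive country.
    print(indices_check_disposition(VisitRequest(True, 90, False, False)))

Note that even when a check is required, its result governs countermeasures rather than entry; as discussed above, derogatory information rarely causes a visit to be denied.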
Few background checks are performed for visitors to DOE's Los Alamos and Sandia laboratories. In August 1994, these laboratories implemented a partial exception from the foreign visitor order that was granted by DOE. Under the terms of this exception, the two laboratories are required to request background checks only on those foreign visitors planning to enter a security area at the laboratory or to discuss sensitive subjects. According to DOE and laboratory officials, the partial exception for Los Alamos and Sandia was granted because of the high volume of foreign nationals desiring to visit these weapons laboratories, which contributed to processing backlogs, and because of the costs associated with processing paperwork for foreign visitors. Laboratory officials said the processing backlogs caused delays that resulted in some visits having to be canceled because background checks had not been completed.

The partial exception has limited the number of requests for background checks on visitors to Los Alamos and Sandia. As a result, DOE obtains relatively few background checks on visitors to those laboratories, particularly in comparison to Livermore, which did not request an exception from the order's requirements. Our review of DOE's records of foreign visitors showed that, during the 3-year period from 1994 through 1996, background checks were obtained on only 5 percent of the visitors from sensitive countries to Los Alamos and Sandia. In contrast, Livermore requested checks on many more names during that time frame, and background checks were obtained on 44 percent of the visitors from sensitive countries to this laboratory. Table 2.1 compares the number of background checks obtained on sensitive-country visitors for the three laboratories. Data on foreign visitors from individual sensitive countries also showed significant differences among the laboratories. For example, 46 percent of the Russian visitors to Livermore were checked during that 3-year period, compared to 10 and 7 percent, respectively, for Los Alamos and Sandia. Furthermore, 39 percent of the Chinese visitors to Livermore were checked, compared to 2 and 1 percent, respectively, for Los Alamos and Sandia. (See app. III for numbers and percentages for all sensitive countries.)

By checking the backgrounds of so few visitors from sensitive countries, particularly to Los Alamos and Sandia, DOE limits the collection of basic counterintelligence data and may be unknowingly allowing significant numbers of visitors with questionable backgrounds into its weapons laboratories. According to FBI counterintelligence officials, the low percentage of background checks conducted on Russian and Chinese visitors to Los Alamos and Sandia does not constitute effective use of the background check process. Statistics on the results of the background checks that DOE did request support this view: of all the background checks DOE obtained on visitors from sensitive countries to the weapons laboratories during the 1994 through 1996 time frame, about 4 percent indicated the existence of derogatory information. Moreover, we noted during our review that people with suspected foreign intelligence connections were allowed into the laboratories without background checks. We were able to document 13 instances in which persons with suspected foreign intelligence connections were allowed access without background checks—8 visitors went to Los Alamos and 5 went to Sandia—during the 1994 through 1996 period. Available records also indicated that 8 other persons with suspected connections to foreign intelligence services were approved for access to Sandia during the period; however, DOE and Sandia lacked adequate records to confirm whether these persons actually accessed the facility. Although we could not confirm that any of these visits compromised U.S. security, at a minimum, the lack of a background check did not provide DOE the opportunity to implement countermeasures to mitigate the potential risk posed by these visits. Also, all of these instances occurred at the two weapons laboratories that had been granted a partial exception to DOE's foreign visitor order.

DOE's requirements for national security background checks represent a continuing problem that we previously identified in a 1988 GAO report and about which elements of the U.S. intelligence community have also expressed concern. In discussing this problem, DOE and laboratory counterintelligence officials said that they recognize that the number of background checks obtained on foreign visitors has been limited, especially at Los Alamos and Sandia, and that these checks should be routinely requested for visitors from sensitive countries. They added that although data from a background check—even derogatory data—are rarely used to deny a visitor access to a laboratory, obtaining such information is beneficial in identifying individuals known to be a risk.
DOE headquarters counterintelligence officials said their long-term goal is to obtain background checks on all foreign nationals from sensitive countries who seek to visit any of these three laboratories. In the interim, according to a Sandia counterintelligence official, that laboratory is now reporting data on all sensitive-country visitors to DOE headquarters for potential background checks.

DOE has little assurance that all visits during which sensitive, but unclassified, subjects will be discussed are identified and brought to the attention of DOE officials. According to DOE's order, DOE officials are to review and approve visits by foreign nationals that involve sensitive subjects. But DOE and laboratory personnel alike are unclear about what constitutes a sensitive subject, and little or no independent review takes place to assess subjects within the context of the planned visit (e.g., taking into account the purpose of the visit, the particular aspects of the subject to be discussed, and the foreign country and individuals involved). As a result, sensitive information could be discussed or otherwise disclosed to foreign nationals without DOE's knowledge and approval.

To minimize the risk of inappropriate subjects being discussed with foreign nationals, DOE's order requires that its laboratories identify any visit involving a sensitive subject for review and approval by DOE. The order defines sensitive subjects as unclassified subjects involving information, activities, or technologies relevant to national security. To facilitate their identification, the order contains a list of sensitive subjects, including nuclear weapons production and supporting technologies, nuclear explosion detection, inertial confinement fusion, production and handling of plutonium, and fuel fabrication. Additionally, the order contains three criteria for identifying other subjects that may be sensitive: subjects are considered sensitive if they relate to technologies under export control, "dual-use" technologies that have both peaceful and military applications, or rapidly advancing technologies that may become classified or placed under export control. Subjects in these categories include computer systems, component development, and software specifically designed for military applications; extremely high-energy, high-brightness lasers and particle beams; and high-energy-density batteries and fuel cells.

The responsibility for reviewing visits involving sensitive subjects rests with DOE's Nuclear Transfer and Supplier Policy Division in the Office of Arms Control and Nonproliferation. This division also reviews private-sector exports of information and technology that could be useful to a foreign nuclear or nuclear weapons-related program. According to division officials, while the discussion of a sensitive subject with a foreign national is not necessarily prohibited, DOE needs to be aware of any such discussions to ensure their consistency with U.S. policy regarding the transfer of that information to the foreign national's home country. The officials added that the need for DOE's review and approval does not depend on the visitor's home country—the discussion of any sensitive subject with a foreign visitor, even one from a nonsensitive country, requires DOE's review and approval.

DOE's three weapons laboratories have not adequately identified visits involving sensitive subjects.
Between January 1994 and July 1996, they identified a total of 72 visits involving sensitive subjects; the majority of these visits were related to areas specified as sensitive in DOE's order. For example, 5 Russian citizens visited Los Alamos in 1994 for a 3-day visit involving nuclear materials control, accounting, physical protection, security, export control, and critical assembly facilities; 13 Russian nationals visited Los Alamos in 1995 for a 1-day workshop on plutonium stabilization, storage, and disposition; and 30 French nationals visited Livermore in 1995 for 1- to 2-year assignments to work on inertial confinement fusion. However, our review of records on 167 other visits found numerous cases that pertained to subjects that were either specified as sensitive in DOE's order or were potentially sensitive but were not identified as such by the laboratories. For example:

Sixteen visits and assignments to Livermore involved inertial confinement fusion, a technology specifically listed as sensitive in DOE's order. These visits included foreign visitors who were participating in a formal bilateral cooperative effort, including the transfer of proprietary data, between the United States and France on subjects related to inertial confinement fusion. On other occasions, Livermore has identified this type of visit as involving a sensitive subject.

A Canadian citizen was on an assignment to Livermore to discuss equation-of-state measurements using laser-generated shock waves—work that was acknowledged to be important to the inertial confinement fusion program, a sensitive subject area.

An Indian citizen from a defense-related facility in India was on an assignment to Los Alamos that involved the structure of beryllium compounds. Beryllium metal is used in nuclear weapons.

An Indian citizen was on assignment to Los Alamos for work related to pattern recognition/anomaly detection algorithms. This work was acknowledged to be dual use in nature, with applications related to national security, such as nonproliferation and satellite image processing, as well as to nondefense projects.

A Russian visit to Los Alamos involved collaboration on processes related to detecting unsanctioned nuclear weapons tests. Nuclear explosion detection is specifically identified as a sensitive subject in DOE's order.

A citizen of the United Kingdom was assigned to Livermore for 3-dimensional hydrodynamic simulations of implosions. Hydrodynamics and 3-dimensional calculations are important to simulating nuclear weapons tests, particularly in light of the ban on nuclear testing.

We reviewed copies of the documentation on these visits and discussed them with officials in DOE's Nuclear Transfer and Supplier Policy Division to obtain their perspectives on whether the visits may have involved sensitive subjects. They said that it was not possible to fully ascertain whether these visits did or did not involve sensitive subjects; however, they pointed out that many of the visits appeared to involve subjects that are specifically identified as sensitive in DOE's order and that others appeared to have some weapons or dual-use applications. The export control officials said that, according to the stated purpose of the visits described in their documentation, the visits involved subjects that should have been sent for their review.
DOE’s weapons laboratories have had problems identifying visits involving sensitive subjects largely for two reasons—confusion over how to apply the sensitive subject criteria and the lack of an independent technical review of proposed foreign visits to identify those involving sensitive subjects. According to laboratory program managers and hosts of foreign visitors, DOE’s criteria for identifying sensitive subjects are very broad and do not clearly define which activities are covered. The laboratory managers added that the current list of sensitive subjects is outdated, incomplete, and does not establish reasonable parameters within which they could reasonably gauge a subject’s sensitivity. As an example of the difficulty in applying the criteria, they noted that while inertial confinement fusion is listed as a sensitive subject because of its relationship to nuclear weapons testing, most aspects of this technology are unclassified and widely researched throughout the world and that the laboratory’s unclassified inertial confinement fusion work is published and freely disseminated. They added that without more specific criteria from DOE, they generally view activities in inertial confinement fusion and other areas that are unclassified, already published, or will ultimately be published, to be nonsensitive. DOE officials with the Nuclear Transfer and Supplier Policy Division acknowledged that although there are difficulties in identifying sensitive subjects, the laboratories are interpreting the order’s criteria too narrowly. They said that the sensitivity of a subject may at times be subjective and it often depends on the country to which the information will be divulged, the state of that country’s technology and research efforts, and other information on that country’s needs and intentions regarding the use of the technology. However, they added that hosts are not in a position to know that information and/or whether it is consistent with U.S. government policy to provide that information to the country in question. The list of sensitive subjects serves as a guideline to identify such visits for scrutiny by DOE officials who possess the necessary expertise to determine whether it is appropriate to discuss a particular subject with a foreign visitor from a specific country. A second problem hindering the identification of visits involving sensitive subjects is the lack of an independent review of proposed visits by individuals with technical expertise to help ensure sensitive subjects are properly identified. During the period of our review, DOE and the weapons laboratories relied upon the host—the laboratory employee sponsoring the foreign visitor—to accurately identify sensitive subject visits. Such visits were approved by the appropriate laboratory division management and by officials in the foreign visits and assignments office at each laboratory. However, little or no independent review of the subject of those visits had been conducted to ensure that sensitive subjects were not involved. At Sandia and Los Alamos, officials in the foreign visits and assignments office review requests for foreign visitor access; however, those individuals do not have a technical background or expertise to judge if a sensitive subject is involved. At Livermore, visit requests are reviewed in the office of the laboratory director, as well as at the DOE operations office; however, this laboratory’s review has at times been delegated to individuals from the foreign visits and assignments office. 
Laboratory personnel from the foreign visits and assignments offices told us that they are not fully knowledgeable about activities that could be sensitive and that they generally rely on the host to determine whether a visit would involve a sensitive subject.

DOE and the weapons laboratories have recognized problems with identifying visits involving sensitive subjects and have begun actions to address them. In the fall of 1996, DOE initiated a multi-issue effort to revise its foreign visit and assignment order. This effort will include examining the controls over foreign visits involving sensitive subjects and developing a better process and/or criteria by which to identify them. According to officials in DOE's Counterintelligence Division, which is involved in the effort, the revised order is expected to be issued by the end of 1997. However, because the revision of the criteria for identifying sensitive subjects has not yet begun and has no timetable for completion, the officials do not know whether changes to clarify DOE's criteria for identifying visits involving sensitive subjects will be included in the revised order.

During our review, two of the three laboratories established interim local processes to examine requests for foreign visitors to better ensure that their visits do not involve discussions of sensitive subjects. In August 1996, Livermore began requiring that all visits involving foreign nationals from sensitive countries be reviewed by an official in its Arms Control and Treaty Verification Program who has had experience with nuclear weapons and associated technologies. These reviews are specifically to assess the technology involved and identify those requests that involve sensitive subjects. According to the Livermore official conducting these reviews, although most visits have not involved sensitive subjects, he has identified some visits of concern, for which actions were taken to help ensure that sensitive subjects would not be involved. In December 1996, Sandia began requiring that all requests for foreign visitors be reviewed by a Sandia official involved in export control to better ensure that visits involving sensitive subjects are adequately identified.
Although some foreign visitors are allowed access to the more restrictive security areas where classified work is conducted, most foreign visits occur in designated controlled areas—often termed property protection areas—which may contain unclassified sensitive information. A lower level of security is provided in these areas, and the controls used vary among the laboratories. DOE and the laboratories use a multilevel, graded security approach to limit access and protect information at their facilities. Open areas, which include locations on laboratory property to which the general public is allowed access, receive a low level of protection. Open areas can include cafeterias, visitors centers, and museums. Controlled areas, which receive a higher level of protection, can include small areas, such as an individual building, as well as larger areas, such as building complexes. Access to these areas is controlled because of the presence of valuable property or unclassified sensitive information, but no classified work is conducted in these locations. Unclassified sensitive information includes information that has been designated Official Use Only, proprietary, export controlled, Privacy Act, and Unclassified Controlled Nuclear Information. An even higher level of protection and stricter access limitations are maintained for security areas containing classified information and technologies or in which nuclear weapons or other classified research is conducted. These areas are closely monitored and patrolled, and controls traditionally include guns, guards, and gates. Specific security plans must be developed and approved before any foreign visitor is allowed access to these areas and the visitor must be escorted at all times. Most foreign visitors to the weapons laboratories are granted access to the controlled areas. Laboratory records show that on average only about 5 to 10 percent of all foreign visitors are permitted into security areas where classified work is performed, and according to DOE and laboratory officials, such access is usually for a short period of time. The remaining visitors are either allowed into the controlled areas or meet with laboratory employees in open areas. DOE and laboratory officials were not able to identify the percentage of those visitors that went to controlled areas, but stated that most were allowed into these locations. Because valuable property and information that is unclassified, but sensitive, is located in controlled areas, DOE requires the laboratories to protect these areas through the use of a variety of security controls. Controls used to reduce the risks posed by foreign nationals in controlled areas include the following: A distinctive identification badge must be worn by foreign visitors at all times. Access is controlled by automated devices or by receptionist staff and manual visitor logs. Automated devices include equipment that reads encoded access cards and/or requires passwords. Standard or “generic” security plans are drafted for controlling foreign visits in the area. A host is designated, who is a laboratory employee responsible for the activities of the foreign national while at the laboratory. A visitor or assignee is not permitted to be a host. Random searches are conducted on vehicles or hand-carried items entering or leaving the area. Among the three laboratories, however, the security controls associated with foreign visitors in controlled areas are not consistently applied. 
In particular, each of the laboratories has different requirements for allowing foreign visitors after-hours access. At Livermore, foreign visitors are not allowed unescorted after-hours access to controlled areas without the specific written approval of laboratory security officials and the concurrence of the local DOE field office. According to Livermore security officials, while they have granted such access for some foreign visitors, they do not approve unescorted after-hours access for visitors from sensitive countries. At both Los Alamos and Sandia, unescorted after-hours access to controlled areas has been permitted. These laboratories have required the host to monitor the foreign visitor—that is, be aware of the foreign visitor’s location and activities—but not necessarily be physically present. Recently, Sandia revised its after-hours access policy. In November 1996, Sandia stopped allowing foreign nationals unescorted after-hours access to controlled areas without the approval of its counterintelligence office. According to Sandia and DOE officials, this change was made because of the potential for security problems that could result from unescorted access. Los Alamos, however, continues to allow unescorted after-hours access to preserve what one official described as an open “campus atmosphere” for researchers at its facilities. Laboratory policies also vary regarding random searches in controlled areas and the appearance of foreign visitor identification badges. While all of the laboratories officially permit random searches in controlled areas, at one of the laboratories such searches are discouraged during normal work hours. Additionally, the distinctive color and wording of badges for foreign visitors differ among the laboratories. For example, at Livermore those badges are white (for visits) or red (for assignments), at Los Alamos badges for foreign visitors are red, and at Sandia those badges are gray. Furthermore, unlike the badges at the other laboratories, Sandia’s badges contain no wording pertaining to the visitors’ countries of citizenship or indicating that the wearers are not U.S. citizens. Finally, neither Los Alamos nor Sandia has developed security plans—even generic ones—for foreign nationals who will be in controlled areas. The DOE order governing unclassified foreign visits and assignments identifies security plans as the basic means by which vital information is protected and requires that they be developed. However, DOE and laboratory officials told us that because of the exception granted by DOE to these two laboratories—which also streamlined requirements for background checks and visit approvals—security plans are no longer required for visits to controlled areas. Livermore has not sought such an exception and requires a generic security plan for all foreign visitors to its controlled areas. Available data from the weapons laboratories showed that the sensitive information in controlled areas has been vulnerable to compromise. Between 1991 and 1997, laboratory security assessments and records identified vulnerabilities and problems involving foreign visitors, and in buildings and programs to which those visitors had access. Records of vulnerabilities and problems included improper releases of information and failures to follow security controls and requirements.
Assessments and records from all three laboratories indicated vulnerabilities and problems involving the improper release of unclassified sensitive information and classified information in unclassified settings. In most of these cases, the information was actually or potentially available to foreign visitors. Whether or not they personally host foreign visitors, all laboratory employees must adequately protect classified or unclassified sensitive information and not disclose it unless authorized. However, examples of improper releases included the following:
- Unclassified sensitive documents and materials had been improperly discarded in trash, recycling bins, or hallways. At one of the laboratories, six boxes of papers marked “sensitive material” in red letters on the outside were left in an open hallway in an area accessible to foreign visitors.
- At one of the laboratories, a division’s open-access newsletter, which was accessible to the foreign visitors it was hosting, provided information on corporate and laboratory research agreements, the development of certain computer codes, and DOE’s nuclear program.
- Classified information had been inadvertently divulged by laboratory employees during unclassified workshops or conferences to foreign visitors, some of whom were from sensitive countries.
- A departmental newsletter containing classified information was sent to 24 uncleared individuals, 11 of whom were foreign visitors. Some of the foreign visitors were from a sensitive country.
Vulnerabilities and problems associated with employees’ failures to follow security requirements and controls were also identified in the laboratories’ records. The following are several examples:
- In one case, a laboratory employee in a building to which foreign visitors had access failed to question the unauthorized removal, by members of a security assessment team during a test exercise, of a complete computer system from a controlled area. The employee did not challenge the team’s activities despite the fact that its members were not wearing identification badges and were openly discussing plans to remove additional machines and equipment in an effort to appear suspicious.
- On 10 separate occasions, a laboratory employee hosted visitors from sensitive countries without following visit approval requirements or gaining appropriate authorizations prior to those visits. Another host at the same laboratory met foreign visitors off-site without proper approval after a laboratory official advised him that he could “receive a reprimand, but it would not jeopardize his security clearance.”
- In another case, a host, when confronted with a requirement to limit the after-hours laboratory access of certain sensitive-country assignees assisting him with his research, moved the visitors and his research to an off-laboratory location.
- On several occasions, there were miscellaneous failures to follow security procedures, including computers left on and unattended without password protection, improper escorting of foreign visitors who required such oversight, and unauthorized back-door entry to controlled areas to which many foreign visitors had access.
DOE and laboratory security officials told us that they are concerned about, but not surprised by, vulnerabilities and problems in controlled areas. The openness under which unclassified research programs operate poses a dilemma in an age of economic competitiveness.
DOE’s own security awareness literature states that although many employees realize the importance of protecting classified information, few are aware of the significance of unclassified sensitive and proprietary information. Furthermore, DOE and laboratory security officials told us that the security consciousness of employees working in controlled areas is more relaxed than in security areas where classified research is conducted. While some security officials said that they would like to see a stronger emphasis on security in controlled areas at the laboratories, others said that some technical and research staff do not place a high priority on security and actually see it as an impediment to their work. Neither the laboratories nor DOE has fully assessed the controls over unclassified but sensitive information. At the laboratories, operations security (OPSEC) assessments are performed to identify vulnerabilities. However, only at Sandia has there been an assessment that specifically focused on controls over unclassified sensitive information in controlled areas to which foreign visitors have access. Furthermore, while DOE has assessed overall laboratory security operations on a regular basis, its assessments have not addressed the protection of unclassified sensitive information in controlled areas. DOE requires the use of OPSEC techniques and measures to help protect information and activities related to national security and government interests. The purpose of OPSEC is to disrupt or defeat the ability of foreign intelligence or other adversaries to acquire sensitive or classified information and to prevent the unauthorized disclosure of such information. Each of the laboratories has an OPSEC program and uses OPSEC assessments to identify security vulnerabilities associated with specific laboratory facilities or programs. To identify vulnerabilities, OPSEC personnel assess various practices, including physical security and access controls; visitor log and escort procedures; availability of sensitive information on bulletin boards, in meeting rooms, and in offices; document disposal and destruction methods; and computer access protections. While all three laboratories have performed OPSEC assessments, only Sandia has conducted an assessment specifically focused on controls over unclassified sensitive information in controlled areas to which foreign visitors have access. Sandia’s assessment was completed in March 1997, and although it found no indication that the laboratory had allowed foreign visitors to compromise proprietary or sensitive information, it concluded that Sandia needed to define a policy concerning areas and information sources to which foreign nationals should have access. Subsequently, Sandia changed the process for controlling foreign visitors’ access to, and work in, controlled areas. Foreign nationals visiting Sandia for more than 30 days now work in “export controlled zones”—locations within controlled areas where they can work with their respective project teams but are restricted from unauthorized access to research in the surrounding area. The OPSEC assessments at Livermore and Los Alamos have not yet examined foreign visitors’ access to sensitive information. Livermore’s past OPSEC assessments have dealt with visitors in general but have not specifically addressed foreign visitors and the potential for them to access sensitive information. Livermore’s OPSEC manager said that the laboratory plans to conduct two such assessments before the end of 1997.
Similarly, Los Alamos’ OPSEC assessments have included some issues related to foreign visitors, such as their access to open and secure areas, but they have not focused on assessing whether foreign visitors could obtain sensitive information. In addition to the laboratories’ OPSEC assessments, DOE conducts broader periodic surveys of the laboratories’ security operations, including visits and assignments involving foreign nationals; these surveys are intended to be comprehensive assessments of each laboratory’s security operations. Generally, DOE’s surveys are performed every year or two, depending on the findings of the previous survey for a specific laboratory. The most recent surveys at Los Alamos and Sandia were completed in March and April of 1997, respectively. The most recent survey of Livermore’s program was completed in August 1996. In these surveys, each laboratory’s foreign visits and assignments program was rated satisfactory. However, the primary focus of these surveys was on the program’s organization, management, and operations, and not on information protection. As a part of DOE’s past surveys, each laboratory’s program was evaluated by conducting interviews, reviewing documentation, and testing performance. The surveys did not address protection of unclassified sensitive information in controlled areas—in general or in association with foreign visitors. For example, while several sections in the survey report on security at Livermore addressed the effectiveness of its controls over classified information, none addressed the adequacy of protections for unclassified sensitive information. DOE’s headquarters and field counterintelligence programs are an important part of its defense against foreign espionage efforts at the nuclear weapons laboratories. Foreign visitors to these laboratories have open, often long-term, access to personnel with detailed knowledge and expertise in classified and/or sensitive matters. Although this situation is viewed by counterintelligence experts as an ideal opportunity for foreign intelligence-gathering efforts, DOE has not comprehensively assessed the threat of foreign intelligence against the laboratories. A thorough assessment that identifies countries of concern, the technologies and information these countries are seeking, and the programs that are likely to be targets of foreign intelligence is important if DOE and its laboratories are to understand and reduce the dangers posed by foreign visitors. Furthermore, DOE has not developed any meaningful programmatic measures by which to evaluate the effectiveness of the laboratories’ counterintelligence programs, nor has it periodically evaluated those programs. Recently, DOE initiated several actions to strengthen the counterintelligence programs, both at headquarters and at the laboratories. The mission of DOE’s counterintelligence programs is to implement effective defensive efforts departmentwide to deter and neutralize foreign government or industrial intelligence activities in the United States directed at or involving DOE. DOE’s headquarters Counterintelligence Division, within the Office of Energy Intelligence, has overall responsibility for this mission and for counterintelligence activities throughout DOE. Staffed with seven DOE employees and seven contract employees, DOE’s Counterintelligence Division is responsible for such activities as conducting various threat assessments, identifying foreign intelligence activities directed against DOE, and overseeing each laboratory’s counterintelligence program.
DOE’s threat assessments can vary from a comprehensive, DOE-wide threat assessment to a narrowly focused threat assessment that examines a specific issue, such as a particular foreign country’s interest in DOE’s assets. DOE’s Counterintelligence Division is responsible for implementing counterintelligence policies and procedures throughout DOE. This responsibility includes (1) developing and implementing methods, techniques, standards, and procedures for DOE’s counterintelligence activities; (2) establishing a briefing and debriefing program for foreign travel and contacts; and (3) monitoring visits and assignments of foreign visitors to all of DOE’s facilities. Each laboratory has its own counterintelligence program, which is conducted in compliance with DOE’s requirements, and laboratory counterintelligence officers report directly to laboratory management. The laboratories’ programs emphasize employee briefings and debriefings as well as increasing employees’ awareness and knowledge about counterintelligence. Briefings and debriefings of employees take place prior to and/or after an event (e.g., when hosting a foreign visitor or when taking a foreign trip). In briefings, counterintelligence officers provide information to employees on such concerns as the types of subjects to avoid discussing with foreign visitors. In debriefings, these officers obtain information from the employees that can help DOE determine if there are indications that intelligence services are trying to target that laboratory or its staff. Additionally, counterintelligence activities at each laboratory include initial investigations of possible foreign intelligence efforts to determine if referral to appropriate federal agencies would be warranted, liaison with federal agencies, and gathering and recording such basic counterintelligence information as foreign visitors’ activities at a laboratory and persons contacted. DOE officials estimate that operating the headquarters counterintelligence program costs about $1.8 million annually. For fiscal year 1996, DOE’s three weapons laboratories had total counterintelligence program funding of $905,000 and 9.4 counterintelligence staff years—funding of $552,000 and 5.5 staff years at Livermore, funding of $100,000 and 1.1 staff years at Los Alamos, and funding of $253,000 and 2.8 staff years at Sandia. To understand the dangers posed by foreign visitors, DOE needs to perform a comprehensive assessment of the threat to its laboratories from foreign intelligence services. According to DOE and the FBI, the operation of an effective counterintelligence program is predicated upon a realistic and comprehensive examination of the foreign intelligence and insider threats. For example, according to the FBI, only a comprehensive threat assessment can address the issue of whether foreign intelligence services are making a concerted effort to target DOE laboratories, and if so, how they can work together to counter the threat. This threat assessment can also provide senior managers with an analysis of the global threat and of the information and technologies at DOE and the laboratories that are most at risk. Specific assessments, which are targeted studies that focus on country-specific issues, and annual foreign visitor statistical studies are also important because they can inform the laboratories about items of counterintelligence concern.
This information can then be used by counterintelligence officers at each laboratory to mitigate the potential risk to that laboratory and its employees. For instance, information contained in these studies can be used to alert a laboratory’s senior management and staff during briefings. While DOE officials recognize the importance of both types of assessments, DOE headquarters’ counterintelligence analysis has focused on specific assessments and has not addressed the overall threat to its facilities. In recent years, DOE has conducted about 25 specific assessments, which have examined specific threats or, in some cases, have been statistical studies. For example, DOE has assessed the threat that Russian organized crime poses to DOE and has examined Pakistan’s access to DOE’s resources. In many cases, such studies were based on the work of other agencies, such as the CIA or FBI, or were contracted out. While these studies can be useful in identifying a threat on a single issue, they do not relate the global foreign intelligence threat to the local situation at a specific weapons laboratory. DOE counterintelligence officials at headquarters said that they need to do a comprehensive threat assessment that relates the global foreign intelligence threat to the laboratories, but they have been limited in their ability to do so. They said that specific threat assessments have had a higher priority because these studies meet the more immediate needs of the laboratories. Moreover, DOE’s Counterintelligence Division has not had the staffing or analytical expertise required for this effort. In this regard, DOE’s counterintelligence officials said that they will need to rely on information from other agencies to do a comprehensive threat assessment. Recognizing the need for a comprehensive threat assessment, in the fall of 1996 the then Deputy Secretary of Energy directed each of the weapons laboratories to conduct its own threat assessment, which DOE would then use to develop an overall, comprehensive threat assessment. Although the laboratories are in the process of completing their site threat assessments, according to a DOE counterintelligence official, the Department may not be able to develop a comprehensive assessment unless its priorities change and DOE receives assistance from the U.S. intelligence agencies in obtaining the sensitive intelligence information that is critical to developing this assessment. Oversight of the laboratories’ counterintelligence programs and their activities—particularly setting expectations for program performance and periodically evaluating that performance—is one of the major responsibilities of DOE’s Counterintelligence Division. However, DOE has not developed meaningful performance measures or expectations for the laboratories’ counterintelligence programs or conducted periodic evaluations of them. DOE’s oversight has been hampered, in part, because funding for the laboratories’ programs has come through laboratory overhead accounts instead of directly from DOE. Meaningful performance measures for the laboratories’ counterintelligence programs are important because they would help gauge whether or not those programs are achieving their intended purposes. According to DOE Order 5670.3, Counterintelligence Program, DOE is responsible for developing and implementing performance measures for counterintelligence activities throughout the Department.
However, according to a counterintelligence official at headquarters, DOE has not developed any performance measures or expectations to evaluate the laboratories’ counterintelligence programs because DOE’s contracts with the laboratories do not obligate their counterintelligence programs to follow any such measures DOE may develop. According to this official, DOE is considering both amending those contracts to address this problem and issuing guidance and policy to define performance measures and expectations for the laboratories to follow and be evaluated against. This will be done after DOE completes its comprehensive threat assessment. DOE’s periodic evaluations of the laboratories’ counterintelligence programs are also important because they provide assistance to each laboratory and help determine the effectiveness of its program. DOE’s counterintelligence order requires that the headquarters Counterintelligence Division oversee the implementation of counterintelligence policy and procedures at the laboratories. However, officials from that division could identify only one review it has conducted at the weapons laboratories: a 1996 “staff assistance visit” to Los Alamos. DOE concluded from this visit that because of inadequate staffing, Los Alamos’ counterintelligence program was not comprehensive and only minimally accomplished the requirements of DOE’s counterintelligence order. At that time, Los Alamos had one counterintelligence officer. Livermore and Sandia have not had their counterintelligence programs reviewed by DOE headquarters. According to a DOE official, evaluations at Livermore and Sandia have not occurred because of other higher-priority work, such as the specific threat assessments mentioned earlier. In addition, this official said that DOE cannot require its laboratories to implement any recommendations that might result from such evaluations. Without periodic evaluations of all the laboratories’ counterintelligence programs, assessing their effectiveness and objectively comparing one program with another will be difficult. One factor that makes control by DOE headquarters over the laboratories difficult is that the counterintelligence programs are not funded directly by DOE’s Counterintelligence Division. Until recently, each laboratory’s program had been funded entirely from that laboratory’s funds and, consequently, each laboratory operated its program autonomously. Accordingly, each laboratory’s commitment to its program has differed, as illustrated by the difference in staffing levels. For example, while Livermore’s counterintelligence program had 5.5 staff years in 1996, Los Alamos’ program had only 1.1 staff years, despite having almost twice as many visitors from sensitive countries. According to the FBI, which has examined DOE’s counterintelligence program, the structure of DOE and its relationship with contractor-operated laboratories have resulted in the laboratories’ assuming a high degree of autonomy. This has resulted in a gap between authority and responsibility, particularly when national interests compete with the specialized interests of the academic or corporate management that operate the laboratories. Furthermore, the FBI found that this autonomy has made national guidance, oversight, and accountability of the laboratories’ counterintelligence programs arduous and inefficient.
Moreover, DOE’s Counterintelligence Division lacks direct management oversight and control to ensure that the laboratories comply with its policies. This frequently puts each laboratory’s counterintelligence staff in the awkward, if not difficult, position of dividing their loyalties between the interests of the laboratory in pursuing cutting-edge research and development and the need to safeguard U.S. national security interests. DOE has recently recognized that its counterintelligence program has been inadequate and has taken steps to strengthen it. The Congress appropriated $5 million to DOE in counterintelligence funding for fiscal year 1997 in addition to its budget request, and DOE has used much of these funds to support the counterintelligence programs at the weapons laboratories. In November 1996, DOE’s Deputy Secretary expressed concerns about the presence of foreign visitors at the laboratories, and as a result, several departmentwide corrective actions are now underway. In the spring of 1996, the director of DOE’s Office of Energy Intelligence briefed the staff of several congressional committees about the concerns raised by the increasing number of foreign visitors to its laboratories and the threat they posed. In the fall of that year, the Congress provided DOE with an additional $5 million for fiscal year 1997 to expand counterintelligence activities at its weapons laboratories and other high-risk facilities. Of the $5 million, about half—$2.47 million—went to the three nuclear weapons laboratories. The additional funds were used to increase the number of counterintelligence staff at those laboratories and for counterintelligence-related analyses. As a result, DOE has increased the counterintelligence staff at the weapons laboratories. On November 21, 1996, the then Deputy Secretary of Energy initiated several corrective measures to improve DOE’s foreign visitors program. The Deputy Secretary met with officials of five DOE facilities: the three weapons laboratories, the Oak Ridge National Laboratory, and the Pacific Northwest Laboratory. Among the corrective measures the Deputy Secretary and these officials agreed to complete during fiscal year 1997 were the following:
- Develop training in export control and provide that training to laboratory staff at those five facilities.
- Develop new guidance on unclassified but sensitive subjects (i.e., matters unsuitable for discussion with a foreign visitor).
- Develop laboratory threat assessments of foreign visits and assignments.
- Develop a DOE-wide comprehensive threat assessment of foreign visits and assignments.
However, counterintelligence officials at headquarters expressed concerns about DOE’s ability to complete these initiatives because DOE has historically given its counterintelligence program a low priority and because the laboratories have tended to resist headquarters management. They said that they are hopeful that DOE’s current Secretary will support these initiatives in the counterintelligence program. With the end of the Cold War, DOE’s nuclear weapons laboratories have been moving away from secret research toward more open and cooperative research with a variety of nations and an increasing number of foreign nationals. Open collaboration can greatly benefit DOE and the United States by stimulating the exchange of ideas and promoting cooperation. This in turn can lead to more efficient research and increase the likelihood of important scientific discoveries.
While such cooperation is beneficial, it is important to note that foreign espionage efforts against DOE’s weapons laboratories may be more active than ever. Furthermore, these efforts may have expanded to include industrial espionage. All of this puts new burdens on DOE’s security. To respond to these challenges, DOE cannot rely entirely on systems left over from the Cold War. For a long time, DOE’s security controls have emphasized “guns, guards, and gates,” as well as strict control over anyone, including foreign visitors, allowed to enter the weapons laboratories. Where visitors went, whom they talked to, and what they saw were more carefully controlled than they are today. These controls, while still necessary in some places, cannot be expected to work in locations where openness, collaboration, and free access to information and ideas are encouraged. In these places, DOE needs a more sophisticated security strategy that is consistent with the laboratories’ more open missions and includes a greater role for DOE and laboratory counterintelligence programs. Now more than ever, effective counterintelligence efforts must be central to DOE’s security strategy. Greater counterintelligence program effectiveness can be achieved through the development of a comprehensive threat assessment to determine the nature, extent, and targets of foreign espionage efforts against DOE’s weapons laboratories. Such an assessment could also form the basis for developing counterintelligence program performance measures as well as periodic headquarters evaluations of each laboratory’s performance. These evaluations would determine how effectively each laboratory is addressing the established performance measures and how its counterintelligence program can be improved. In addition to establishing performance measures for DOE’s counterintelligence program, other parts of the overall strategy could be improved by clarifying what constitutes a sensitive subject, tightening procedures for background checks, and reassessing procedures for foreign visits to controlled areas. For example, clarifying what subjects are sensitive and requiring an independent review by technically qualified personnel of all subjects proposed for discussion during a visit would help ensure that researchers, program managers, and DOE headquarters officials have the same understanding of what needs to be protected, so that discussions of sensitive subjects would not occur without the knowledge of DOE. DOE and laboratory officials recognize the problems with identifying sensitive subjects and have established internal review processes to better focus on those foreign visits that involve sensitive subjects. However, without a clear understanding of what information DOE considers sensitive, these improved review processes cannot provide adequate assurance that foreign visits involving sensitive subjects are appropriately identified and reviewed. Increasing the number of background checks on foreign visitors from sensitive countries will enable DOE to better assess individual situations from a security point of view. When necessary, actions can then be taken to mitigate the risks of a particular visit. While background checks cannot identify all foreign visitors who pose a risk, they are a valuable tool for alerting DOE and the laboratories to situations that may warrant more attention and control.
DOE’s current foreign visitor order contains requirements that would increase the number of background checks obtained; enforcing those requirements at the laboratories, especially at Los Alamos and Sandia, should enable DOE to expand its advance knowledge of the risks associated with visits and, if necessary, mitigate those risks. Finally, a specific assessment of vulnerabilities related to access to unclassified but sensitive information in controlled areas is needed. This assessment will help ensure that procedures for these areas are consistent from laboratory to laboratory and that security vulnerabilities and/or problems are identified and corrected. In addition, this assessment could identify best practices that DOE could disseminate to all laboratories for use in improving the protection of sensitive information that may be exposed to foreign visitors. We recommend that the Secretary of Energy:
- Direct DOE’s Counterintelligence Division to perform a comprehensive assessment of the espionage threat against DOE and the weapons laboratories to serve as the basis for determining appropriate countermeasures and resource levels for laboratory counterintelligence programs. To the extent possible, this assessment should include the laboratories as well as other agencies with appropriate expertise, such as the FBI and CIA.
- Establish appropriate program performance measures and expectations for the laboratories’ counterintelligence activities and require periodic performance reviews to help determine if their activities are effectively preventing foreign espionage.
- Revise DOE’s foreign visitor order to (1) clarify to all DOE and laboratory contractor personnel the specific types of unclassified but sensitive subjects that require protection from compromise by foreign nationals and (2) require that the subjects of visits be independently reviewed by experts with appropriate technical backgrounds—such as laboratory individuals involved in export control issues—to verify that visits involving sensitive subjects are adequately identified for DOE’s review.
- Require that DOE and the weapons laboratories comply with the current foreign visitor order by obtaining background checks on all assignees from sensitive countries. Further, require the laboratories to inform headquarters of the names of all other proposed foreign visitors from sensitive countries so DOE’s Counterintelligence Division can obtain additional background checks at its discretion.
- Require that security measures at each laboratory’s controlled areas—those most accessible to foreign visitors—be assessed to ensure that the controls over persons and information in these areas are effective. This assessment should also identify the best practices at each laboratory to improve protection of sensitive information that may be exposed to foreign visitors.
DOE had no comments on the general nature of the facts in the report and concurred with our recommendations. The Department, however, believes that the report overstates the value of background checks on foreign visitors. DOE believes that foreign intelligence services increasingly rely on “non-official collectors”—who would have clear background checks—instead of intelligence officers. We do not believe we are overvaluing background checks. We recognize that these checks are but one factor DOE considers in approving foreign visits. Nevertheless, the information obtained through background checks can be of importance in determining if additional risk is associated with a foreign visitor.
Consequently, we are recommending that DOE complete background checks in accordance with its foreign visits and assignments order. DOE also suggested that we revise our recommendation on the assessments of information security in controlled areas. A key point in DOE’s suggested revision was to have the recommendation specify that an operations security assessment be done of each laboratory’s controlled areas, whereas we recommended only that an assessment be done. We did not revise our recommendation to specify this type of assessment because, while we believe that operations security principles and personnel must be part of any assessment of the laboratories’ controlled areas, other elements of DOE’s security programs can also provide value in an assessment. We do not want to be overly prescriptive about how and by whom these assessments should be done. DOE also suggested that the wording of the recommendation more clearly focus on protecting sensitive information. We revised the recommendation to clarify that the assessments should identify the best practices to improve the protection of sensitive information. Finally, DOE’s response detailed a number of actions it has taken or plans to take to address the recommendations. We did not address these actions as part of our work. The full text of DOE’s comments is included in appendix IV.

Pursuant to a congressional request, GAO provided information on the Department of Energy’s (DOE) controls over foreign visitors to its three nuclear weapons laboratories, focusing on DOE’s: (1) procedures for reviewing the backgrounds of foreign visitors and for controlling the dissemination of sensitive information to such visitors; (2) security controls for limiting foreign visitors’ access to areas and information within its laboratories; and (3) counterintelligence programs for mitigating the potential threat posed by foreign visitors.
GAO noted that: (1) DOE's procedures for obtaining background checks and controlling the dissemination of sensitive information are not fully effective; (2) DOE has procedures that require obtaining background checks, but these procedures are not being enforced; (3) at two of the laboratories, background checks are conducted on only about 5 percent of the foreign visitors from countries that DOE views as sensitive; (4) GAO's review of available data from DOE and the Federal Bureau of Investigation (FBI) showed that some of the individuals without background checks have suspected foreign intelligence connections; (5) furthermore, DOE's procedures lack clear criteria for identifying visits that involve sensitive subjects and process controls to help ensure that these visits are identified; (6) as a result, sensitive subjects may have been discussed with foreign nationals without DOE's knowledge and approval; (7) DOE's security controls, such as access restrictions, in the areas most visited by foreign nationals do not preclude their obtaining access to sensitive information, and problems with the control of this information--such as sensitive information being left in an open hallway accessible to foreign visitors--have occurred at the laboratories; (8) furthermore, DOE has not evaluated the effectiveness of the security controls over this information in those areas most frequented by foreign visitors; (9) the DOE headquarters and laboratory counterintelligence programs are key activities for identifying and mitigating foreign intelligence efforts, but these programs have lacked comprehensive threat assessments, which identify likely facilities, technologies, and programs targeted by foreign intelligence; (10) such assessments are needed as a critical component of a more sophisticated security strategy that is consistent with the laboratories' more open missions; (11) furthermore, DOE could use these assessments to develop the performance measures needed to guide the laboratories' counterintelligence programs and to gauge their effectiveness; and (12) currently, DOE has not developed such performance measures or evaluated the effectiveness of its counterintelligence programs. |
The Coast Guard is a multimission, maritime military service within DHS. The Coast Guard’s responsibilities fall into two general categories—those related to homeland security missions, such as port security and vessel escort, and those related to the Coast Guard’s traditional missions, such as search and rescue and polar ice operations. To carry out these responsibilities, the Coast Guard operates a number of vessels and aircraft, some of which it is currently modernizing or replacing through its Deepwater Program. Since 2001, we have reviewed the Deepwater Program and have reported to Congress, DHS, and the Coast Guard on the risks and uncertainties inherent in the acquisition. In our July 2009 report on the Coast Guard’s progress in fulfilling the role of systems integrator for the Deepwater Program, we found that the Coast Guard had increased its role in managing the requirements, determining how assets would be acquired, defining how assets would be employed, and exercising technical authority in asset design and configuration. In addition, we found that the Coast Guard was taking steps to improve its insight into individual assets by reviewing and revising cost, schedule, and performance baselines. The additional insight gained from reviewing several assets revealed that the program’s 2007 baselines for acquisition cost and delivery schedules had been exceeded. We concluded that while the steps the Coast Guard was taking were beneficial, continued oversight and improvement were necessary to further mitigate risks. We made several recommendations, which the Coast Guard has taken actions to address. For example, we recommended that the Coast Guard not exercise options under the Fast Response Cutter (Sentinel class) contract until the project was brought into full compliance with the Major Systems Acquisition Manual (MSAM) and DHS acquisition directives. Coast Guard program officials stated that the program was in compliance with these directives before the low-rate initial production option was exercised in December 2009. At the start of the Deepwater Program in the late 1990s, the Coast Guard chose to use a system-of-systems acquisition strategy. A system-of-systems is a set or arrangement of assets that results when independent assets are integrated into a larger system that delivers unique capabilities. The Coast Guard provided Integrated Coast Guard Systems (ICGS), the contractor then serving as systems integrator, with broad, overall performance specifications—such as the ability to interdict illegal immigrants—and ICGS determined the assets needed and their specifications. According to Coast Guard officials, the ICGS proposal was submitted and priced as a package; that is, the Coast Guard bought the entire solution and could not reject any individual component. In November 2006, the Coast Guard submitted a revised cost, schedule, and performance baseline for the overall Deepwater Program to DHS that reflected post-September 11 missions. That baseline established the total acquisition cost of the ICGS solution at $24.2 billion and projected that the acquisition would be completed in 2027. In May 2007, shortly after the Coast Guard had announced its intention to take over the role of systems integrator, DHS approved the baseline. DHS, too, has changed its approach to oversight and management of the Deepwater Program. In 2003, the department delegated to the Coast Guard the authority to approve acquisition decisions at key points in the life cycle of individual assets, while retaining some oversight at the system-of-systems level and requiring annual reviews.
In September 2008, in response to our recommendation, DHS rescinded that authority from the Coast Guard and began officially reviewing and approving acquisition decisions for Deepwater assets. In November 2008, DHS also instituted requirements for DHS components, including the Coast Guard, to submit new acquisition documentation at key program decision points. Figure 1 provides a time line of key events in the Deepwater Program. As we reported in July 2009, since assuming the role of systems integrator in April 2007, the Coast Guard has taken a number of key steps to reassert its control and management of the Deepwater Program. While decreasing the scope of work under the ICGS contract, which as noted above is scheduled to expire in January 2011, the Coast Guard has also reorganized its own acquisition directorate to better fulfill its expanded roles in acquiring and managing Deepwater assets. In addition, the Coast Guard formalized new relationships among its directorates to better establish and maintain technical standards for Deepwater assets related to design; construction; maintenance; command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR); and life-cycle staffing and training. The Coast Guard also began transitioning to an asset-based acquisition approach—as opposed to the former high-level, system-of-systems approach—guided by the formalized process outlined in its MSAM. As a part of its asset-based acquisition approach, the Coast Guard has also begun to develop better-informed cost, schedule, and performance baselines. While these new baselines provide increased insight into what the Coast Guard is buying, the anticipated cost, schedule, and performance of many of the assets have changed since the $24.2 billion system-level baseline was approved by DHS in 2007. Table 1 describes in more detail the assets the Coast Guard plans to procure or upgrade under the Deepwater Program. DHS has revised its approach to managing and overseeing Deepwater by conforming the program to its recently finalized acquisition directive, Acquisition Management Directive 102-01, which establishes a number of review points for the department’s acquisitions to provide senior acquisition officials insight into such key documents as baselines and test reports. DHS has increased the number of reviews of individual Deepwater assets and plans to review up to six assets in fiscal year 2010. For its part, the Coast Guard’s MSAM is generally aligned with DHS directives, although operational testing policies are still being revised, and the Coast Guard has developed additional guidance on completing key requirements documents. The Coast Guard is also decreasing its dependence on ICGS by planning for alternate vendors on some of the assets already in production, as well as awarding and managing work outside of the ICGS contract for those assets at earlier stages of the acquisition life cycle. Since our last report, DHS has finalized its Acquisition Management Directive 102-01, effective January 2010, which provides guidance on planning and executing acquisitions by linking DHS’s requirements, resourcing, and acquisition processes. The four phases of the DHS acquisition life-cycle process, each of which is authorized by an acquisition decision event, are as follows. The first phase identifies the specific functional capabilities needed for the asset and how these capabilities fill identified gaps.
The second phase explores alternative solutions to provide these capabilities and establishes cost, schedule, and performance baselines as well as operational requirements. By the end of this phase, a decision event is held that reviews the selection of the preferred alternative and approves program start. The third phase is focused on developing, testing, and evaluating the selected alternative and refining it prior to entering full production. This phase can contain multiple decision events, depending on the complexity of the program. DHS approval is sometimes required for supporting acquisitions and activities such as procuring demonstrator assets for test and evaluation, service contracts, and low-rate initial production. In order to proceed into the fourth phase, a final decision event is held to review the results of formal operational testing and determine if the asset meets requirements and is supportable and sustainable within cost baselines. This decision event authorizes full-rate production and transfers responsibility for deployment and support to the DHS component. Figure 2 depicts the DHS acquisition phases and decision events and where Deepwater assets currently fall within the process. Acquisition review boards are the principal mechanism DHS uses to oversee major acquisitions. These boards, which include DHS executives from the cost, management, and test and evaluation directorates, evaluate the progress of an asset at the acquisition events described above. The review boards make recommendations about asset acquisition decisions and, according to officials, can request the revision of key documents, like life-cycle cost estimates and test plans. For example, because of concerns about operational testing on the Maritime Patrol Aircraft, the DHS review board recommended that the aircraft’s “obtain” acquisition phase be extended, keeping the aircraft in low-rate, rather than full-rate, production. In another example, the DHS review board authorized low-rate initial production of three additional Fast Response Cutters (Sentinel class); however, it asked that the Coast Guard revise some documentation, such as the plans for logistics support and life-cycle cost estimates. According to Coast Guard program officials, this documentation has been submitted to DHS. DHS has increased the frequency with which it holds Deepwater acquisition decision events: it held no reviews in fiscal year 2008 and three in fiscal year 2009; thus far three have been held in fiscal year 2010 and an additional three are planned. Coast Guard program and project managers told us that the level of DHS scrutiny and questions has increased significantly, which has led to constructive discussions and improvements. However, Coast Guard and DHS approval of key documentation such as program baselines can take months. Table 2 provides approval times for the most recent Deepwater asset baselines. Coast Guard officials stated that DHS approval of these documents is an iterative process that can take some time, but that the two organizations coordinate informally to speed approvals when necessary. Coast Guard and DHS officials said they are working together to reduce the approval times for key program documents. For example, the Coast Guard now forwards draft versions of key acquisition documents, such as requirements documentation and cost estimates, to DHS at the same time that they are being reviewed within the Coast Guard. This approach gives DHS an earlier opportunity to review and comment.
To support the continued procurement of Deepwater assets, the Coast Guard’s MSAM is generally aligned with DHS’ Acquisition Management Directive 102-01. As a result of this and other changes, the MSAM now requires additional requirements documentation—referred to as the concept of operations and the preliminary operational requirements document—to ensure traceability through the design, development, and testing of an asset. In particular, the MSAM requires that the capabilities directorate, known as CG-7, describe clearly and in detail what specific functional capabilities a proposed asset or system will provide, the relationship of a proposed asset to existing assets or systems, and how the asset is expected to be used in actual operations. As we have previously reported, determining an asset’s requirements early in the life cycle is essential, as requirements ultimately drive the performance and capability of an asset and should be traceable through design, development, and testing to ensure that needs are met. Generation of Coast Guard requirements documentation is now guided by USCG Publication 7-7, Requirements Generation and Management Process, which was released by CG-7 in March 2009. The previous lack of overarching, formalized guidance had often resulted in requirements that were vague, not testable, not prioritized, and not supportable or defendable. The Coast Guard has also expanded the key stakeholders involved in the requirements process to include not only the operational users and the capabilities directorate, but also the acquisitions directorate, technical authorities, support and maintenance authorities, and budget officials. One area where the DHS guidance and the MSAM are still not fully aligned is the issue of the independent test authority, the entity responsible for concurring that an asset’s test and evaluation master plan ensures adequate demonstration of an asset’s ability to meet operational needs. Last year, we reported that the MSAM appeared to be inconsistent with DHS guidance regarding the role of this test authority. The DHS Acquisition Guidebook states that the test authority should be independent of both the acquirer and the user, while the MSAM allows the Coast Guard’s requirements directorate—CG-7, which represents the end user—to serve as the test authority. We recommended that the Coast Guard consult with the DHS Office of Test & Evaluation and Standards on this apparent conflict. Both DHS and the Coast Guard are in the process of revising their policies to address this issue. Coast Guard officials state that a new version of the MSAM will be released this summer, and that they are working with DHS to determine which entities may act as test authorities for specific assets. In May 2009, DHS released its test and evaluation directive, which states that the test authority may be organic to the component—the Coast Guard in this case—another government agency, or a contractor, but must be independent of the developer and the development contractor. In commenting on this directive, DHS officials stated that the test authority should be independent of the acquisition division but can be within another division of the component acquiring the asset, including those representing the asset’s end user. According to DHS officials, it is preferred that a test authority independent of both the acquirer and the user representative conduct operational testing for assets whose life-cycle costs meet or exceed $1 billion.
This independent test authority is already in place for some of the Deepwater assets, including the National Security Cutter (NSC), the Maritime Patrol Aircraft, and the Fast Response Cutter (Sentinel class). However, for assets below this threshold, operational testing may be planned and conducted by the user, subject to approval by the department. As the Coast Guard has assumed the Deepwater systems integrator role, the extent of its reliance on ICGS continues to decrease. ICGS remains the prime contractor for four Deepwater assets (the NSC, the HC-130J Long-Range Surveillance Aircraft, the Maritime Patrol Aircraft, and C4ISR), but some of these assets are transitioning away from ICGS. Contracts for other assets at earlier stages of the acquisition process, such as the Fast Response Cutter (Sentinel class), were awarded outside of the ICGS contract. The status of Deepwater assets with contracts in place for production as of July 2010 is as follows:
- While ICGS remains under contract for the production of the third NSC, the USCGC Stratton, the Coast Guard plans to contract directly with Northrop Grumman Ship Systems, previously a subcontractor for ICGS, on a sole-source basis to produce the remaining five cutters.
- Two additional Maritime Patrol Aircraft and eight removable electronic command and control mission system pallets also remain on contract with ICGS. The Coast Guard intends to hold a limited competition for the additional aircraft in order to retain the same airframe, issuing a request for proposals in April 2010 for up to nine aircraft over the next 5 years. According to Coast Guard officials, the procurement strategy for additional mission systems pallets is still in development.
- The Coast Guard is preparing to move the HC-130J into the sustainment phase as it nears the end of this acquisition, with ICGS’ delivery of the sixth and final aircraft on May 27, 2010.
- Development of C4ISR, a key Deepwater asset referred to as the “glue” intended to make all assets interoperable, is currently in transition from ICGS. Under the 2007 Deepwater baseline, the C4ISR project was to consist of four segments of capability, plus upgrades to Coast Guard shore facilities and legacy cutters. According to program officials, C4ISR will now comprise eight segments, including the capabilities planned for Deepwater and additional capabilities for post-9/11 homeland security missions. ICGS has delivered the first segment, which is currently in operation on the NSC, Maritime Patrol Aircraft, and HC-130J, and is under contract to develop the second segment. This second segment is primarily focused on increasing the Coast Guard’s ability to develop and maintain future capabilities. It is considered a bridge to begin the transition from the ICGS-developed architecture to a Coast Guard-developed and managed architecture by ensuring that the ICGS systems are operational and supported while the Coast Guard puts in place its own capability to support the systems. Program officials state that development of the third segment has been delayed due to funding constraints, although development of capabilities for key assets, such as the Offshore Patrol Cutter, will continue. According to officials, the acquisition strategy for future C4ISR segments has not been determined.
- The Coast Guard structured the acquisition of the Fast Response Cutter (Sentinel class) as the systems integrator, competitively awarding a design and production contract for the lead cutter to Bollinger Shipyards in September 2008.
The Coast Guard has exercised contract options for hulls 2 through 4, with the goal of having up to 15 cutters either delivered or under contract by 2012. Currently, the Deepwater Program as a whole exceeds the cost and schedule baselines approved by DHS in May 2007, and it is unlikely to meet the system-level performance baselines that were approved at that time. The new asset-specific baselines that have been developed—and approved by DHS for seven of nine assets—put the total cost of Deepwater at roughly $28 billion, or $3.8 billion over the $24.2 billion baseline. The revised baselines also present life-cycle costs, which encompass the acquisition cost as well as costs for operations and maintenance throughout the assets’ life cycle. While the revised baselines show a significant decrease in life-cycle costs compared to the 2007 baseline, the Coast Guard’s understanding of these costs continues to evolve as the agency revisits its assumptions and produces new cost estimates. These baselines also indicate that some schedules are expected to be delayed by several years. Preliminary assessments by the Coast Guard indicate that some assets may be at risk for further cost and schedule growth. Further, as the Coast Guard develops more refined requirements, it has redefined or eliminated key performance indicators for many individual assets, while significant uncertainties surround other assets like C4ISR, the key to the system-of-systems as initially envisioned and approved. As a result of the way Deepwater was implemented in the past, some assets—including the NSC, Maritime Patrol Aircraft, and HC-130J—have begun deployment and operations, but their ability to fully satisfy operational requirements is unproven as they have not yet undergone operational evaluations. Further, because the Coast Guard has not determined the overall quantities and mix of assets needed for Deepwater in light of changes to the 2007 baseline, it is unknown what the overall Deepwater Program should look like going forward. In the meantime, the Coast Guard and DHS are proceeding with acquisition decision events on individual assets. As of July 2010, DHS had approved seven of the revised baselines and the Coast Guard had approved two of them based on a delegation of approval authority from DHS. Regarding total acquisition cost, the Coast Guard has determined that some of the assets will significantly exceed the costs anticipated in the 2007 Deepwater baseline. Due to this growth, the total cost of the Deepwater Program is now expected to be roughly $28 billion, or $3.8 billion more than the $24.2 billion that DHS approved in 2007, an increase of approximately 16 percent. For the assets with revised baselines, this represents cost growth of approximately 35 percent. Further growth could occur, as four Deepwater assets currently lack revised cost baselines. Among them is the largest cost driver in the program, the 25 cutters of the Offshore Patrol Cutter class, which, in the 2007 baseline, accounted for over 33 percent of the $24.2 billion total acquisition cost. Table 3 compares the 2007 and revised baselines of asset acquisition costs available as of July 2010. The table does not reflect the roughly $3.6 billion in other Deepwater costs, such as program management, that the Coast Guard states do not require a new baseline. These revised baselines reflect the Coast Guard’s and DHS’ improved understanding of the acquisition costs of individual Deepwater assets, as well as insight into the drivers of the cost growth.
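The approximately 16 percent program-level growth figure cited above follows directly from the two baseline totals; as a simple arithmetic check (using only the dollar amounts stated in this report, not a separate estimate):

\[
\frac{\$28.0\ \text{billion} - \$24.2\ \text{billion}}{\$24.2\ \text{billion}} = \frac{\$3.8\ \text{billion}}{\$24.2\ \text{billion}} \approx 0.157,\ \text{or roughly 16 percent}
\]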
We reported last year on some of the factors contributing to increased costs for the NSC and Maritime Patrol Aircraft. More recently, DHS approved the revised baseline for the Fast Response Cutter (Sentinel class) in August 2009. The Coast Guard has attributed this asset's more than $1 billion rise in cost to the use of actual contract costs from the September 2008 contract award and costs for shore facilities and initial spare parts not included in the original baseline. As the Coast Guard has revised asset baselines for acquisition costs, it has also reevaluated operating costs and their effect on life-cycle costs. According to the 2007 Deepwater baseline, the program's life-cycle cost was to be approximately $304.4 billion. The life-cycle costs presented in the revised asset baselines decreased by approximately $96 billion, as shown in table 4. This substantial reduction in life-cycle costs is due in part to new assumptions applied by the Coast Guard in calculating the costs to support and maintain its assets. In preparing the revised baselines, the Coast Guard updated its assumptions by reducing the time it expects certain assets to continue in operations. Any reduction of the years in service for an asset reduces the total life-cycle cost, as the overall cost for operating the asset would decrease. For example, the useful life of the HH-65 was reduced from 40 years to 23 years of extended service, contributing to a $47 billion reduction in life-cycle costs in the revised baseline. According to the Coast Guard, a 40-year extended service life for the HH-65 was not realistic, as the first of these assets became operational in 1984 and upgrades to extend the service life will not enable the helicopters to operate for an additional 40 years. The service life expected of the HH-60 was also reduced, from 30 years of additional service to 20, which contributed to its $25.2 billion decrease in life-cycle costs. Assumptions for the expected service life of the Fast Response Cutter (Sentinel class) also changed as a result of selecting an alternate design for production. The current Sentinel class design is expected to have a service life of 20 years, less than ICGS' proposed Fast Response Cutter-A—which had an estimated service life of 35 years—but more than its proposed Fast Response Cutter-B, which had a proposed 15-year service life. While altering these assumptions does reduce the expected life-cycle costs associated with the current Deepwater Program, it also indicates that the Coast Guard may need to acquire new assets sooner than anticipated in the 2007 baseline. The Coast Guard also used different assumptions about what support costs were included in its revised baselines. For example, the life-cycle costs in the revised baselines for the HH-65, HH-60, and the HC-130J reflect only the costs to support the upgraded mission systems and not the costs of the entire aircraft and therefore appear to be understated. As a result, the stated life-cycle costs for these assets significantly decreased; for example, in the case of the HC-130J, costs decreased from $6.6 billion to $430 million. However, the Coast Guard's understanding of life-cycle costs continues to evolve. DHS approved all the revised Deepwater asset baselines on the condition that the Coast Guard resubmit life-cycle cost estimates. According to Coast Guard officials, DHS also requested that new estimates for the HC-130J, HH-60, and HH-65 reflect the cost to support the entire aircraft.
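The sensitivity of life-cycle costs to service-life assumptions can be illustrated with a simple model in which operating cost accrues for each year an asset remains in service. The figures below are hypothetical, chosen only to show the mechanics; they are not the Coast Guard's actual cost inputs:

    # Illustrative only: life-cycle cost modeled as acquisition cost plus
    # annual operations and maintenance (O&M) accrued over the assumed
    # service life. All inputs are hypothetical, not Coast Guard estimates.
    def life_cycle_cost(acquisition, annual_om, service_years):
        return acquisition + annual_om * service_years

    old = life_cycle_cost(acquisition=1.0, annual_om=2.0, service_years=40)  # $ billions
    new = life_cycle_cost(acquisition=1.0, annual_om=2.0, service_years=23)
    print(old - new)  # 34.0: shortening the assumed service life cuts life-cycle cost

As this sketch shows, large swings such as the HH-65's $47 billion reduction can arise from the service-life assumption alone, without any change in what the asset actually costs to operate in a given year.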
As of July 2010, the Coast Guard has submitted life-cycle cost estimates for eight assets: NSC, Fast Response Cutter (Sentinel class), Maritime Patrol Aircraft, HC-130J, C4ISR, HH-65, and the two mission effectiveness programs. These estimates suggest that some assets may meet the revised cost baselines while others are in danger of exceeding them. Table 5 compares the revised baselines to the Coast Guard's current life-cycle cost estimates. As shown in the table above, expected life-cycle costs for some assets, such as the NSC and the Fast Response Cutter (Sentinel class), continue to decrease as more information about the actual costs to operate and acquire these assets is used to refine estimates. The expected life-cycle costs of other assets, however, have increased beyond their current baselines. Coast Guard officials told us they have worked to make their life-cycle cost estimates consistent, in keeping with DHS guidance, and plan to update them every 12 to 18 months. A discussion of the estimates for the NSC, Fast Response Cutter (Sentinel class), Maritime Patrol Aircraft, C4ISR, and the HH-65 follows. The current estimate for the NSC is $7.4 billion below the revised baseline for life-cycle costs even when additional costs are added to the estimate to account for identified risks. These risks include unstable C4ISR requirements, which could result in modifications to the ship, and the Coast Guard's change in contract type for construction of the last five NSCs from cost-reimbursement to fixed-price incentive fee. Generally, cost-reimbursement contracts are suitable only when uncertainties involved in contract performance do not permit costs to be estimated with sufficient accuracy to use a fixed-price contract—such as the lack of cost experience in performing the work or unstable manufacturing techniques or specifications. Under cost-reimbursement contracts, most of the cost risk is placed on the government, while under fixed-price incentive fee contracts an increased share of cost performance risk is borne by the builder. Because of this additional risk, the cost estimate assumed that the contract price would increase. The current life-cycle cost estimate for the Fast Response Cutter (Sentinel class) is also below its revised life-cycle cost baseline, by $2.5 billion, even after additional costs were added to account for risks. The most significant risk is attributable to the Coast Guard's acquisition approach for this asset. The government plans to procure a total of 58 cutters. Under the contract for design and production of the first patrol boat, the government plans to procure 24 to 34 boats, with the remaining portion to be competitively procured, potentially resulting in a change of contractor. This competition would be for construction of the remaining boats using the same design. The Coast Guard adopted this acquisition strategy as a means of reducing overall risk under the contract. The current cost estimate states that there could be an increase in cost if a new contractor were brought on board, potentially modifying the design to fit its construction processes in addition to establishing the production line and learning how to more efficiently produce the boats. The cost estimate also presents risks in the estimates of operating costs.
As the Sentinel class has never been used operationally, these costs were determined by using historical data on similar ships and discussions with the intended Coast Guard user, meaning true costs are unknown and could be higher or lower than the current estimates. Uncertainty about future fuel costs also drives risk. The $12.2 billion increase between the current life-cycle cost estimate and the revised baseline for the Maritime Patrol Aircraft is primarily attributable to a difference in assumptions about crew sizes and cost per flight hour, which affect the cost to operate the aircraft. Further, additional costs for training devices are now included in the estimate. The primary risks discussed in the estimate, which have also added costs, are the Euro/dollar exchange rate and the cost to maintain the aircraft over time. Because a portion of the aircraft the Coast Guard currently has under contract is produced in Europe, any fluctuation in the strength of the dollar could have an effect, positive or negative, on the aircraft's cost. The estimate also states that long-term maintenance of the mission systems pallet could be problematic if parts become obsolete, a risk also identified for other systems dependent on C4ISR. The current life-cycle cost estimate for C4ISR places the cost at $6.7 billion, well above the $1.3 billion baseline established in 2007. This estimate presents, for the first time, a full life-cycle cost for this capability, as the 2007 baseline presented only acquisition costs for C4ISR and assumed that operations and maintenance costs were included in the baselines for individual assets. This increase is attributed to the changing nature of the program and the risks involved. When the Coast Guard made the decision to become systems integrator, it also assumed greater oversight of the software development and maintenance associated with C4ISR. The Coast Guard intends to establish laboratories to develop, integrate, and support this software, which accounts for a portion of the cost increase. According to program officials, costs have also increased due to maintenance needs, especially the need for upgrades to keep software and information secure. The risks are driven primarily by technical uncertainty due to undefined requirements in later segments and the effect of technology changes on C4ISR capabilities in the future. As the Coast Guard has not yet fully defined the capabilities it wants from C4ISR, it is difficult to assess the associated costs. The interrelated nature of segments, with each segment building upon and enhancing the capabilities of prior segments, could lead to cascading effects on cost and schedule if one is delayed. To account for these uncertainties, the Coast Guard built additional costs into the estimate. The current life-cycle cost estimate for the HH-65 Multi-mission Cutter Helicopter is $8.2 billion—$1.9 billion above the cost stated in the revised baseline. The majority of the increase is due to a change in the assumptions about the costs to operate and maintain the asset over its life cycle. As mentioned previously, the revised baseline included only the costs to support the upgraded mission systems aboard the HH-65. The current cost estimate includes support for the entire aircraft and raises the cost of operations and maintenance from $5.164 billion to $7.033 billion.
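The dollar deltas just cited are internally consistent and can be confirmed with simple subtraction. A quick check, in Python, using the rounded amounts given above:

    # Checking the cost deltas cited above ($ billions, rounded).
    c4isr_estimate, c4isr_baseline = 6.7, 1.3
    print(round(c4isr_estimate - c4isr_baseline, 3))   # 5.4 above the 2007 baseline

    hh65_om_new, hh65_om_old = 7.033, 5.164
    print(round(hh65_om_new - hh65_om_old, 3))         # 1.869, i.e., most of the $1.9B rise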
The current cost estimate also takes into account risks the aircraft may encounter in the further development of its upgraded mission systems and risks that could increase operational costs. The risks discussed in the estimate include the possibility of a structural redesign or installation issues associated with a new sub-system that improves the helicopter's ability to land on the NSC, the possibility of software or labor cost growth for other upgrades, and the uncertainty surrounding the future price of fuel. To account for these uncertainties, the Coast Guard built additional costs into the estimate. The Coast Guard's reevaluation of asset baselines has also improved insight into the schedules for when assets are expected to begin operations—also known as initial operational capability—and when all assets have been delivered and are ready for operations—or full operational capability. For example, the Fast Response Cutter (Sentinel class) patrol boat is now scheduled to deliver the final asset by September 2021, rather than 2016 as stated in the 2007 baseline—a delay of 5 years. The HH-60 Medium Range Recovery helicopter will also not complete deliveries until later than planned due to a restructuring of scheduled upgrades. This asset will now complete upgrades by 2020, a 1-year delay from the previous baseline. The schedule to upgrade the capabilities of the HH-65 Multi-mission helicopter has also been restructured, but a date for completing all the necessary upgrades has not yet been determined. Table 6 provides more information on changes in asset schedules. In addition to establishing cost and schedule baselines, the 2007 Deepwater acquisition program baseline also established a baseline for system-of-systems level performance and the key performance parameters at the asset level that contribute to this performance. This system-level baseline remains important, as the Coast Guard continues to pursue system-of-systems level effects even as it devolves its approach to Deepwater management to an asset level. According to the Coast Guard's 2005 mission needs statement, the intent of the Deepwater Program was to improve the capability to detect, intercept, and interdict potential threats in the maritime domain using a layered defense of major cutters, patrol boats, helicopters, unmanned aerial vehicles, and maritime patrol aircraft, all connected using a single command and control architecture. This description is still valid, given that the Coast Guard is still pursuing the same types of assets and capabilities proposed by ICGS. The 2007 baseline describes thresholds and objectives for three system-level performance requirements. Available mission hours: Establishes the number of hours surface and aviation assets must perform on an annual basis to meet mission needs. Surveillance of nautical square miles: Establishes the number of nautical square miles in which the fully deployed Deepwater Program is capable of searching for, identifying, and prosecuting targets of interest per day. System task sequence: Describes system-level effects specific to an NSC acting in concert with its embarked HH-65 helicopter and unmanned aerial vehicles. The specific capabilities to be achieved under these overarching performance requirements are listed in table 7. The revised asset baselines do not take into account, however, how these changes affect system-of-systems level requirements, although officials state that those requirements are being revalidated.
Some assets or capabilities key to the performance of the Deepwater Program as a whole—including the 25 ships of the Offshore Patrol Cutter class, the capabilities provided by the integrated C4ISR system, and the cutter-based Unmanned Aerial Vehicle essential to extending major cutter surveillance times and ranges—remain in development. The capabilities provided by C4ISR are particularly important to achieving the performance required for Deepwater. These systems are at the core of every Coast Guard activity and provide the essential situational awareness, data processing, interoperability, and records accountability and transparency necessary to successfully execute the Coast Guard's many missions. If the designs of these assets, and therefore the performance criteria they are able to meet, were to be significantly different from those proposed under the ICGS baseline, the system's ability to achieve the higher-level performance requirements set forth in the 2007 system-level baseline would be doubtful. To determine whether Deepwater assets can meet their revised performance baselines, the Coast Guard has performed operational and capability assessments, through formalized test procedures or through limited operations, on a number of assets. Three of the Deepwater assets—the NSC, Maritime Patrol Aircraft, and HC-130J—have begun limited operations, although they have not undergone formal testing to determine whether capabilities meet requirements. The Fast Response Cutter (Sentinel class) has undergone an early operational assessment to determine whether its capabilities meet requirements, and the Coast Guard plans to conduct an operational evaluation of the asset in 2011. Additional information on the status of operational testing for these assets follows. The first NSC completed an assessment of its operational capabilities in 2007, before final delivery to the Coast Guard, and has since been performing limited operations from its homeport in Alameda, California. While it has completed some missions successfully, shortfalls in the expected overall capabilities have been noted. Specifically, the lack of unmanned air vehicles limits the full capability of the cutter to conduct surveillance as reflected in the 2007 performance baseline. The Coast Guard is also continuing to address design problems with the NSC's small boat launch and recovery systems. The operational evaluation for the NSC is currently scheduled to begin in 2011; however, there are some aspects of the cutter's performance that will not be demonstrated at that time. Coast Guard officials stated that the NSC will not demonstrate the ability to operate for 230 days away from port. This demonstration requires the use of four sets of crews to operate three cutters at different times in order to maintain operations without exceeding regulations governing how long crews can remain at sea. This multicrewing concept could have an effect on the maintenance needs of these vessels or on personnel deployment times. The Coast Guard states that it will not fully demonstrate this multicrewing capability until 2014 or 2015, when three cutters are available for operations. In addition, the operational evaluation will not demonstrate the ability of an unmanned aerial system to operate as intended from the NSC, as the Coast Guard has not selected an appropriate unmanned system and has not indicated when it plans to do so.
According to officials, some demonstrations of the ability of an unmanned system to take off and land on the cutter may take place, but operational missions with an unmanned aerial system will not be performed. The Maritime Patrol Aircraft underwent an operational assessment in 2009 using aircraft previously delivered to the Coast Guard. This asset, too, has been used in limited operations before completing operational evaluation. Program officials stated that while the aircraft itself is performing well in those limited operations, the mission systems pallet—which contributes significantly to operational capabilities—has previously experienced reliability and maintenance challenges. The Coast Guard is working to address these challenges by updating the software and hardware. Currently, the Maritime Patrol Aircraft is expected to provide 1,200 hours of operational performance per year. Coast Guard officials stated that the ability of the aircraft to achieve this will be demonstrated in fiscal year 2011 during the aircraft's operational evaluation. The HC-130J did not undergo any operational testing or assessments conducted by an independent operational test authority, and none are planned. The current approved operational requirements document, which establishes the performance baseline for the aircraft and should be reflected in the key performance criteria to which the asset is tested, was signed in 2003 and does not necessarily reflect the current capabilities or established baseline for the aircraft. According to officials, the Coast Guard and DHS have developed a report that defines the aircraft's performance by describing the demonstrations that have already been conducted to quantify the characteristics of the aircraft and mission systems—such as the performance capabilities of the radar. This report, however, is not akin to a test plan that demonstrates the aircraft is able to meet operational needs. Determining the capabilities in this manner makes it difficult to assess whether the aircraft meets asset-level or system-level capabilities. However, DHS and the Coast Guard have agreed that no further testing or documentation is necessary, as production for the aircraft is complete. The Fast Response Cutter (Sentinel class) is one of the few Deepwater assets to undergo an early operational assessment, conducted by an independent test authority—the Navy's Commander Operational Test and Evaluation Force—prior to the project's critical design review, which allowed for early detection and rectification of issues. According to Coast Guard program and Navy test officials, all but five minor items recommended for correction as a result of this assessment were addressed prior to the design review. However, program and test officials stated that the cutter will not undergo an additional assessment until operational testing is completed, by which time as many as 15 of the expected 58 vessels may be under contract. If significant issues are found in testing, these vessels may have to undergo costly modifications. The Coast Guard acknowledges the risks inherent in this approach and states that it is reducing risk by conducting testing of the patrol boat's design and subsystems and closely monitoring the contractor's performance during production.
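The NSC multicrewing concept described earlier lends itself to a simple duty-cycle calculation. A sketch, assuming the 230-day target applies to each of the three cutters and that the four crews rotate evenly (an assumption for illustration; the Coast Guard's actual rotation scheme may differ):

    # Rough duty-cycle arithmetic for the NSC crew rotation concept.
    # Assumes even rotation across crews; illustrative only.
    cutters, crews, days_away = 3, 4, 230
    cutter_days = cutters * days_away   # 690 underway days to cover per year
    per_crew = cutter_days / crews      # 172.5 days at sea per crew per year
    print(per_crew)

On these assumptions, spreading 690 underway days across four crews keeps each crew at sea roughly 172 days a year, which is how the concept could sustain 230-day operations per hull without exceeding limits on crew time at sea.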
While the Coast Guard has made progress in revising baselines for the cost, schedule, and capabilities of individual assets, it has not yet revalidated the quantities of those assets needed to meet operational needs—as it stated that it would in assuming the role of systems integrator. Determining the force structure and size of the Deepwater Program, specifically the number and type of assets needed to meet mission demands, is key to managing the acquisition and will have an impact on the final cost and performance of the program. The Coast Guard planned to complete a comprehensive fleet mix analysis in July 2009 to eliminate uncertainty surrounding future mission performance and to produce a baseline for the acquisition. The analysis, which began in October 2008—and is now termed the fleet mix analysis Phase I—was led by the capabilities directorate and included a review of all Deepwater missions and assets. Assumptions on asset capabilities were based on the capabilities of the current fleet as well as the capabilities that are projected for the Deepwater assets. Coast Guard officials stated that, with a few exceptions, Deepwater assets retained the capabilities determined by ICGS. For example, the Offshore Patrol Cutter was assumed to operate away from port for 230 days out of the year as envisioned by ICGS, but the Maritime Patrol Aircraft was assumed to operate for 800 instead of 1,200 flight hours per year. For those assets that have evolved significantly since 2007, the analysis made "best guess" assumptions that utilized the capabilities currently being pursued by the Coast Guard. While the 2007 Deepwater baseline was considered the "floor" for asset capabilities and quantities, officials stated that the analysis did not impose financial constraints on the outcome and that, therefore, the result was not feasible in terms of what the Coast Guard could afford. As a result, officials stated that they do not intend to use the results to produce recommendations on a baseline for fleet mix decisions, as originally intended. The results of the analysis have not been released. As a result of discussions with DHS, the Coast Guard intends to conduct a second, cost-constrained effort, fleet mix analysis Phase II, limited to surface assets. This analysis is being conducted to further validate mission needs, roles, and responsibilities and will produce recommendations on the numbers and types of surface assets the Coast Guard should procure. It is intended to be complete in February 2011. In the meantime, the Coast Guard continues to pursue quantities of planned procurements that, to a large extent, reflect the 2007 baseline. The Coast Guard also completed a study in August 2008 on the appropriate number and type of HC-130 aircraft to procure to meet needs, but no decision has been made yet. The Coast Guard currently operates two models of the HC-130 aircraft: the HC-130H, which entered operations in the 1970s, and the HC-130J, which entered operations in the last few years. Both models were upgraded as part of the Deepwater Program but, given the advanced age and deteriorating state of many of the older HC-130H aircraft, the Coast Guard decided to revalidate how many of each aircraft should be upgraded and maintained. The study concluded that while the HC-130J offered more capability than the HC-130H, and a longer expected life cycle, budgetary concerns prevent retiring all the older aircraft in favor of HC-130Js.
Instead, a hybrid plan was proposed to maintain 11, instead of the currently planned 16, HC-130Hs and to increase the number of HC-130Js from the currently planned 6 to 11. However, the Coast Guard has not yet taken the additional actions needed to purchase additional HC-130Js. Officials stated that any additional acquisitions would necessitate a revalidation of HC-130J requirements and resubmission of much of the asset's documentation, including baselines and test plans. The Coast Guard sought a systems integrator at the outset of the Deepwater Program in part because its workforce lacked the experience and depth to manage the acquisition internally. As the Coast Guard assumes the role of systems integrator, it is important that it understand its needs and build an acquisition workforce to manage the Deepwater Program. One key method the Coast Guard uses is a workforce planning model, modified from a model developed by the Air Force, to improve its estimates of workforce needs. According to Coast Guard officials, input from project managers is used in the model to estimate current and future needs for key personnel such as project managers, contracting officials, and business and financial managers. Officials stated that the output of the model is then discussed in a forum of all the project managers, and requests for additional personnel are then developed and forwarded for inclusion in the budget. Since our last report, the Coast Guard has begun to implement initiatives aimed at further reducing its acquisition workforce gap. One such initiative is the acquisition professional career program, a 3-year internship program that targets engineering and business students for development as civilian acquisition personnel. As of July 2010, the Coast Guard had approximately 20 interns supporting contracting and other program management areas. The career entry opportunity program is another initiative meant to attract qualified employees to the Coast Guard while also promoting career growth for current Coast Guard employees. Participants in the program receive on-the-job training for 2 to 3 years in a variety of positions within the acquisition directorate and, upon completing the program, are permanently placed in positions in the Coast Guard's acquisition community. Officials said they are also attempting to obtain direct hire authority to streamline the hiring process and avoid delays in placing new hires. Along with enhancing its recruiting and improving its hiring process for civilian personnel, officials discussed how they are attempting to make employment in the acquisition area more appealing for military personnel by developing an acquisition career path that offers opportunities for advancement similar to other uniformed career paths within the Coast Guard. The Coast Guard has had some success in narrowing the acquisition workforce gap we have reported on in the past. Officials stated that by the end of fiscal year 2009, 11 percent of the Coast Guard's civilian acquisition workforce positions remained unfilled, down from the 16 percent that the Coast Guard reported for April 2009. In its fiscal year 2010 budget, however, the acquisition directorate received an additional 100 government positions that must be filled.
Officials stated that 25 percent of these new positions were going to be allocated to the Offshore Patrol Cutter program, due to the need for more staff as the program prepares to award a design and construction contract, and 40 percent were going to different sponsors and technical authorities that support the acquisition directorate. This increase in the number of positions has had an effect on the Coast Guard's current vacancy rate. As of April 2010, the Coast Guard had a total of 951 government acquisition workforce positions, consisting of 556 civilian positions and 395 military positions. Of these 951 positions, 190 were vacant as of April 2010, leaving a workforce gap of approximately 20 percent. Although workforce gaps remain, the Coast Guard has increased the number of certifications for the acquisition officials it has in place for areas such as program management, business management, and systems engineering. These officials are required to complete specialized training in their respective acquisition career fields in order to manage or execute acquisition contracts at various dollar thresholds. Since April 2009, the Coast Guard reports that it has increased the total number of certified acquisition officials in a number of these fields from 593 to 862, an approximately 45 percent increase. The number of certified program managers alone rose from 357 in April 2009 to 601 in June 2010, for an increase of about 68 percent. Although the Coast Guard is attempting to close its acquisition workforce gaps, it faces challenges, like many federal agencies that acquire major systems, in recruiting and retaining a sufficient number of government employees in acquisition positions such as contract specialists, cost estimators, system engineers, and program management support. When these gaps cannot be filled, contractors are often used to support the work performed by government staff. For example, the Coast Guard has used support contractors to perform life-cycle cost estimates and to assist in the drafting of program documentation. As shown in figure 3, support contractors made up 24 percent of the acquisition workforce as of April 2010. The Coast Guard acknowledges that the use of support contractors puts it at risk for potential conflicts of interest and the possibility of these contractors functioning in roles that closely support inherently governmental functions. To address conflicts of interest, all solicitations and contracts include appropriate clauses where a potential for conflict may exist, according to Coast Guard officials, and staff are trained on how to identify and manage conflicts of interest. Further, the Coast Guard has made efforts to ensure that support contractors do not perform inherently governmental work. These efforts include releasing guidance to define inherently governmental roles and the roles of government staff in overseeing contractors and ensuring appropriate oversight and approval of work performed. In creating new baselines for individual asset cost, schedule, and performance, the Coast Guard has deepened its understanding of the resources needed and capabilities required on an asset level in a manner that improves oversight and management of the Deepwater Program. As it does so, it is also becoming increasingly clear that the baselines for cost, schedule, and performance established in 2007 cannot be achieved.
Because the Coast Guard has not revalidated its system-level requirements, it lacks the analytical framework needed to inform Coast Guard and DHS decisions about asset trade-offs in the future. In the absence of recommendations from the fleet mix analysis, it remains unclear how many assets are required to meet the Coast Guard's needs or what trade-offs in capabilities or mission goals are required to control costs in a fiscally constrained environment. To capitalize on the increase in knowledge gained by creating new baselines for Deepwater assets, and to better manage acquisitions of further assets and capabilities, we recommend that the Commandant of the Coast Guard complete, and present to Congress, a comprehensive review of the Deepwater Program that clarifies the overall cost, schedule, quantities, and mix of assets that are needed to meet mission needs and what trade-offs need to be made considering fiscal constraints, given that the currently approved Deepwater baseline is no longer feasible. We provided a draft of this report to the Coast Guard and DHS. DHS provided comments via e-mail stating that it concurred with the recommendation. The Coast Guard provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Homeland Security, and the Commandant of the Coast Guard. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix II. In conducting this review, we relied in part on the information and analysis in our past work, including reports completed in 2008 and 2009. Additional scope and methodology information on each objective of this report follows. To assess changes to the Department of Homeland Security (DHS) and Coast Guard acquisition policies, processes, and approach related to Deepwater since our July 2009 report, we reviewed DHS' Acquisition Directive 102-01, Acquisition Guidebook 102-01-001, Directive 026-06 on test and evaluation, as well as acquisition decision and other memoranda. We also reviewed the Coast Guard's Major Systems Acquisition Manual (MSAM), Requirements Generation and Management Process, and other policy documents. We interviewed senior acquisition directorate officials, representatives of the Coast Guard's capabilities directorate, and representatives of the Coast Guard's technical and support authorities. We also interviewed program and project managers to discuss the effect of the policies and processes on Deepwater assets and spoke with DHS officials about the department's major acquisition review process and reporting requirements. To determine the contractual status of Deepwater assets we reviewed Coast Guard contracts and acquisition strategies and spoke with contracting and acquisition officials. In addition, we met with contractor and Coast Guard officials at Northrop Grumman facilities in Pascagoula, Mississippi, and with Bollinger officials in Lockport, Louisiana.
We also met with Coast Guard officials at the Aviation Logistics Center in Elizabeth City, North Carolina; Surface Fleet Logistics Center in Curtis Bay, Maryland; Lockheed Martin facilities in Moorestown, New Jersey; and the Command and Control Engineering Center in Portsmouth, Virginia, to discuss their role in upgrading and maintaining Deepwater assets. To assess whether the Deepwater Program is meeting baselines for cost, schedule, and performance, we reviewed the Deepwater Program's 2007 baseline and compared it to the revised baselines for individual assets that have been approved to date. We also interviewed senior acquisition directorate officials and program and project managers to discuss how the Coast Guard is developing new acquisition program baselines for individual assets and how the process used differs from that in the 2007 baseline, such as the basis for cost estimates. In addition, we reviewed the life-cycle cost estimates for selected assets. We also reviewed operational requirements documents for selected assets in various stages of the development and production processes to understand the major drivers of cost growth, schedule delays, and capability changes. We interviewed acquisition directorate officials and program and project managers to discuss options for controlling cost growth by making trade-offs in asset quantities and/or capabilities, as well as some of the potential implications of unplanned schedule delays. We also interviewed Coast Guard officials and analyzed documentation for the fleet mix analysis and follow-on studies being conducted by the capabilities directorate. In addition, we met with Navy and Coast Guard officials at the U.S. Navy's Commander Operational Test and Evaluation Force in Norfolk, Virginia, to discuss their role in conducting operational testing. To assess the Coast Guard's efforts to manage and build its acquisition workforce, we reviewed Coast Guard information on government, contractor, and vacant positions. We supplemented this analysis with interviews of acquisition directorate officials, including contracting and Office of Acquisition Workforce Management officials, and program and project managers, to discuss current vacancy rates and the Coast Guard's plans to increase the size of the acquisition workforce. We also reviewed documentation and interviewed senior acquisition directorate officials about the use of support contractors and oversight to prevent contractors from performing inherently governmental functions. We reviewed documentation such as the updated Acquisition Human Capital Strategic Plan and discussed workforce initiatives, challenges, and obstacles to building an acquisition workforce, including recruitment and difficulty in filling key positions. We conducted this performance audit between October 2009 and July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For further information about this report, please contact John P. Hutton, Director, Acquisition and Sourcing Management, at (202) 512-4841 or [email protected]. Other individuals making key contributions to this report include Michele Mackin, Assistant Director; J.
Kristopher Keener; Matthew Alemu; Kelly Bradley; and Kristine Hassinger. Coast Guard: Observations on the Requested Fiscal Year 2011 Budget, Past Performance, and Current Challenges. GAO-10-411T. Washington, D.C.: February 25, 2010. Coast Guard: Better Logistics Planning Needed to Aid Operational Decisions Related to the Deployment of the National Security Cutter and Its Support Assets. GAO-09-497. Washington, D.C.: July 17, 2009. Coast Guard: As Deepwater Systems Integrator, Coast Guard Is Reassessing Costs and Capabilities but Lags in Applying Its Disciplined Acquisition Approach. GAO-09-682. Washington, D.C.: July 14, 2009. Coast Guard: Observations on Changes to Management and Oversight of the Deepwater Program. GAO-09-462T. Washington, D.C.: March 24, 2009. Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain. GAO-08-745. Washington, D.C.: June 24, 2008. Status of Selected Assets of the Coast Guard's Deepwater Program. GAO-08-270R. Washington, D.C.: March 11, 2008. Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges. GAO-07-575T. Washington, D.C.: March 8, 2007. Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764. Washington, D.C.: June 23, 2006. Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring Is Warranted. GAO-06-546. Washington, D.C.: April 28, 2006. Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757. Washington, D.C.: July 22, 2005. Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T. Washington, D.C.: June 21, 2005. Coast Guard: Deepwater Program Acquisition Schedule Update Needed. GAO-04-695. Washington, D.C.: June 14, 2004. Contract Management: Coast Guard's Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: March 9, 2004. Coast Guard: Actions Needed to Mitigate Deepwater Project Risks. GAO-01-659T. Washington, D.C.: May 3, 2001.

The Deepwater Program includes efforts to build or modernize ships and aircraft and to procure other capabilities. After a series of project failures, the Coast Guard announced in 2007 that it was taking over the systems integrator role from Integrated Coast Guard Systems (ICGS). At the same time, a $24.2 billion program baseline was established which included schedule and performance parameters at an overall system level. GAO has previously reported on the Coast Guard's progress in establishing individual baselines for Deepwater assets and has made a number of recommendations, which have largely been addressed. In response to the conference report accompanying the Department of Homeland Security (DHS) Appropriations Act, 2010, GAO assessed (1) DHS and Coast Guard acquisition policies and approach to managing the program, (2) whether the program is meeting the 2007 baseline, and (3) Coast Guard efforts to manage and build its acquisition workforce. GAO reviewed Coast Guard and DHS policies and program documents, and interviewed officials. DHS has revised its approach to managing and overseeing Deepwater by making the program subject to its recently finalized acquisition directive, which establishes a number of review points to provide insight into such key documents as baselines and test reports.
DHS has also increased the number of its reviews of individual Deepwater assets. The Coast Guard's own management policies are generally aligned with DHS directives, although operational testing policies are still being revised, and it has developed additional guidance on completion of key requirements documents. In taking on the systems integrator role, the Coast Guard is also decreasing its dependence on ICGS by planning for alternate vendors on some of the assets already in production, as well as awarding and managing work outside of the ICGS contract for other assets. Currently, the Deepwater Program exceeds the 2007 cost and schedule baselines, and given revisions to performance parameters for certain assets, it is unlikely to meet system-level performance baselines. The asset-specific baselines that have been approved to date, while providing greater insight into asset-level capabilities, place the total cost of Deepwater at roughly $28 billion, or $3.8 billion over the $24.2 billion 2007 baseline. The revised baselines also present life-cycle costs, which encompass the acquisition cost as well as costs for operations and maintenance. While the revised baselines show a significant decrease in life-cycle costs, due to changes to assumptions like shorter service lives for assets, the Coast Guard's understanding of them continues to evolve as the agency revisits its assumptions and produces new cost estimates. Costs could continue to grow as four assets currently lack revised cost baselines; among them is the largest cost driver in the Deepwater Program, the Offshore Patrol Cutter. The asset-level baselines also indicate that schedules for some assets are expected to be delayed by several years. Regarding system-level performance, the 2007 baseline may not be achievable, as the Coast Guard has redefined or eliminated key performance indicators for many individual assets, while significant uncertainties surround other assets. Further, a planned analysis to reassess the overall fleet mix for Deepwater was not completed as intended, and a new analysis will include surface assets only. In the meantime, the Coast Guard and DHS are proceeding with acquisition decisions on individual assets. The Coast Guard continues to take steps to address its acquisition workforce needs as it assumes the role of system integrator. For example, it is using a workforce planning model to estimate current and future needs for key acquisition personnel. The Coast Guard has also begun to implement initiatives such as promoting career growth for acquisition professionals. External limitations on the availability of acquisition personnel, coupled with 100 new positions authorized in fiscal year 2010, place the Coast Guard's acquisition directorate vacancy rate at about 20 percent. While it is using contractors in support roles, the Coast Guard has released guidance regarding the roles of government staff in overseeing contractors. GAO recommends that the Coast Guard complete an overall assessment that clarifies the quantities, mix, and cost of assets needed to meet requirements, given that the current Deepwater baseline is no longer feasible, and that the results be reported to Congress. DHS concurred with the recommendation.
Aviation-related activities contribute to local air pollution and produce greenhouse gases that cause climate change. Aircraft account for about 70 to 80 percent of aviation emissions, producing emissions that mainly affect air quality below 3,000 feet and increase greenhouse gases at higher altitudes. At ground level, airport operations, including those of motor vehicles traveling to and from the airport, ground service equipment, and stationary sources such as incinerators and boilers, also produce emissions. Together, aircraft operations in the vicinity of the airport and other airport sources produce emissions such as carbon monoxide, sulfur oxides, particulate matter, nitrogen oxides, unburned hydrocarbons, hazardous air pollutants, and ozone that contribute to air pollution. In addition, these sources emit carbon dioxide and other greenhouse gases that contribute to climate change, but aircraft operations in the upper atmosphere are the primary source of aviation-related greenhouse gases. Carbon dioxide is both the primary aircraft emission and the primary contributor to climate change. It survives in the atmosphere for over 100 years. Furthermore, other gases and particles emitted by aircraft—including water vapor, nitrogen oxides, soot, contrails, and sulfate—can also have an impact on climate, but the magnitude of this impact is unknown, according to FAA. Figure 1 illustrates aviation's impact on air quality and climate. Currently, aviation accounts for a small portion of air pollutants and greenhouse gas emissions. Specifically, aviation emissions represent less than 1 percent of air pollution nationwide, but their impact on air quality could be higher in the vicinity of airports. In addition, aviation accounts for about 2.7 percent of the total U.S. contribution of greenhouse gas emissions, according to the Department of Transportation's Center for Climate Change and Environment. A 1999 study by the United Nations' Intergovernmental Panel on Climate Change (IPCC) estimated that global aircraft emissions generally accounted for approximately 3.5 percent of the warming generated by human activity. As air traffic increases, aviation's contribution to air pollution and climate change could also grow, despite ongoing improvements in fuel efficiency, particularly if other sectors achieve significant reductions. In addition, aviation's impact on air quality is changing as more fuel-efficient, quieter aircraft engines are placed in service. While new aircraft engine technologies have reduced fuel consumption, noise, and emissions of most pollutants, they have not achieved the same level of reductions in nitrogen oxide emissions, which contribute to ozone formation. According to FAA, nitrogen oxide emissions from aviation will increase by over 90 percent by 2025 without improvements in aircraft emissions technologies and air traffic management, and emissions of other air pollutants will also increase, as shown in figure 2. Additionally, aviation's greenhouse gas emissions and potential contribution to climate change are expected to increase. IPCC has estimated that aircraft emissions are likely to grow by 3 percent per year, outpacing the emissions reductions achieved through technological improvements. Furthermore, as emissions from other sources decline, aviation's contribution to climate change may become proportionally larger, according to FAA.
Alternative fuels are not yet available in sufficient quantities for jet aircraft, as they are for some other uses, and therefore aviation cannot yet adopt this approach to reduce its greenhouse gas emissions (see discussion below on U.S. efforts to develop alternative fuels for aviation). Aviation emissions, like other combustible emissions, include pollutants that affect health. While it is difficult to determine the health effects of pollution from any one source, the nitrogen oxides produced by aircraft engines contribute to the formation of ozone, the air pollutant of most concern in the United States and other industrialized countries. Ozone has been shown to aggravate respiratory ailments. A National Research Council panel recently concluded that there is strong evidence that even short-term exposure to ozone is likely to contribute to premature deaths of people with asthma, heart disease, and other preexisting conditions. With improvements in aircraft fuel efficiency and the expected resulting increases in nitrogen oxide emissions, aviation's contribution to ozone formation may increase. In addition, aviation is associated with other air pollutants, such as hazardous air pollutants, including benzene and formaldehyde, and particulate matter, all of which can adversely affect health. Data on emissions of hazardous air pollutants in the vicinity of airports are limited, but EPA estimates that aviation's production of these pollutants is small relative to other sources, such as on-road vehicles. Nevertheless, according to EPA, there is growing public concern about the health effects of the hazardous air pollutants and particulate matter associated with aviation emissions. See appendix I for more detailed information on the health and environmental effects of aviation emissions. Carbon dioxide and other greenhouse gas emissions from aircraft operations in the atmosphere, together with ground-level aviation emissions that gradually rise into the atmosphere, contribute to global warming and climate change. IPCC's most recent report documents mounting evidence of global warming and projects potentially catastrophic effects of climate change. As figure 6 shows, climate change affects precipitation, sea levels, and winds as well as temperature, and these changes in turn will increasingly affect economies and infrastructure around the world. Two key federal efforts, if implemented effectively, can help to reduce aviation emissions—near-term NextGen initiatives and an array of R&D programs over the longer term to fully enable NextGen and to reduce aircraft emissions. The NextGen initiatives are primarily intended to improve the efficiency of the aviation system so that it can handle expected increases in air traffic, but these initiatives can also help reduce aviation emissions. In addition, the federal government, led by FAA and NASA, has longer-term R&D programs in place to improve the scientific understanding of the impact of aviation emissions in order to inform decisions about emissions-reduction strategies, explore potential emissions-reducing alternative fuels, and develop NextGen and aircraft emissions-reduction technologies. Technologies and procedures that are being developed as part of NextGen to improve the efficiency of flight operations can also reduce aircraft emissions. According to FAA, the implementation of NextGen could reduce greenhouse gas emissions from aircraft by up to 12 percent.
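The interaction between traffic growth and one-time efficiency gains can be seen with compound-growth arithmetic. A sketch, assuming aircraft emissions grow at the IPCC's estimated 3 percent per year and treating FAA's up-to-12-percent NextGen figure as a one-time reduction (a simplification for illustration only):

    import math

    growth = 0.03      # IPCC estimate: ~3% annual growth in aircraft emissions
    reduction = 0.12   # FAA: NextGen could cut aircraft greenhouse gases up to 12%

    # Years of 3% growth that offset a one-time 12% reduction:
    print(math.log(1 / (1 - reduction)) / math.log(1 + growth))   # ~4.3 years

    # Doubling time of emissions at 3% annual growth:
    print(math.log(2) / math.log(1 + growth))                     # ~23.4 years

On these assumptions, a one-time 12 percent reduction buys roughly four years of headroom against 3 percent annual growth, which helps explain why aviation's share of emissions is expected to rise absent further technological gains.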
One NextGen technology, considered a centerpiece of NextGen, is the Automatic Dependent Surveillance-Broadcast (ADS-B) system, a satellite-based technology for tracking aircraft. ADS-B is designed, along with other navigation technologies, to enable more precise control of aircraft during en route flight, approach, and descent. ADS-B will allow for closer and safer separations between aircraft and more direct routing, which will improve fuel efficiency and reduce carbon dioxide emissions. This improved control will also facilitate the use of air traffic control procedures that will reduce communities' exposure to aviation emissions and noise. One such procedure, Continuous Descent Arrivals (CDA), allows aircraft to remain at cruise altitudes longer as they approach destination airports, use lower power levels, and thereby lower emissions and noise during landings. Figure 3 shows how CDA compares with the current step-down approach to landing, in which aircraft alternate short descents with level segments that require added thrust, producing more emissions and noise than continuous descents. A limited number of airports have already incorporated CDA into their operations. For example, according to officials from Los Angeles International Airport, nearly 25 percent of landings at their airport use CDA procedures in one of the airport's standard terminal approaches. In addition, United Parcel Service plans to begin using a nighttime CDA procedure, designed and tested at the Louisville International Airport, for its hub operations. Two closely associated NextGen initiatives, Area Navigation (RNAV) and Required Navigation Performance (RNP), have the potential to reduce the environmental impact of aviation by providing enhanced navigational capability to the pilot. RNAV equipment can compute an airplane's position, actual track, and ground speed, and then provide meaningful information on the route of flight selected by the pilot. RNP will permit the airplane to descend on a precise route that will allow it to avoid populated areas, reduce its consumption of fuel, and lower its emissions of carbon dioxide and nitrogen oxides. See figure 4. Currently, over 350 RNAV/RNP procedures are available at 54 airports, including Dallas/Fort Worth, Miami International, Washington Dulles, and Atlanta Hartsfield. Still another NextGen initiative, High-Density Terminal and Airport Operations, is intended to improve the efficiency of aircraft operations at busy airports, and, in the process, reduce emissions. At high-density airports, the demand for access to runways is high, and arrivals and departures take place on multiple runways. The combination of arrivals, departures, and taxiing operations may result in congestion, which in turn produces delays, emissions, and noise as aircraft wait to take off and land. Under the High-Density Terminal and Airport Operations initiative, which FAA has just begun to implement, aircraft arriving and departing from different directions would be assigned to multiple runways and safely merged into continuous flows despite bad weather and low visibility. To guarantee safe separation, these airports would need enhanced navigation capabilities and controllers with access to increased automation. Under this initiative, aircraft would also move more efficiently on the ground, using procedures that are under development to reduce spacing and separation requirements and improve the flow of air traffic into and out of busy metropolitan airspace.
More efficient aircraft movement would increase fuel efficiency and reduce emissions and noise. Although the implementation of this initiative is in the early stages, FAA has identified the R&D needed to move it forward. Technologies and procedures planned for NextGen should also help improve the efficiency of flights between the United States and other nations, further reducing emissions, particularly of greenhouse gases. A test program scheduled to begin in the fall of 2008, known as the Atlantic Interoperability Initiative to Reduce Emissions (AIRE), sponsored by FAA, the European Commission, Boeing, and Airbus, will involve gate-to-gate testing of improved procedures on the airport surface, during departures and arrivals, and while cruising over the ocean. Some of the procedures to be tested will use technologies such as ADS-B. A similar effort—the Asia and South Pacific Initiative to Reduce Emissions (ASPIRE)—was launched earlier this year, involving the United States, Australia, and New Zealand. We have previously reported that the federal government and industry have achieved significant reductions in some aircraft emissions, such as carbon dioxide, through past R&D efforts, and federal officials and aviation experts agree that such efforts are the most effective means of achieving further reductions in the longer term. As part of a national plan for aeronautics R&D, issued by the White House Office of Science and Technology Policy, the federal government supports a comprehensive approach to R&D on aviation emissions that involves FAA, NASA, and other federal agencies. According to FAA, this approach includes efforts to improve the scientific understanding of the nature and impact of aviation emissions and thereby inform the development of more fuel-efficient aircraft, of alternative fuels that can reduce aircraft emissions, and of air traffic management technologies that further improve the efficiency of aviation operations. NASA, industry, and academia are important partners in these efforts. Notably, however, the development of breakthrough technologies, such as highly fuel-efficient aircraft engines that emit fewer greenhouse gases and air pollutants, is expensive and can take a long time, both to conduct the research and to implement the new technologies in new aircraft designs and introduce these new aircraft into the fleet. Successfully developing these technologies also requires the support and cooperation of stakeholders throughout the aviation industry. Improving the scientific understanding of aviation emissions can help guide the development of approaches to reducing emissions by improving the ability of aircraft manufacturers, operators, and policy makers to assess the environmental benefits and costs of alternative policy measures. Such an assessment can then lead to the selection of the alternative that will achieve the greatest net environmental benefits. For example, one technology might greatly increase fuel efficiency, but produce higher nitrogen oxide emissions than another, somewhat less fuel-efficient technology. Overall, a cost-benefit analysis might indicate that the less fuel-efficient technology would produce greater net benefits for the environment. FAA supports several recent federal efforts to better quantify aviation emissions and their impact through improvements in emissions measurement techniques and modeling capability.
One of these efforts is FAA's Partnership for AiR Transportation Noise and Emissions Reduction (PARTNER) Center of Excellence. Created in 2003, PARTNER carries on what representatives of airlines, aircraft and engine manufacturers, and experts in aviation environmental research have described as a robust research portfolio. This portfolio includes efforts to measure aircraft emissions and to assess the human health and welfare risks of aviation emissions and noise. For example, researchers are developing an integrated suite of analytical tools—the Environmental Design Space, the Aviation Environmental Design Tool, and the Aviation Environmental Portfolio Management Tool—that can be used to identify interrelationships between noise and emissions. Data from these tools, including the Aviation Environmental Design Tool being developed by the Volpe National Transportation Systems Center and others, will allow for assessing the benefits and costs of aviation environmental policy options.

Another R&D initiative, the Airport Cooperative Research Program (ACRP), conducts applied research on aviation emissions and other environmental issues facing airports. The program is managed by the National Academies through the Transportation Research Board under a contract with FAA, which provided $10 million for the program in both 2007 and 2008 and is seeking, through its reauthorization, to increase these investments with a specific focus on aviation environmental issues. Several of the emissions-related projects undertaken through ACRP have concentrated on developing methods to measure particulate matter and hazardous air pollutants at airports in order to identify the sources of these pollutants and determine whether their levels could have adverse health effects. FAA has also developed an Aviation Emissions Characterization roadmap to provide a systematic process for enhancing understanding of aviation's air quality emissions, most notably particulate matter and hazardous air pollutants. In addition, FAA, in conjunction with NASA and the National Oceanic and Atmospheric Administration, launched the Aviation Climate Change Research Initiative to develop the scientific understanding necessary for informing efforts to limit or reduce aviation greenhouse gas emissions.

Another effort, the Commercial Aviation Alternative Fuels Initiative (CAAFI), led by FAA together with airlines, airports, and manufacturers, is intended to identify and eventually develop alternative fuels for aviation that could lower emissions of greenhouse gases and other pollutants, increase fuel efficiency, and reduce U.S. dependence on foreign oil. CAAFI supports research on low-carbon fuels from sources such as plant oils, algae, and biomass that are as safe as petroleum-based fuel and compare favorably in terms of environmental impact. Part of the research will involve assessing the environmental impact of alternative fuels to determine whether their use could reduce emissions of pollutants that affect climate and air quality. The research will also assess the impact of producing these fuels on the overall carbon footprint, as illustrated in the sketch below. The CAAFI sponsors have set goals of certifying a 50 percent synthetic fuel for aviation use in 2008 and a 100 percent synthetic fuel for use by 2010, as well as a biofuel made from renewable resources such as palm, soy, or algae oils.
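As a rough, hypothetical illustration of that "well-to-wake" accounting, the sketch below compares a conventional fuel with a biofuel under two assumed production pathways. The numbers are placeholders chosen only to show the structure of the calculation, not CAAFI or FAA estimates (the combustion figure of about 73 grams of CO2 per megajoule is a commonly cited approximation for jet fuel).

# Illustrative only: a toy "well-to-wake" carbon accounting of the kind the
# CAAFI research would perform far more rigorously. All numbers are
# hypothetical placeholders.

def lifecycle_co2(production_g_per_mj, combustion_g_per_mj,
                  biogenic_credit_g_per_mj=0.0):
    # Grams of CO2-equivalent per megajoule of fuel energy.
    return production_g_per_mj + combustion_g_per_mj - biogenic_credit_g_per_mj

# Conventional jet fuel: extraction/refining emissions plus full combustion CO2.
conventional = lifecycle_co2(production_g_per_mj=15.0, combustion_g_per_mj=73.0)

# Hypothetical biofuel: combustion CO2 largely offset by CO2 absorbed while
# the feedstock grew, but production (farming, processing) emissions vary.
biofuel_low_input = lifecycle_co2(30.0, 73.0, biogenic_credit_g_per_mj=70.0)
biofuel_high_input = lifecycle_co2(70.0, 73.0, biogenic_credit_g_per_mj=70.0)

for label, value in [("conventional jet fuel", conventional),
                     ("biofuel, low-input feedstock", biofuel_low_input),
                     ("biofuel, high-input feedstock", biofuel_high_input)]:
    print(f"{label}: {value:.0f} gCO2e/MJ")

The point is that a biofuel's biogenic combustion credit can be partly offset, or even erased, by emissions from growing and processing the feedstock, which is why the research assesses the full production chain.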
As part of CAAFI, Virgin Atlantic Airways, together with Boeing, has tested a blend of kerosene (normal jet fuel) and biofuels in a flight from London to Amsterdam, and Continental, in association with Boeing and jet engine manufacturer General Electric, is planning a similar test in 2009.

NASA has devoted a substantial portion of its aeronautical R&D program to the development of technologies critical to the implementation of NextGen, as well as new aircraft and engine technologies, both of which can help reduce aviation emissions. NASA has three main aeronautics research programs (Fundamental Aeronautics, Aviation Safety, and Airspace Systems), each of which contributes directly and substantially to NextGen. For example, the Airspace Systems program supports research on air traffic management technologies for NextGen, and the Fundamental Aeronautics program focuses on removing environmental and performance barriers, such as noise and emissions, that could constrain the capacity enhancements needed to accommodate projected air traffic increases. Appendix II describes in more detail how NASA's aeronautics R&D programs support the implementation of NextGen.

NASA also works with aircraft and aircraft engine manufacturers to increase fuel efficiency and reduce emissions. These efforts have contributed to a number of advancements in aircraft engine and airframe technology, and NASA's R&D on emissions-reduction technologies continues. NASA has set technology-level goals for reducing greenhouse gases, nitrogen oxides, and noise, which have become part of the U.S. National Aeronautics Plan. For example, the plan includes a goal of developing technologies that could reduce nitrogen oxide emissions during landings and takeoffs by 70 percent below the current ICAO standard. The plan also sets a goal of increasing fuel efficiency (and thereby decreasing greenhouse gas emissions) by 33 percent. These technologies would be incorporated by 2015 in the next generation of aircraft, which NASA refers to as N+1. However, as NASA officials note, these goals must be viewed in the context that each can be fully met only if it is the only goal pursued. For example, the goal for reducing nitrogen oxides can be fully achieved only at the expense of the goals for lowering greenhouse gas emissions and noise, because it is technologically challenging to design aircraft that can simultaneously reduce all of these environmental impacts. For the longer term (2020), NASA is focusing on developing tools and technologies for use in the design of advanced hybrid wing-body aircraft, the following generation of aircraft, or N+2. Emissions from these aircraft would be in the range of 80 percent below the ICAO standard for nitrogen oxide emissions during landings and takeoffs, and fuel consumption would be 40 percent less than for current aircraft.

The U.S. aircraft and engine manufacturing industry has also set goals for reducing emissions in the engines it plans to produce. According to the Aerospace Industries Association, which represents this industry, its members have set a goal of reducing carbon dioxide emissions by 15 percent in the next generation of aircraft while continuing to significantly reduce nitrogen oxide emissions and noise.

The development of aircraft technologies such as those NASA is currently working on to reduce emissions can take a long time, and it may be years before the technologies are ready to be incorporated into new aircraft designs.
According to FAA, the development process generally takes 12 to 20 years. For example, the latest Pratt & Whitney engine, the geared turbofan, which is expected to achieve significant emissions and noise reductions, took 20 years to develop.

Next steps in reducing aviation emissions include actions that FAA and others can take to move the implementation of NextGen forward and to support R&D on NextGen and emissions-reduction technologies, as well as efforts to address the technical, financial, and regulatory challenges facing the federal government, the aviation industry, and Congress. Implementing NextGen expeditiously is essential to handle the projected growth in air traffic efficiently and safely and, in so doing, help reduce aircraft emissions. Steps to advance NextGen's implementation include management improvements and the deployment of available NextGen components.

Several management actions are important to advance the implementation of NextGen. One such action is to establish a governance structure within FAA that will move NextGen initiatives forward efficiently and effectively. FAA has begun to establish a governance structure for NextGen, but it may not be designed to give NextGen initiatives sufficient priority to ensure the system's full implementation by 2025. Specifically, FAA's implementation plan for NextGen is called the Operational Evolution Partnership (OEP). The manager responsible for OEP is one of nine Vice Presidents who report to the Chief Operating Officer (COO) of FAA's Air Traffic Organization (ATO), who in turn reports directly to the FAA Administrator. While the manager responsible for OEP is primarily responsible for implementing NextGen, other Vice Presidents are responsible for NextGen-related activities in their designated areas. In addition, the FAA managers responsible for airports and aviation safety issues are Associate Administrators who report through the Deputy FAA Administrator to the FAA Administrator. Some of the activities for which these Associate Administrators are responsible are critical to NextGen's implementation, yet there is no direct line of authority between the OEP manager and these activities.

Some congressional leaders and other stakeholders, including aviation industry representatives and aviation experts, view FAA's management structure for NextGen as too diffuse. Some of these stakeholders have called for the establishment of a position or NextGen program office that reports directly to the FAA Administrator to ensure accountability for NextGen results. These stakeholders have expressed frustration that a program as large and important as NextGen does not follow the industry practice of having one person with the authority to make key decisions. They point out that although the COO is nominally in charge of NextGen, the COO must also manage FAA's day-to-day air traffic operations and may therefore not be able to devote enough time and attention to managing NextGen. In addition, these stakeholders note that many of NextGen's capabilities span FAA operational units whose heads are on the same organizational level as the head of OEP or are outside ATO, and they believe that an office above OEP and these operational units is needed. In prior work, we have found that programs can be implemented most efficiently when managers are empowered to make critical decisions and are held accountable for results.

Another management action is needed to help ensure that FAA acquires the skills required for implementation, such as contract management and systems integration skills.
Because of the scope and complexity of the NextGen implementation effort, FAA may not have the in-house expertise to manage it without assistance. In November 2006, we recommended that FAA examine its strengths and weaknesses and determine whether it has the technical expertise and contract management expertise that will be needed to define, implement, and integrate the numerous complex programs inherent in the transition to NextGen. In response to our recommendation, FAA has contracted with the National Academy of Public Administration (NAPA) to determine the mix of skills and number of skilled persons, such as technical personnel and program managers, needed to implement the new OEP and to compare those requirements with FAA's current staff resources. In December 2007, NAPA provided FAA with its report on the types of skills FAA will require to implement NextGen, and it has undertaken a second part of the study that focuses on identifying any skill gaps between FAA's current staff and the staff that would be required to implement NextGen. NAPA officials told us that they expect to publish the findings of the second part of the study in the summer of 2008. We believe this is a reasonable approach that should help FAA begin to address this challenge as soon as possible, since it may take considerable time to select, hire, train, and integrate into the NextGen initiative what could be a large number of staff. In prior work, we found that program managers at highly successful companies were empowered to decide whether programs were ready to move forward and to resolve problems and implement solutions, and that they were held accountable for their choices. Some stakeholders have also suggested that FAA consider using a lead systems integrator (LSI), that is, a prime contractor who would help to ensure that the discrete systems used in NextGen will operate together and whose responsibilities may include designing system solutions, developing requirements, and selecting major system and subsystem contractors. However, this approach would require careful oversight to ensure that the government's interests are protected and could pose significant project management and oversight challenges for the Joint Planning and Development Office (JPDO), the organization within FAA responsible for planning NextGen, and for FAA.

Moving from planning to implementing some components of NextGen can begin to demonstrate the potential of the system as well as reduce congestion in some areas of the country, thereby also reducing emissions. Many of the technologies and procedures planned for NextGen are already available, and a few have been implemented individually, such as the CDA procedures in use in Los Angeles and Louisville and ADS-B in Alaska. However, the available technologies and procedures have not yet been deployed simultaneously to demonstrate that they can be operated safely as an integrated suite of technologies and procedures in the national airspace system. Several stakeholders have suggested that FAA consider a gradual rollout of NextGen technologies and procedures in a particular area. For example, ADS-B technologies, CDA and RNAV/RNP procedures, and high-density airport operations could be deployed in a defined area of the current system, possibly in sequence over time, to test their combined use and demonstrate the safety of an integrated suite of NextGen advancements.
Such a graduated rollout is sometimes referred to as "NextGen Lite." FAA is currently considering a demonstration project in Florida and Georgia, in which it, together with aviation equipment manufacturers and municipalities, would use the NextGen capabilities of ADS-B, RNAV, and RNP for on-demand air taxi fleet operations. As other NextGen capabilities, such as System-Wide Information Management (SWIM), are deployed and as air taxi fleet operations move to other airports and regions, the demonstration would be expanded to include those new capabilities and other airports and regions. According to the airlines and other stakeholders we interviewed, a demonstration of the successful integration of NextGen capabilities and of the efficiencies resulting from their use would give the airlines an incentive to equip their aircraft with NextGen technologies. They could then lower their costs by reducing their fuel consumption and decrease the impact of their operations on the environment. The findings from our research indicate that such regional or targeted demonstrations could accelerate the delivery of NextGen benefits while helping to ensure safe operations within the current system. In addition, demonstrations can increase stakeholders' confidence in the overall NextGen initiative.

Federal funding for aeronautics research, the category that includes work on aviation emissions, has declined over the past decade, particularly for NASA, which historically provided most of the funding for this type of research. NASA's current aeronautics research budget is about half of what it was in the mid-1990s. Moreover, the budget request for aeronautics R&D for fiscal year 2009 is $447 million, about 25 percent less than the $594 million provided in fiscal year 2007. (See table 1.) According to NASA, about $280 million of the proposed $447 million would contribute to NextGen. In addition, according to NASA officials, a significant portion of the funding for subsonic fixed-wing aircraft research is directed toward emissions-related work, and many other research efforts contribute directly or indirectly to potential emissions-reduction technologies.

As its funding for aeronautics R&D has declined, NASA has emphasized fundamental research, which serves as the basis for developing technologies and tools that can later be integrated into aviation systems, and has focused less on developmental and demonstration work. As a result, NASA is now sometimes developing technologies to a lower maturity level than in the past, leaving them less ready for manufacturers to adopt and creating a gap in the research needed to bring technologies to a level at which they can be transferred to industry for further development. Failure to address this gap could postpone the development of emissions-reduction technologies. As a partial response, the administration has proposed some additional funding for FAA that could be used to further develop NASA's and others' emissions- and noise-reduction technologies. Specifically, FAA's reauthorization proposal seeks $111 million through fiscal year 2011 for the Continuous Lower Energy, Emissions, and Noise (CLEEN) Engine and Airframe Technology Partnership, which FAA officials said is intended to provide for earlier maturation of emissions and noise technologies while NASA focuses on longer-term fundamental research on noise and emissions.
The CLEEN partnership, which is also contained in the House's FAA reauthorization bill, would create a program for the development and maturation, over the next 10 years, of certifiable engine and airframe technologies that would reduce aviation noise and emissions. The legislation would require the FAA Administrator, in coordination with the NASA Administrator, to establish objectives for developing the aircraft technology outlined in the legislation. The technologies to be developed would increase aircraft fuel efficiency enough to reduce greenhouse gas emissions by 25 percent relative to 1997 subsonic jet aircraft technology and, without increasing other gaseous or particle emissions, reduce takeoff-cycle nitrogen oxide emissions by 50 percent relative to ICAO's standard. Although FAA's reauthorization bill has not yet been enacted, the administration's proposed fiscal year 2009 budget includes $10 million for the CLEEN program.

The CLEEN program would be a first step toward further maturing emissions- and noise-reduction technologies, but experts we consulted questioned whether the proposed funding is sufficient to achieve the needed emissions reductions. While acknowledging that CLEEN would help bridge the gap between NASA's R&D and manufacturers' eventual incorporation of technologies into aircraft designs, aeronautics industry representatives and experts we consulted said that the program's funding levels may not be sufficient to attain the goals specified in the proposal. According to these experts, the proposed funding levels would allow for the further development of one or possibly two projects. Moreover, in one expert's view, the funding for these projects may be sufficient only to develop a technology to the level that achieves an emissions-reduction goal in testing, not to the level required for the technology to be incorporated into a new engine design. Nevertheless, according to FAA and some experts we consulted, the CLEEN program amounts to a pilot project, and if it results in the development of emissions-reduction technologies that can be introduced into aircraft in the near future, it could lead to additional funding from the government or industry for such efforts.

FAA and NASA have identified the R&D that is needed for NextGen but have not determined what needs to be done first, or at what cost, to demonstrate and integrate NextGen technologies into the national airspace system. Completing this prioritization is critical to avoid spending limited funds on lower-priority efforts or conducting work out of sequence. Once the identified R&D has been prioritized and scheduled, cost estimates can be developed and funds budgeted. Prioritizing research needs is an essential step in identifying the resources required to undertake the research.

The European Union is investing substantially in R&D that can lead to fuel-efficient, environmentally friendly aircraft. In February 2008, the European Union announced the launch of the Clean Sky Joint Technology Initiative, with total funding of $2.4 billion over 7 years—the European Union's largest-ever research program. The initiative establishes a Europe-wide partnership between industry, universities, and research centers and aims to reduce aircraft emissions of carbon dioxide and nitrogen oxides by up to 40 percent and aircraft noise levels by 20 decibels. According to FAA, it is difficult to compare funding levels for U.S.
and European R&D efforts because of differences in program structures and funding mechanisms. Nevertheless, foreign government investments of such magnitude in R&D on environmentally beneficial technologies could reduce the competitiveness of the U.S. aircraft manufacturing industry, since greater investments are likely to lead to greater improvements in fuel efficiency, and such improvements both help keep manufacturers competitive in the global economy and reduce aviation's impact on the environment.

Reducing aviation emissions will require technological advances, the integration of lower-emitting aircraft and NextGen technologies into airline fleets, and strengthened or possibly new regulations to improve air quality and limit greenhouse gas emissions. Fulfilling these requirements will pose challenges to aviation because of the technical difficulties involved in developing technologies that can simultaneously address air pollutants, greenhouse gases, and noise; constraints on the airline industry's resources to invest in the new aircraft and technologies needed to reduce emissions and remain competitive; and the impact that emissions regulations can have on the aviation system's expansion and the financial health of the aviation industry.

Although the aviation industry has made strides in lowering emissions, more reductions are needed to keep pace with the projected growth in aviation, and achieving these reductions will be technically challenging. NASA's efforts to improve jet engine designs illustrate this challenge: While new designs have increased fuel efficiency, reduced most emissions, and lowered noise, they have not achieved comparable reductions in nitrogen oxide emissions. Nitrogen oxide emissions have increased because new aircraft engines operate at higher temperatures, producing more power with less fuel and lower carbon dioxide and carbon monoxide emissions, but also producing higher nitrogen oxide emissions, particularly during landings and takeoffs, when engine power settings are at their highest. It is during the landing/takeoff cycle that nitrogen oxide emissions also have the greatest impact on air quality; as discussed, nitrogen oxides contribute to ground-level ozone formation. Similarly, as we noted in a report on NASA's and FAA's aviation noise research earlier this year, it is technologically challenging to design aircraft engines that simultaneously produce less noise and fewer greenhouse gas and other emissions. Although it is possible to design such engines, the reductions in greenhouse gases could be limited in engines that produce substantially less noise. NASA and industry are working on technologies to address these environmental trade-offs. For example, the Pratt & Whitney geared turbofan engine mentioned earlier is expected to cut nitrogen oxide emissions in half while also improving fuel efficiency and thereby lowering carbon dioxide emissions. Nevertheless, it remains technologically challenging to design aircraft that can reduce one environmental concern without increasing another. In a 2004 report to Congress on aviation and the environment, FAA noted that the interdependencies among various policy, technological, and operational options for addressing the environmental impacts of aviation, and the full economic consequences of these options, had not been appropriately assessed.
However, in recent years, FAA has made progress in this area, including its sponsorship of the previously mentioned PARTNER work on the interrelationships between noise and emissions, which can be used to assess the costs and benefits of aviation environmental policy options.

Most U.S. airlines have stated that they plan to invest in aircraft and technologies that can increase fuel efficiency and lower emissions, but in the near term, integrating new aircraft into the fleet, or retrofitting aircraft with technologies that can improve their operational efficiency, poses financial challenges to the airline industry. Aircraft have an average lifespan of about 30 years, and an airline can take almost that entire period to pay for an aircraft. The current fleet is, on average, about half that age—11 years for wide-body aircraft and 14 years for narrow-body aircraft—and is therefore expected to remain in operation for many years to come. In addition, the financial pressures facing many airlines make it difficult for them to upgrade their fleets with new, state-of-the-art aircraft, such as the Boeing 787 and the Airbus A380, which are quieter and more fuel efficient, emitting lower levels of greenhouse gases. Currently, U.S. carriers have placed a small proportion (40, or less than 6 percent) of the over 700 orders that Boeing officials say the company has received for its 787 model, and no U.S. carriers have placed orders for the new Airbus A380. These financial pressures also limit the airlines' ability to equip new and existing aircraft with NextGen technologies such as ADS-B that can enable more efficient approaches and descents, resulting in lower emissions levels. FAA estimates that it will cost the industry about $14 billion to equip aircraft to take full advantage of NextGen.

Delays by airlines in introducing more fuel-efficient, lower-emitting aircraft into the U.S. fleet and in equipping or retrofitting the fleet with the technologies necessary to operate in NextGen could limit FAA's ability to efficiently manage the forecasted growth in air traffic. Without significant reductions in emissions and noise around the nation's airports, efforts to expand their capacity could be stalled and the implementation of NextGen delayed because of concerns about the impact of aviation emissions. As we previously reported, offering operational advantages, such as preferred takeoff and landing slots, to fuel-efficient, lower-emitting aircraft or aircraft equipped with ADS-B could create incentives for the airlines to invest in the necessary technologies. Similarly, as noted, deploying an integrated suite of NextGen technologies and procedures in a particular region could create incentives for carriers to equip their aircraft with NextGen technologies.

Concerns about the health effects of air pollutants have led to more stringent air quality standards that could increase the costs or delay the implementation of airport expansion projects. In recent years, EPA has been implementing a more stringent standard for ozone emissions to better protect the health of people exposed to ozone, and this standard could require more airports to tighten controls on nitrogen oxides and some types of volatile organic compounds that also contribute to ozone formation. Under the current standard, 122 airports are located in areas designated as nonattainment areas, including 43 of the 50 busiest U.S. commercial service airports.
In March 2008, EPA further revised the ozone standard because new evidence demonstrated that exposure to ozone at levels below the previous standard is associated with a broad array of adverse health effects. This recent revision will increase the number of U.S. counties, and hence airports, in nonattainment. EPA estimated that the number of affected counties could grow from 104 to 345 nationwide. While the exact number of airports that will be affected has not been officially determined, FAA estimates that a modest number of commercial service airports in California, Arizona, Utah, Texas, Oklahoma, Arkansas, and along the Gulf Coast to Florida will be in nonattainment areas for the revised 8-hour ozone standard. According to EPA, any development project beginning in 2011 at these airports would have to conform to the applicable state implementation plan.

As communities gain more awareness of the health and environmental effects of aviation emissions, opposition to airport expansion projects, which has thus far focused primarily on aviation noise, could broaden to include emissions. According to a California air quality official, many of the same communities that have interacted with airports over aviation noise have more recently recognized that they could also be affected by emissions from airport sources. In Europe, concerns about the impact of aviation on air quality and climate change have led to public demands for tighter control over aircraft emissions, and these demands have hindered efforts to expand airports in Birmingham and London (Heathrow). Moreover, a plan to expand London's Stansted Airport was rejected because of concerns about climate change that could result from additional emissions. To minimize constraints on the future expansion of airport capacity stemming from concerns about the health and environmental effects of aviation emissions, it will be important for airports, the federal and state governments, and the airline industry to work together to accurately characterize and address these concerns and to take early action to mitigate emissions. As noted, constraints on efforts to expand airports or aviation operations could affect the future of aviation because the national airspace system cannot expand as planned without a significant increase in airport capacity. The doubling or tripling of air traffic that FAA expects in the coming decades cannot occur without additional airports and runways.

Concerns about the environmental effects of greenhouse gas emissions have grown steadily over the years, leading to national and international efforts to limit them. In the United States, EPA has not regulated greenhouse gas emissions; however, Congress is taking steps to deal with climate change, some of which could include market-based measures that would affect the aviation industry. For example, several bills were introduced in the 110th Congress to initiate cap and trade programs for greenhouse gas emissions. None of these bills would include aviation directly in a cap and trade program. However, some could have indirect consequences for the aviation industry by, for example, requiring fuel producers to purchase allowances through the system to cover the greenhouse gas content of the fuel they sell to the aviation sector. The cost of purchasing these allowances could be passed on to fuel consumers, including airlines, raising the cost of jet fuel. Fuel is already the airline industry's largest cost.
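The rough arithmetic of that pass-through is easy to sketch. In the hypothetical example below, the emissions factor of about 9.6 kilograms of CO2 per gallon of jet fuel is a commonly cited approximation; the allowance price and the carrier's annual fuel burn are invented for illustration.

# Illustrative only: rough arithmetic for how an upstream allowance
# requirement could flow through to jet fuel prices.

CO2_PER_GALLON_KG = 9.6           # approximate combustion CO2 per gallon of jet fuel
ALLOWANCE_PRICE_PER_TONNE = 25.0  # hypothetical allowance price, $/metric ton CO2

added_cost_per_gallon = (CO2_PER_GALLON_KG / 1000.0) * ALLOWANCE_PRICE_PER_TONNE
print(f"Added cost: ${added_cost_per_gallon:.2f} per gallon")  # about $0.24

# For a carrier burning, say, 1 billion gallons per year (hypothetical),
# the pass-through would be on the order of $240 million annually.
annual_gallons = 1_000_000_000
print(f"Annual cost: ${added_cost_per_gallon * annual_gallons / 1e6:,.0f} million")

Because the added cost scales linearly with the allowance price, a $50-per-ton price would roughly double the per-gallon figure.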
According to the Air Transport Association, cap and trade programs that significantly increase airline fuel costs could have significant consequences for the industry, and such programs could make it more difficult for carriers to pay for aircraft or technologies that would reduce greenhouse gas emissions. As we have previously noted, cap and trade programs can cost-effectively reduce emissions of greenhouse gases such as carbon dioxide, especially when compared with other regulatory programs. However, it is important that the impact of such measures on various sectors of the economy, such as the aviation industry, be thoroughly considered.

Internationally, ICAO has not set standards for aircraft carbon dioxide emissions, but it has been working, with the support of FAA, other government aviation authorities, and the aviation industry, to develop a strategy for addressing the impact of aviation on climate change, as one of several efforts to address climate change. For example, ICAO published a manual for countries, Operational Opportunities to Minimize Fuel Use and Reduce Emissions. In 2004, ICAO endorsed the development of an open emissions trading system as one option countries might use and endorsed draft guidance for member states on establishing the structural and legal basis for aviation's participation in a voluntary open trading system. The guidance includes information on key elements of a trading system, such as reporting, monitoring, and compliance, while encouraging flexibility to the maximum extent possible. In adopting the guidance last fall at the ICAO Assembly, all 190 Contracting States—with the exception of those in the European Union—agreed that the inclusion of one country's airlines in another country's emissions trading system should be based on mutual consent between governments. Consistent with the requirement to pursue reductions of greenhouse gas emissions from international aviation through ICAO, some countries that have included the aviation sector in their emissions trading systems or other emissions-reduction efforts have excluded international flights. Consequently, these countries' efforts will not affect U.S. airlines that fly into their airports.

The European Union (EU), however, is developing legislation, not yet finalized, that would include both domestic and international aviation in an emissions trading scheme. As proposed, the EU's scheme would apply to air carriers flying within the EU and, beginning in 2012, to carriers, including U.S. carriers, flying into and out of EU airports. For example, under the EU proposal, a U.S. airline's emissions in domestic airspace as well as over the high seas would require permits if a flight landed at or departed from an EU airport. Airlines whose aircraft emit carbon dioxide at levels exceeding prescribed allowances would be required to reduce their emissions or purchase additional allowances. Although the legislation seeks to include U.S. airlines within the emissions trading scheme, FAA and industry stakeholders have argued that U.S. carriers would not legally be subject to the legislation. While the EU's proposal to include international aviation in its emissions trading system is intended to help forestall the potentially catastrophic effects of climate change, according to FAA and the airlines, it will also affect the aviation industry's financial health. In particular, according to FAA and airline and aircraft and engine manufacturing industry representatives, the EU's proposal could disadvantage U.S.
airlines, which have older, less fuel-efficient fleets than their European competitors. Paying for emissions credits could, according to U.S. airlines, also leave them with less money for other purposes, including investing in newer, more fuel-efficient aircraft and technologies to improve flight efficiency and reduce fuel usage. Furthermore, according to U.S. carriers, the proposed trading scheme unfairly penalizes the aviation sector because it lacks a readily available non-carbon-based alternative fuel, whereas other sectors can use alternative fuels to reduce their emissions. The governments of many nations, including the United States, oppose the European Union's proposal to unilaterally include international aviation in its emissions trading system because the proposed approach is not consistent with ICAO guidance. Furthermore, such an approach could be inconsistent with international aviation agreements and may not be enforceable. According to FAA, the EU's inclusion of aviation in its emissions trading scheme violates the Chicago Convention on International Civil Aviation and other international agreements. FAA further notes that the EU proposal ignores differences between the U.S. and EU aviation systems and disregards a performance-based approach in which countries decide which measures are most appropriate for meeting emissions goals. We are currently undertaking, for this Subcommittee, a study of the EU emissions trading system, its potential impact on U.S. airlines, and other issues relating to aviation and climate change.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. For further information on this testimony, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or by email at [email protected]. Individuals making key contributions to this testimony include Ed Laughlin, Lauren Calhoun, Bess Eisenstadt, Jim Geibel, Rosa Leung, Josh Ormond, Richard Scott, and Larry Thomas.

Appendix I: Health and welfare effects of pollutants associated with aviation emissions

Ozone. Health effects: lung function impairment, effects on exercise performance, increased airway responsiveness, increased susceptibility to respiratory infection, increased hospital admissions and emergency room visits, pulmonary inflammation, and lung structure damage (long term); results from animal studies indicate that repeated exposure to high levels of ozone for several months or more can produce permanent structural damage in the lungs. Welfare effects: ozone is also responsible for several billion dollars of agricultural crop yield loss in the United States each year.

Carbon monoxide. Health effects: most serious for those who suffer from cardiovascular disease; healthy individuals are also affected, but only at higher levels of exposure. Exposure to elevated carbon monoxide levels is associated with visual impairment, reduced work capacity, reduced manual dexterity, poor learning ability, and difficulty in performing complex tasks. Welfare effects: adverse health effects on animals similar to effects on humans.

Nitrogen oxides. Health effects: lung irritation and lower resistance to respiratory infections. Welfare effects: acid rain, visibility degradation, and particle formation; nitrogen oxides contribute toward ozone formation and act as a greenhouse gas in the atmosphere and, therefore, may contribute to climate change.

Particulate matter. Health effects: effects on breathing and respiratory systems, damage to lung tissue, cancer, and premature death; the elderly, children, and people with chronic lung disease, influenza, or asthma tend to be especially sensitive to the effects of particulate matter.
Welfare effects: visibility degradation, damage to monuments and buildings, and safety concerns for aircraft from reduced visibility.

Volatile organic compounds. Health effects: eye and respiratory tract irritation, headaches, dizziness, visual disorders, and memory impairment. Welfare effects: contribute to ozone formation and odors and have some damaging effect on buildings and plants.

Water vapor. Welfare effects: acts as a greenhouse gas in the atmosphere and, therefore, may contribute to climate change; contrails and contrail-induced clouds produce a warming effect regionally where aircraft fly.

Sulfur dioxide. Health effects: effects on breathing, respiratory illness, alterations in pulmonary defenses, and aggravation of existing cardiovascular disease. Welfare effects: together, sulfur dioxide and nitrogen oxides are the major precursors to acid rain, which is associated with the acidification of lakes and streams, accelerated corrosion of buildings and monuments, and reduced visibility.

Appendix II: NextGen R&D needs and supporting NASA aeronautics research

NextGen need: safety management procedures that can predict, rather than respond to, safety risks in a high-density, complex operating environment; research to support safety analysis; development of advanced materials for continued airworthiness of aircraft; aircraft system and equipage management; and adaptive aircraft control systems to allow the crew and aircraft to recover from unsafe conditions. NASA support: under its Aviation Safety program, NASA research supports the development of Safety Management Systems to provide a systematic approach to managing safety risks; integrates prediction and mitigation of risks prior to aircraft accidents or incidents; and shares safety-related information through programs such as the Aviation Safety Analysis and Information Sharing program.

NextGen need: improved air traffic management technologies to manage airspace configuration, support increases in the volume and complexity of traffic demands, mitigate weather impacts, maintain safe and efficient operations at airports, decrease runway incursions, and address wake vortex issues. NASA support: under its Airspace Systems program, NASA research supports the development of variable separation standards based on aircraft performance levels in the en route environment; trajectory-based operations, traffic spacing, merging, metering, flexible terminal airspace, and expanded airport access; technologies and procedures for safe runway operations in low-visibility conditions; coordinated arrival/departure management; and mitigation of weather and wake vortex issues.

NextGen need: management of aviation growth to meet the complexity of operations within the NextGen environment, regulation and certification of new manned and unmanned aircraft, and management of operations in an environmentally sound manner. NASA support: under its Fundamental Aeronautics program, NASA research supports the development of improved performance for the next generation of conventional subsonic aircraft, rotorcraft, and supersonic aircraft and develops methods for an environmental management system to measure and assess reductions in air quality impact, noise, and emissions.

Related GAO Products

Aviation and the Environment: FAA's and NASA's Research and Development Plans for Noise Reduction Are Aligned, but the Prospects of Achieving Noise Reduction Goals Are Uncertain. GAO-08-384. Washington, D.C.: February 15, 2008.

Aviation and the Environment: Impact of Aviation Noise on Communities Presents Challenges for Airport Operations and Future Growth of the National Airspace System. GAO-08-216T. Washington, D.C.: October 24, 2007.

Climate Change: Agencies Should Develop Guidance for Addressing the Effects on Federal Land and Water Resources. GAO-07-863. Washington, D.C.: August 7, 2007.
Responses to Questions for the Record, Hearing on the Future of Air Traffic Control Modernization. GAO-07-928R. Washington, D.C.: May 30, 2007.

Responses to Questions for the Record, Hearing on JPDO and the Next Generation Air Transportation System: Status and Issues. GAO-07-918R. Washington, D.C.: May 29, 2007.

Next Generation Air Transportation System: Status of the Transition to the Future Air Traffic Control System. GAO-07-748T. Washington, D.C.: May 9, 2007.

Joint Planning and Development Office: Progress and Key Issues in Planning the Transition to the Next Generation Air Transportation System. GAO-07-693T. Washington, D.C.: March 29, 2007.

Next Generation Air Transportation System: Progress and Challenges in Planning and Implementing the Transformation of the National Airspace System. GAO-07-649T. Washington, D.C.: March 22, 2007.

Next Generation Air Transportation System: Progress and Challenges Associated with the Transformation of the National Airspace System. GAO-07-25. Washington, D.C.: November 13, 2006.

Aviation and the Environment: Strategic Framework Needed to Address Challenges Posed by Aircraft Emissions. GAO-03-252. Washington, D.C.: February 28, 2003.

Aviation and the Environment: Transition to Quieter Aircraft Occurred as Planned, but Concerns about Noise Persist. GAO-01-1053. Washington, D.C.: September 28, 2001.

Aviation and the Environment: Aviation's Effects on the Global Atmosphere Are Potentially Significant and Expected to Grow. GAO/RCED-00-57. Washington, D.C.: February 18, 2000.

Aviation and the Environment: Results from a Survey of the Nation's 50 Busiest Airports. GAO/RCED-00-222. Washington, D.C.: August 30, 2000.

Aviation and the Environment: Airport Operations and Future Growth Present Environmental Challenges. GAO/RCED-00-153. Washington, D.C.: August 30, 2000.
Originally established as the Sky Marshal program in the 1970s to counter hijackers, the Federal Air Marshal Service (FAMS) had its mission and workforce expanded by the Aviation and Transportation Security Act, which responded to the September 11, 2001, terrorist attacks by mandating the deployment of federal air marshals on flights presenting high security risks. Within the 10-month period immediately following September 11, 2001, the number of air marshals grew significantly. Also, during the years following the 2001 attacks, FAMS underwent various organizational transfers. Initially, FAMS was transferred within the Department of Transportation from the Federal Aviation Administration to the newly created Transportation Security Administration (TSA). In March 2003, FAMS moved, along with TSA, to the newly established Department of Homeland Security (DHS). In November 2003, FAMS was transferred to U.S. Immigration and Customs Enforcement (ICE). Then, about 2 years later, in the fall of 2005, FAMS was transferred back to TSA.

FAMS is one layer among multiple layers of aviation security. For example, prospective passengers are prescreened against applicable records in the Terrorist Screening Center's consolidated watch list, and passengers and baggage are physically screened. Air marshals generally are characterized as the last line of defense within this layered aviation-security framework. In this regard, FAMS officials stressed that air marshals constitute the only in-flight security layer deployed on the basis of risk.

FAMS deploys thousands of federal air marshals to a significant number of daily domestic and international flights. In carrying out this core mission, air marshals are deployed in teams to various passenger flights. Such deployments are based on FAMS's concept of operations, which guides the agency in its selection of flights to cover. Once flights are selected for coverage, FAMS officials stated that they must schedule air marshals based on their availability, the logistics of getting individual air marshals in position to make a flight, and applicable workday rules. At times, air marshals may have ground-based assignments. On a short-term basis, for example, air marshals participate in Visible Intermodal Prevention and Response (VIPR) teams, which provide security nationwide for mass transit systems other than aviation. Air marshals also participate in Joint Terrorism Task Forces led by the Federal Bureau of Investigation.

Good marksmanship is considered a necessity for air marshals, particularly given the unique environment of the core mission—the relatively tight confines of an airplane, coupled with the presence of numerous passengers ("bystanders") and the possibility of air turbulence that creates an unstable "shooting platform" (see fig. 1). Thus, according to TSA, air marshals are held to the highest marksmanship standard in the federal government and must be recertified on their firearms every quarter. To preserve their anonymity on covered flights, air marshals are to blend in with other passengers by dressing appropriately and performing their duties discreetly, without drawing undue attention.

FAMS's operational approach (concept of operations) is based on risk-related factors, such as assessments of threat, vulnerability, and consequences. FAMS is guided by the provisions of the Aviation and Transportation Security Act that specify the deployment of federal air marshals on flights presenting high security risks, such as the nonstop, long-distance flights targeted on September 11, 2001.
FAMS seeks to maximize coverage of high-risk flights by establishing coverage goals for 10 targeted critical flight categories. To reach these coverage goals, FAMS uses a scheduling process to determine the most efficient flight combinations that will allow air marshals to cover the desired flights. FAMS management officials stressed, however, that the overall coverage goals and the corresponding flight schedules of air marshals are subject to modification at any time based on changing threat information and intelligence.

Following the attacks of September 11, 2001, FAMS developed a risk-based concept of operations for deploying air marshals on U.S. commercial passenger air carriers. Because there are many more U.S. air carrier flights each day than can be covered by air marshals, FAMS relies on the methodology outlined in its concept of operations to assign air marshals to the flights with the highest security risks. Under this approach, FAMS considers the following risk-related factors to categorize each of the approximately 29,000 domestic and international flights operated daily by U.S. commercial passenger air carriers as high risk or lower risk:

Threat (intelligence): Available strategic or tactical information affecting aviation security is considered.

Vulnerabilities: Although FAMS's specific definition is deemed to be sensitive security information, DHS defines "vulnerability" as a physical feature or operational attribute that renders an entity open to exploitation or susceptible to a given hazard.

Consequences: FAMS recognizes that flight routes over certain geographic locations involve more potential consequences than other routes.

FAMS attempts to assign air marshals to provide an on-board security presence on as many of the flights in the high-risk category as possible. However, other considerations can make covering only high-risk flights impractical from a scheduling perspective and potentially predictable to an adversary. Therefore, for purposes of scheduling efficiency and adversary uncertainty, FAMS may deploy some air marshals on lower-risk flights.

FAMS has established a scheduling process intended to maximize the coverage of high-risk flights and meet the agency's desired coverage goals for the 10 targeted critical flight categories. FAMS's Domestic Planning Branch (within the Systems Operation Control Division) is responsible for scheduling air marshals for domestic missions. During the course of a year, the branch must prepare schedules for 13 roster periods of 28 days each, and according to FAMS officials, each 28-day schedule takes approximately 3 weeks to prepare. The branch prepares each domestic schedule using an automated scheduling tool. As part of the scheduling process, each FAMS field office is responsible for making available a specific percentage of its air marshals on a daily basis to cover targeted critical flights (both domestic and international) in the roster periods. FAMS uses the automated scheduling tool to determine the most efficient flight "pairings" of departure and return flights that will bring an air marshal back to his or her starting point and remain within the parameters for mission assignment and rest. FAMS officials also perform other checks on the fairness or appropriateness of the schedules, such as ensuring that certain flights are not covered repeatedly by the same air marshals.
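FAMS's actual risk model and scheduling tool are sensitive security information and are not described here. The sketch below is only a generic illustration, using invented data, of the two ideas just described: scoring flights on threat, vulnerability, and consequence, and then assigning a limited number of teams mostly to the highest-scoring flights while reserving some capacity for randomly chosen flights to reduce predictability.

# Illustrative only: a generic risk-ranked coverage sketch, not FAMS's
# methodology. All flight data and parameters are hypothetical.
import random

def risk_score(flight):
    # Hypothetical 0-1 factors multiplied together; a real model is far richer.
    return flight["threat"] * flight["vulnerability"] * flight["consequence"]

def schedule(flights, teams_available, random_fraction=0.25, seed=1):
    # Cover the highest-risk flights first, but reserve a share of teams for
    # randomly chosen lower-ranked flights so coverage is not fully predictable.
    rng = random.Random(seed)
    ranked = sorted(flights, key=risk_score, reverse=True)
    n_random = int(teams_available * random_fraction)
    covered = ranked[:teams_available - n_random]
    pool = ranked[teams_available - n_random:]
    covered += rng.sample(pool, min(n_random, len(pool)))
    return covered

# Toy data: 10 flights with made-up risk factors; 4 teams available.
gen = random.Random(0)
flights = [{"id": f"FL{i:03d}",
            "threat": gen.random(),
            "vulnerability": gen.random(),
            "consequence": gen.random()} for i in range(10)]

for f in schedule(flights, teams_available=4):
    print(f["id"], round(risk_score(f), 3))

A real scheduler would also have to satisfy the pairing, rest, and field-office availability constraints described above, which is why an automated tool is used.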
FAMS officials noted that the schedules for deploying air marshals are altered as needed to cover specific, high-threat flights. For example, in August 2006, FAMS increased its coverage of international flights in response to the discovery, by authorities in the United Kingdom, of specific terrorist threats directed at flights from Europe to the United States. However, the officials added that a shift in resources of this type can have consequences because of the limited number of air marshals, and they noted that international missions require more resources than domestic missions, partly because the trips are of longer duration.

In its 2003 Program Assessment Rating Tool (PART) review of FAMS, the Office of Management and Budget (OMB) concluded that an independent evaluation should be conducted to assess FAMS's performance related to aspects of the agency's concept of operations—particularly aspects involving flight coverage risk categories, the distribution of covered flights, and target levels of coverage. The Homeland Security Institute, a federally funded research and development center, performed this evaluation and issued a final report in July 2006. The report concluded that FAMS's approach for achieving its core mission of providing an on-board security presence on flights, as detailed in the agency's concept of operations, was reasonable, and it made several recommendations for enhancements. For example, the Homeland Security Institute recommended that FAMS increase randomness or unpredictability in selecting flights and otherwise diversify the coverage of flights within various risk categories. As of October 2008, FAMS had implemented or had ongoing efforts to implement the recommended enhancements.

In its July 2006 report, the Homeland Security Institute specifically noted the following regarding FAMS's overall approach to flight coverage: FAMS applies a structured, rigorous approach to analyzing risk and allocating resources; the approach is reasonable and valid; and no other organizations facing comparable risk-management challenges apply notably better methodologies or tools. As part of its evaluation methodology, the Homeland Security Institute examined the conceptual basis for FAMS's approach to risk analysis. The institute also examined FAMS's scheduling processes and analyzed outputs in the form of "coverage" data reflecting when and where air marshals were deployed on flights. Further, the Homeland Security Institute developed and used a model to study the implications of alternative strategies for assigning resources. We reviewed the Homeland Security Institute's evaluation methodology and generally found it to be reasonable. In a 2008 PART reassessment of FAMS, OMB also reported that the Homeland Security Institute's evaluation employed quality evaluation methods and was comprehensive in scope. Further, OMB noted that an interagency steering group—which was convened by the Homeland Security Institute and met in conference in April 2006—had also reviewed FAMS's concept of operations and considered it reasonable. In addition to FAMS and Homeland Security Institute participants, the interagency steering group consisted of representatives from various law enforcement and counterterrorism agencies, including the Federal Bureau of Investigation, U.S. Customs and Border Protection, the Transportation Security Administration, the Federal Aviation Administration, the Homeland Infrastructure Threat and Risk Assessment Center, U.S. Northern Command/North American Aerospace Defense Command, and the National Counterterrorism Center.
In its July 2006 report, the Homeland Security Institute made several recommendations for enhancing FAMS's approach for deploying air marshals on flights. As presented in table 1, FAMS had implemented or had ongoing efforts to implement all of the recommended enhancements as of October 2008. In reference to the core mission of FAMS, the Homeland Security Institute's recommendations regarding two processes—the filtering process for selecting flights and the allocation process for assigning air marshals to flights—are particularly important. To address the institute's recommendations, FAMS officials stated that a broader approach to filtering flights has been implemented—an approach that opens up more flights for potential coverage, provides more diversity and randomness in flight coverage, and extends flight coverage to a variety of airports.

The Homeland Security Institute's ongoing work has also resulted in two reports delivered to FAMS in July 2008. One of the reports detailed the institute's analysis regarding requirements for a next-generation mission scheduling tool for FAMS, and the other report presented the institute's benchmark analysis that compared FAMS's workday rules and practices against those of similar occupations involving frequent air travel and the related operational challenges, including fatigue and other human factors. Also, in September 2008, the Homeland Security Institute provided FAMS a classified report assessing the deterrent effects of the agency's approach to flight coverage. Further, based on its continuing work, the institute expects to provide FAMS one additional final report by the end of calendar year 2008—a report regarding potential enhancements to performance measures.

To identify and address issues affecting the ability of its workforce to successfully carry out its mission, FAMS has implemented various communication-oriented processes and initiatives—including 36 issue-specific working groups—that have produced some positive results. For instance, FAMS has revised and documented certain policies—including the policy related to aircraft check-in and boarding procedures—to better protect air marshals' anonymity. In addition, FAMS has modified its mission scheduling processes and implemented a voluntary lateral transfer program to address certain issues regarding air marshals' quality of life—and has plans to further address health issues associated with varying work schedules and frequent flying.

As an additional initiative to help determine the effectiveness of management's actions to address issues affecting air marshals, FAMS conducted a workforce satisfaction survey of all staff in late fiscal year 2007. A majority (79 percent) of the respondents to the survey indicated that there had been positive changes from the prior year, although the overall response rate (46 percent) constituted less than half of the workforce. The 46 percent response rate was substantially less than the 80 percent rate encouraged by OMB in its guidance for federal surveys that require its approval. According to the OMB guidance, a high response rate increases the likelihood that the views of the target population are reflected in the survey results. Obtaining a higher response rate to FAMS's future surveys, which the agency plans to conduct every 2 years, and modifying the structure of some questions could enhance the surveys' potential usefulness by, for instance, providing a more comprehensive basis for assessing employees' attitudes and perspectives.
All 67 of the air marshals we interviewed in 11 field offices attributed progress under these efforts largely to the "tone at the top," particularly the commitment exhibited by the former FAMS Director who served in his position from March 2006 to June 2008. To reinforce a shared vision for workforce improvements and sustain forward progress, the current FAMS Director has expressed a commitment to continuing applicable processes and initiatives. Our prior work has shown that leading organizations commonly sought their employees' input on a periodic basis—by, for example, establishing working groups or task forces, convening focus groups, and conducting employee satisfaction surveys—and used that input to adjust their human capital approaches.

Starting in March 2006, the then-serving FAMS Director implemented several communication processes or initiatives to better understand and address issues facing the agency's workforce. Chief among these were issue-specific working groups established to study, analyze, and address a variety of issues ranging from mission, organizational, and operational topics to workforce satisfaction and quality-of-life concerns. Initially, based on his knowledge of issues facing the organization when he assumed the leadership position in March 2006, the FAMS Director established 12 working groups. Subsequently, based on feedback from these initial groups and other sources regarding issues of concern, the number of working groups expanded to 36 (see app. V). Each working group typically included a special agent-in-charge, a subject matter expert, air marshals, and mission support personnel from the field and headquarters. FAMS management directed working group members to define each group's purpose, analyze specific issues, develop short- and long-term recommendations, and determine their financial feasibility. As a final product, FAMS management expected each working group to submit a report, including recommendations, to the FAMS executive staff for managerial consideration. According to FAMS management, the working groups typically disband after submitting a final report. FAMS management stressed, however, that applicable groups could be reconvened or new groups established as needed to address relevant issues.

In addition to the working groups, other processes or initiatives implemented by FAMS management to address workforce issues or otherwise improve management-workforce communication include the following:

Field office focus groups—Each of the 21 FAMS field offices organized a local focus group composed of representatives from the respective office's air marshal squads and at least one mission support staff. All members serve on a rotating basis, and the groups are to meet at least quarterly to discuss issues of concern to the local workforce and bring these issues to the attention of the applicable field office's special agent-in-charge.

Field office visits by the FAMS Director—In 2006, the FAMS Director began visiting field offices and holding informal gatherings with air marshals, outside the presence of local managers, to discuss their questions and concerns.

Listening sessions—FAMS senior management established forums to allow direct communication between FAMS senior management and various personnel. In 2006, the FAMS Director and Deputy Directors conducted these sessions weekly in headquarters and the field offices with a total of 10 to 14 staff selected for each meeting.
In 2007, this format changed from weekly to monthly sessions and included larger groups of FAMS personnel.

Dinners with the Director—In 2006, the FAMS Director began holding weekly dinners to meet with air marshals transiting through the Washington, D.C., area. These dinners provide an opportunity for air marshals to speak personally with the director about any questions or concerns. The FAMS Deputy Director and one assistant director also attend these dinners with selected air marshals.

Director's e-mail in-box—FAMS established an e-mail in-box for agency personnel to provide feedback to the FAMS Director. At any time, air marshals—whether at headquarters, in a field office, or deployed on mission—can send their insights, ideas, suggestions, and solutions to the FAMS Director.

Anonymous Web site—FAMS established an internal Web site for agency personnel to provide anonymous feedback to FAMS management on any topic.

Ombudsman position—FAMS management assigned an air marshal to the position of Ombudsman in October 2006. According to FAMS management, the Ombudsman provides confidential, informal, and neutral assistance to employees to address workplace-related problems, issues, and concerns. FAMS reported that, in fiscal year 2007 (the first full year of the position), the Ombudsman handled 67 cases, and, through the first three quarters of fiscal year 2008, an additional 54 cases.

FAMS officials estimated that, as of October 2008, more than one-fourth of the agency's employees had participated in one or more of these activities, which encompass the various working groups and other processes and initiatives.

Based on input provided by the working groups and information obtained through the other processes and initiatives, FAMS has taken or is planning to take actions to address issues that affect the ability of air marshals to carry out the agency's mission. As discussed in the following sections, these actions address operational issues, such as check-in and boarding procedures that affect air marshals' anonymity, as well as quality-of-life and health issues.

To preserve their anonymity on covered flights, federal air marshals are to blend in with other passengers by dressing appropriately and performing their duties discreetly without drawing undue attention. In past years, air marshals frequently asserted that the check-in and boarding policy and procedures established by FAMS compromised their anonymity by requiring repeated interactions with airline personnel. In September 2005, we reported that the full extent of incidents that air marshals encounter was unknown because FAMS lacked adequate management controls for ensuring that such incidents were recorded, tracked, and addressed. Accordingly, to facilitate management of incidents that affect air marshals' ability to operate discreetly during their missions, our September 2005 report recommended that FAMS take the following four actions:

Develop a means for recording all incidents reported to the Mission Operations Center that affect air marshals' ability to operate discreetly and criteria for determining which incidents require federal air marshals to complete a mission report.

Develop a means for tracking and retrieving data on mission reports to enable FAMS to analyze and monitor reported and systemic incidents.

Establish written policies and procedures for reviewing and addressing reported incidents.

Establish a means for providing feedback on the status and outcome of FAMS mission reports to the federal air marshals who submit them.
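The second recommendation is essentially a data-management requirement: a store of mission reports that supports retrieval and trend analysis. A minimal sketch of such a store—with invented fields and categories, not the schema of FAMS's actual database—might look like this:

```python
# Hypothetical illustration of an incident log supporting trend analysis;
# field names and categories are invented, not FAMS's actual schema.
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class MissionIncident:
    reported: date
    category: str          # e.g., "boarding", "check-in", "equipment"
    discreet_ops_affected: bool
    resolved: bool = False

def quarterly_trends(incidents, year, quarter):
    """Count incidents by category for one quarter, for a management report."""
    months = range(3 * (quarter - 1) + 1, 3 * quarter + 1)
    return Counter(i.category for i in incidents
                   if i.reported.year == year and i.reported.month in months)

log = [
    MissionIncident(date(2008, 7, 2), "boarding", True),
    MissionIncident(date(2008, 8, 15), "boarding", True, resolved=True),
    MissionIncident(date(2008, 9, 1), "equipment", False),
]
print(quarterly_trends(log, 2008, 3))  # Counter({'boarding': 2, 'equipment': 1})
```

A quarterly roll-up of this kind is what allows management to spot patterns—such as repeated boarding incidents at a particular airport—rather than reviewing reports one at a time.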
FAMS has taken steps to address all four of these recommendations and to address the related feedback received from air marshals through various working groups and other initiatives. In October 2005, FAMS issued a written directive establishing policies and procedures for reporting and managing mission incidents. In November 2005, we reported that we had reviewed the directive and believed that it addressed two of our recommendations—the first and third. More recently, in March 2008, FAMS issued an addendum to its written directive establishing a means for providing feedback on the status and outcome of FAMS mission reports to the federal air marshals who submit them (fourth recommendation).

Additionally, FAMS revised its policy and procedures regarding interaction with airline personnel during the check-in and boarding processes in order to better protect the anonymity of air marshals in mission status. To help ensure effective implementation, the new policy and procedures have been incorporated into TSA's Aircraft Operator Standard Security Program, which specifies requirements that domestic passenger air carriers must implement as part of their TSA-approved security programs. According to FAMS officials, the recent update constitutes the first time that the Aircraft Operator Standard Security Program guidance specifically includes a section regarding the boarding of federal air marshals.

Through use of a database created in fiscal year 2006 to track mission incidents, FAMS senior executive staff noted, reported incidents—including those that could compromise the ability of air marshals to operate discreetly—are analyzed and monitored daily (second recommendation). The first management report detailing overall incident patterns and trends was produced in July 2008. Going forward, FAMS officials stated that reports would be produced quarterly to allow management to review patterns or trends regarding mission incidents and the effectiveness of the new policy and procedures.

To further protect the anonymity of air marshals while on missions, and in response to air marshals' feedback and the working groups' recommendations, FAMS management revised the dress code policy and the hotel policy for air marshals in August 2006 and February 2007, respectively. The revisions allow air marshals greater discretion in selecting appropriate attire to wear on missions and choosing hotels for overnight trips. Before the revisions, air marshals reported that the dress code policy was too restrictive and forced them to dress too formally for certain flights, such as those to vacation-oriented destinations. According to the air marshals, this restrictive policy resulted in their standing out from the other passengers, a situation that compromised their anonymity. Similarly, before being revised, FAMS's hotel policy directed air marshals to stay at certain hotels on overnight missions so that they could be located easily by management in an emergency. FAMS management also restricted hotel selection to ensure that air marshals stayed at hotels within per diem rates and had ready access to transportation between the hotel and the airport. Air marshals expressed concerns that repeatedly staying at the same hotels risked exposing their anonymity. The revised policy allows air marshals to select their own hotels, provided the hotels are within per diem rates and have adequate transportation options.
To alleviate concerns of FAMS management about being able to contact air marshals in an emergency, the revised policy requires air marshals to report their hotel locations via the FAMS intranet. All 67 of the air marshals we interviewed in the 11 field offices we visited said that the revised dress code and hotel policies adequately addressed their concerns.

FAMS has described the agency's personal digital assistant (PDA) communication device as a lifeline for air marshals. The current device carried by air marshals is intended to function as a cell phone and personal computer and allow users to place phone calls, access the Internet, send e-mails, pull up basic Microsoft Word documents, store documents, and submit reports. However, the findings of FAMS's applicable working groups indicated that the current PDA communication device has proven unreliable. Similarly, all 67 of the air marshals we interviewed in 11 field offices stated that they had experienced problems with their PDA device while on missions. Examples of problems reported by air marshals included dropped calls or lost signals in certain geographical areas, limited audio quality and durability, and an inability to send certain required documents (such as time and attendance reports). Another reported problem was the frequent freezing or locking of the PDA device, which then required a cumbersome reset process. As a result of such problems, air marshals reported that the PDA device has hindered their ability to communicate effectively with management while in mission status. Additionally, the air marshals we interviewed commented that the current PDA device is relatively large and bulky, which potentially contributes to loss of anonymity.

In response to air marshals' feedback and the working groups' recommendations, FAMS is taking steps to procure new PDA communication devices and distribute them to air marshals. Furthermore, according to FAMS officials, the procurement contract for the new PDA devices will provide for a 2-year replacement cycle. In the interim, to improve voice communication capabilities pending arrival of the new devices, FAMS issued new cell phones to air marshals in June 2008, officials reported. The officials noted, however, that air marshals still must rely on the current PDA device for non-voice functions, such as sending and receiving e-mail messages and documents, until the new PDA devices are available.

In reference to quality-of-life and health issues, mission scheduling constitutes the most significant concern of air marshals, according to feedback that FAMS management received from working groups and other communication processes and initiatives. To be fully effective, air marshals must be healthy, fit, and alert. However, FAMS's Medical Issues Working Group reported that air marshals have experienced various types of health issues—poor physical fitness as well as musculoskeletal injuries and upper respiratory infections—that may potentially be attributable to frequent flying and the overall nature of their jobs. The working group noted various challenges to ensuring that air marshals have adequate sleep, exercise, and recovery time. A contributing factor noted by the working group is that the agency's automated scheduling tool historically has lacked the capability to consistently program an air marshal's daily start and end times throughout a roster period, which makes normal sleep patterns difficult to maintain and often results in fatigue.
For instance, an air marshal may have been scheduled to begin some days at 5 a.m. and other days at 10 a.m., with unpredictable ending times because of flight delays. In addition to inconsistent shifts, the Medical Issues Working Group noted that air marshals are subject to long hours—including arriving home late on a Friday and then having to depart early the following Monday morning. These types of schedules, according to the working group, make allowing adequate time for workouts and maintaining healthy eating habits difficult and also limit the amount of time available to take care of family and personal needs. To address these scheduling issues, FAMS has implemented or is planning to implement various changes:

Mission exchange program—This program, which FAMS initially piloted in 2006 and is now available to all 21 field offices, allows air marshals within the respective field office to exchange mission days based on a demonstrated need, such as medical issues or family-related issues. For instance, an air marshal with an 8 a.m. mission start time and a 9 a.m. medical appointment could exchange shifts with another air marshal for a later mission start time. The program is intended to reduce the amount of unscheduled leave taken by air marshals and otherwise mitigate the hardships or other effects associated with FAMS's current policy of requiring air marshals to submit requests for annual leave 38 to 66 days in advance.

Preset ending time and 60-hour rule—In September 2006, FAMS instituted a change to its mission-scheduling policy. The change is designed to help ensure that air marshals complete their mission flights by a preset time on the day before a regular day off (or the day before scheduled annual leave) and do not begin a new mission until receiving a minimum of 60 hours of rest. For example, if an air marshal's regular days off are Saturday and Sunday and a mission ended on Friday evening, the next mission assignment would begin no earlier than Monday morning—at least 60 hours after the Friday evening ending time.

Limit on number of flight days—In April 2007, FAMS implemented another change in mission-scheduling policy designed to distribute flight days equitably and improve the balance between work and personal life for air marshals. Specifically, under the new policy, each air marshal's total flight days are targeted to not exceed 18 days per roster period and 200 days annually.

More rest time after completing extended international missions—Also in April 2007, FAMS issued guidance to field offices to make every attempt at increasing rest time for air marshals after completing an extended international mission. Under this guidance, air marshals returning from an international mission are to be given a non-flight day as their next duty day when any one of the following three conditions applies: (1) the return flight exceeds 10 hours in the air, (2) the flight crossed the international date line, or (3) the overall mission (round-trip flights plus overnight stays) was 4 days or longer in duration. Depending on an air marshal's schedule, a non-flight day could be a training day, a regular day off, or a non-mission status day.

More consistent start times—FAMS is currently developing a modification to its scheduling tool to provide a consistent, defined scheduling window (encompassing, for example, 3 hours) for air marshals in mission status to report for duty during a 7-day period. Under the planned modification, for instance, FAMS schedulers would assign an air marshal to flights departing during a single window—5:00 a.m.
to 8:00 a.m., 9:00 a.m. to 12:00 p.m., or another 3-hour window—throughout the week. FAMS officials stated that this modification, which is intended to provide more consistent start times for each air marshal throughout the applicable week, should be completed and ready for pilot testing by the middle of calendar year 2009.

The 67 air marshals we interviewed in 11 field offices generally expressed satisfaction with the various enhancements to mission scheduling, although most (43) mentioned that implementation of the mission exchange program was still evolving.

To more specifically address the health implications of flight scheduling, several efforts were recently completed or are planned. For instance, the Homeland Security Institute conducted a benchmark analysis and assessment of fatigue issues related to air marshals and issued a report to FAMS in July 2008. In its analysis, the institute compared FAMS's workday rules against those of other occupations—largely in the aviation realm—that face challenges involving frequent travel, jet lag, long work hours, rotating shifts, and the stress of maintaining a schedule across multiple flights and airports. The Homeland Security Institute noted that although no other occupation is identical to that of air marshals, meaningful comparisons were made with similar occupations, such as commercial airline pilots, cargo pilots, and law enforcement officers working in aviation (e.g., U.S. Marshals Service aviation enforcement officers responsible for transporting prisoners).

In its July 2008 report, the Homeland Security Institute noted that while stress and fatigue issues are a part of all organizations and cannot be entirely eliminated, air marshals are provided considerable blocks of rest within their schedules when assessed against similar occupations. Overall, the institute reported that the results of the benchmark analysis showed that air marshals are provided above-average time to recuperate from duty days. Further, the institute noted that FAMS has taken various steps, including implementation of the mission exchange program, to improve aspects of mission scheduling. In addition, in October 2008, FAMS officials informed us that the agency has funded a contract with the National Institute of Justice to conduct FAMS-specific research regarding mission scheduling, work-rest cycles, fatigue, and performance.
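Because these scheduling rules are stated as crisp thresholds, they are straightforward to encode. In the minimal sketch below, the 60-hour, 18-day, 200-day, and international-rest thresholds come from the policies described above, while the function names and data shapes are hypothetical illustrations rather than FAMS's scheduling tool:

```python
# Illustrative encoding of the scheduling rules described above; the thresholds
# come from the report, but the functions and data shapes are hypothetical.
from datetime import datetime, timedelta

MAX_DAYS_PER_ROSTER = 18    # per 28-day roster period
MAX_DAYS_PER_YEAR = 200
MIN_REST_BEFORE_NEW_MISSION = timedelta(hours=60)  # after a regular day off

def rest_ok(last_mission_end: datetime, next_mission_start: datetime) -> bool:
    return next_mission_start - last_mission_end >= MIN_REST_BEFORE_NEW_MISSION

def earns_nonflight_day(air_hours: float, crossed_dateline: bool,
                        mission_days: int) -> bool:
    """Rest after an extended international mission: any one condition suffices."""
    return air_hours > 10 or crossed_dateline or mission_days >= 4

def within_limits(roster_days: int, annual_days: int) -> bool:
    return roster_days <= MAX_DAYS_PER_ROSTER and annual_days <= MAX_DAYS_PER_YEAR

# A mission ending Friday 8 p.m.: the next start must be no earlier than
# Monday 8 a.m. (60 hours later), matching the example in the text.
print(rest_ok(datetime(2008, 10, 3, 20), datetime(2008, 10, 6, 8)))   # True
print(rest_ok(datetime(2008, 10, 3, 20), datetime(2008, 10, 6, 7)))   # False
print(earns_nonflight_day(air_hours=11, crossed_dateline=False, mission_days=3))  # True
print(within_limits(roster_days=18, annual_days=200))                 # True
```

The harder scheduling problem—assigning thousands of flights while honoring these constraints and the 3-hour start windows—is what the planned tool modification addresses; the rules themselves are simple predicates of this kind.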
According to FAMS officials, air marshals frequently cited the need for a voluntary lateral transfer program during listening group sessions and dinners with the director. FAMS working groups that examined quality-of-life issues also reported that the agency would benefit from implementing a transfer program through which air marshals could express interest in relocating to another field office. Thus, in October 2006, FAMS management implemented a voluntary lateral transfer program. Under the program, an air marshal in good standing may request a transfer to up to three field offices, ranked in order of preference, and FAMS management is to make decisions based on the number of vacancies in each office and the seniority of the air marshals who apply. In December 2006, FAMS announced that 176 air marshals had been selected, during the first phase of the program, for transfer to new offices within 60 days. In the second phase, which occurred in the spring of 2007, FAMS management made transfer offers to 40 air marshals—all of whom accepted. In the third and most recent phase, which occurred in the spring of 2008, FAMS management made offers to 48 air marshals—of whom 45 accepted. FAMS expects to continue offering voluntary transfer opportunities during open seasons in the spring of each year.

In late fiscal year 2007, FAMS conducted a workforce satisfaction survey of all staff—not just air marshals—to help determine issues affecting the ability of agency personnel to perform their jobs and to obtain feedback on the effectiveness of measures already taken by management to address relevant issues. The 2007 survey questionnaire consisted of 60 questions that covered 13 topics: senior leadership; supervisor/management; resources and technology; training and education; career development; policies and procedures; employee involvement and autonomy; rewards and recognition; communication; safety, health, and medical issues; work and family life; organizational commitment; and job satisfaction.

According to FAMS management officials, the survey provided useful information on quality-of-life and other issues affecting the ability of air marshals and other agency personnel to perform their jobs. In addition, the officials reported that survey results indicated that employees generally were pleased with the policy changes and other actions implemented by management to address relevant issues. For example, although the 2007 workforce satisfaction survey had an overall response rate (46 percent) that constituted less than half of the FAMS workforce, 79 percent of the respondents indicated that there had been positive changes from the prior year. Regarding future plans, FAMS expects to administer a workforce satisfaction survey every 2 years. FAMS officials stated that a purpose of the initial workforce satisfaction survey was to establish a baseline for use in comparing the results of future surveys.

In reviewing the 2007 survey's implementation and results, we made several observations that are important for enhancing the potential usefulness of future surveys. First, as noted previously, the overall response rate was 46 percent. FAMS officials expressed satisfaction with this response rate given the highly mobile nature of their workforce. The FAMS officials also noted that the 46 percent response rate was similar to the response rates for other federal workforce satisfaction surveys. However, the 46 percent response rate was substantially less than the 80 percent rate OMB encourages for federal surveys that require its approval. Although internal workforce surveys such as the one conducted by FAMS do not require OMB approval, we believe the OMB standards and guidance provide relevant direction on planning, designing, and implementing high-quality surveys—including the need to obtain a high response rate to increase the potential that survey responses will accurately represent the views of the survey population. Specifically, the OMB guidance stipulates that agencies must design surveys to achieve the highest practical rates of response to ensure that the results are representative of the target population and can be used with confidence as input for informed decision making. OMB encourages agencies to obtain at least an 80 percent response rate, and its guidance states that response rates are an important indicator of the potential for nonresponse bias, which could affect the accuracy of a survey's results.
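The arithmetic behind this guidance is simple. A minimal sketch, using hypothetical employee counts chosen only to reproduce the 46 percent figure:

```python
# A minimal sketch of the response-rate arithmetic discussed above; the 46 and
# 80 percent figures are from the report, the employee counts are hypothetical.
def response_rate(respondents: int, surveyed: int) -> float:
    return 100.0 * respondents / surveyed

def needs_nonresponse_analysis(rate: float, omb_threshold: float = 80.0) -> bool:
    """OMB guidance: analyze possible nonresponse bias when the rate is < 80%."""
    return rate < omb_threshold

rate = response_rate(respondents=1_840, surveyed=4_000)  # hypothetical counts -> 46.0
print(f"{rate:.0f}% response rate")
print("nonresponse-bias analysis required:", needs_nonresponse_analysis(rate))  # True
```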
Survey estimates may be biased, for instance, if the individuals who choose to participate (respondents) differ substantially and systematically from those who choose not to participate (nonrespondents). In general, a higher response rate reduces the potential for such bias, making it more likely that the views and characteristics of the target population are accurately reflected in the survey's results. Thus, for any federal survey that must be approved by OMB, applicable guidelines stipulate that an analysis for possible nonresponse bias must be conducted if the final response rate is less than 80 percent.

Regarding the 46 percent response rate for the 2007 survey, FAMS management reported that an analysis of potential nonresponse bias was conducted by comparing various demographic data provided by the respondents with corresponding data for the FAMS workforce as a whole. Based on this analysis of the available demographic data, FAMS concluded that nonresponse bias did not exist because the respondents were representative of the entire workforce. Although the analysis conducted by FAMS was a useful effort, the potential for nonresponse bias still exists given that over half of the FAMS workforce did not respond to the survey. As noted previously, concerns about nonresponse bias could be avoided or mitigated by obtaining a higher response rate.

FAMS employees were given 3 weeks (August 23 through September 14, 2007) to complete the 2007 workforce satisfaction survey. According to FAMS management, even though all employees (not just nonrespondents) were sent four messages reminding them of the deadline for completing the voluntary survey, the final overall response rate was 46 percent. We believe that other widely acknowledged methods, outlined in OMB guidance, could improve the response rate of future FAMS surveys. These methods include, for example, promoting awareness of the survey through outreach efforts with groups of prospective respondents and extending the cutoff date for responding to the survey. Also, monitoring questionnaire returns and targeting extra follow-up efforts to air marshals in field locations with comparatively low response levels could help.

Additional observations we made in reviewing the 2007 workforce satisfaction survey's questionnaire involve the sentence structure of certain questions and the response options. Generally, any question that combines two or more issues—but does not provide for separate answers—can cause uncertainty about how to respond if the answer to each issue is different. Table 2 lists the seven 2007 workforce satisfaction survey questions that used these types of sentence structures. For instance, regarding the senior leadership of FAMS, question 3 cites two concepts ("visions" and "initiatives") as well as two actions ("shared" and "supported") associated with these concepts; however, the response options did not account for the fact that experiences could differ with each of these concepts and actions. Similarly, question 10 addresses the reliability of equipment used by agency personnel and cites four different devices, but the response options did not account for the fact that experiences could differ with each device. Also, none of the 60 questions in the 2007 workforce satisfaction survey provided for response options such as "not applicable" or "no basis to judge"—responses that would be appropriate when respondents had little or no familiarity with the topic in question.
Omitting response options such as "not applicable" or "no basis to judge" could produce misleading results. Feeling obliged to answer, respondents might select a response such as "neutral" to a question on which they actually have no opinion, either because the question does not apply to them or because they are unfamiliar with its topic. While it might be assumed that everyone surveyed is familiar with the topic of every question, this might not be the case—and it cannot be known unless the questionnaire contains the relevant response options. For example, question 39 (see app. VI) reads as follows: "I am satisfied that the work-related concerns I address with management are addressed appropriately." As written, this question assumes that every employee has raised work-related concerns with management. A respondent who had never done so might not know how to answer, given the question's existing response options. Thus, because of the sentence structure of certain questions and the response options provided, the results from the 2007 survey may give FAMS management an incomplete assessment of employees' perspectives and attitudes.

Regarding our observations on the design of survey questions and response options, FAMS officials stated that limited personnel resources precluded investing more time in development of the survey questionnaire and that the survey had served a useful purpose in providing information on issues of concern to be more fully explored through other communication processes or initiatives. Nonetheless, in developing future survey instruments, designing questions to avoid these types of ambiguities could provide FAMS management with information that is more focused and complete. Although we recognize that FAMS has a variety of other processes and initiatives—in addition to the workforce satisfaction survey—for identifying and addressing workforce issues, such surveys can be particularly useful given that they are distributed to all employees and provide for anonymity of respondents. Further, the design considerations that we discussed involve relatively minor technical aspects that could be addressed with a minimal investment of personnel resources.

As highlighted in our prior work, agency leaders in best practice organizations view people as an important enabler of agency performance and recognize the need for sustained commitment to strategically manage human capital. In developing approaches to managing the workforce, leaders of best practice agencies seek out the views of employees at all levels. Involving employees in the planning process helps agencies to develop goals and objectives that incorporate frontline insights and perspectives about operations. Further, such involvement can also serve to increase employees' understanding and acceptance of organizational goals and objectives and improve motivation and morale. Our work has shown that leading organizations commonly sought their employees' input on a periodic basis and used that input to adjust their human capital approaches. Among other means, the organizations collected feedback by convening focus groups, providing opportunities for employees to participate in working groups or task forces, and conducting employee satisfaction surveys.
As discussed earlier in this report, FAMS has implemented a variety of processes and initiatives to address workforce issues by soliciting the views of front-line staff across the agency. Several key improvements in FAMS policies and procedures have resulted from these efforts. Among other improvements, for example, FAMS amended its policy for flight check-in and boarding procedures to better ensure the anonymity of air marshals in mission status. Also, the various processes and initiatives have helped to improve agency morale, according to the federal air marshals we interviewed. Moreover, agency officials noted that the processes and initiatives represented a significant commitment of management time and resources.

In our view, fostering continued progress in addressing workforce issues at FAMS is important. The current FAMS Director, after being designated in June 2008 to head the agency, issued a broadcast message to all employees expressing a commitment to continue applicable processes and initiatives, including the working group process, listening sessions, field office visits, and the internal Web site for agency personnel to provide anonymous feedback to management on any topic. More recently, in response to our inquiry, FAMS's Chief of Staff reported in October 2008 that the various communication processes and initiatives "have become an institutionalized and positive aspect" of the agency's culture. Also, the Chief of Staff noted that FAMS was in the process of establishing an agencywide national advisory council—with representatives from headquarters and all field offices—to further enhance communication and outreach efforts, promote greater job satisfaction, and improve organizational effectiveness through cooperative problem solving and replication of best practices.

Federal air marshals are an important layer of aviation security. Thus, it is incumbent upon FAMS management to have sound management processes in place for identifying and addressing the challenges associated with sustaining the agency's operations and addressing workforce quality-of-life issues. FAMS, to its credit, has established a number of processes and initiatives—including a workforce satisfaction survey—to address various operational and quality-of-life issues that affect the ability of air marshals and other FAMS personnel to perform their aviation-security mission. Consistent with the human capital practices of leading organizations, the current FAMS Director has expressed a commitment to continuing relevant processes and initiatives for identifying and addressing workforce concerns, maintaining open lines of communication, and sustaining forward progress. Although the workforce satisfaction survey is only one of a number of processes or initiatives used by FAMS to identify and address workforce issues, such surveys play an important role given their agencywide scope and the provision for anonymous responses. A higher response rate and more clearly structured questions and response options could add to the usefulness of this effort.

To facilitate continued progress in identifying and addressing issues that affect the ability of FAMS personnel to perform the agency's aviation-security mission, we recommend that the FAMS Director take appropriate actions to increase the usefulness of the workforce satisfaction surveys that FAMS plans to conduct biennially.
Such actions could include, for example, ensuring that the survey questions and the answer options are clearly structured and unambiguous and that additional efforts are considered for obtaining the highest possible response rates.

We provided a draft of our restricted report for comment to the Department of Homeland Security and TSA. In November 2008, in written comments, the Department of Homeland Security and TSA agreed with our recommendation and noted that FAMS was in the initial stages of formulating the next workforce satisfaction survey, which included plans to implement the recommendation. Also, the Department of Homeland Security and TSA commented that our key findings and recommendation will facilitate continued progress in identifying and addressing issues that affect the ability of FAMS personnel to perform the agency's aviation security mission. The full text of the department's and TSA's written comments is reprinted in appendix VII.

As arranged with your office, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to interested congressional committees and subcommittees. We will also make copies available to others upon request. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix VIII.

This report addresses the following three principal questions:

What is the Federal Air Marshal Service's operational approach for achieving its core mission of providing an onboard security presence for flights operated by U.S. commercial passenger air carriers?

To what extent has the Federal Air Marshal Service's operational approach for achieving its core mission been independently assessed?

To what extent does the Federal Air Marshal Service have processes and initiatives in place to address issues that affect the ability of its workforce to carry out its mission?

Initially, to obtain contextual and overview perspectives regarding the principal questions, we reviewed information available on the Web sites of relevant federal entities—the Department of Homeland Security, the Transportation Security Administration (TSA), and the Federal Air Marshal Service (FAMS). To obtain additional perspectives regarding FAMS's mission and operations—and issues affecting its workforce—we conducted a literature search to identify relevant reports or studies and other publicly available information, including news media articles. In particular, we focused on reviewing congressional studies, Inspector General reports, and our previous reports. These included the following:

U.S. House of Representatives, Committee on the Judiciary, Plane Clothes: Lack of Anonymity at the Federal Air Marshal Service Compromises Aviation and National Security (Washington, D.C.: May 25, 2006).

GAO, Aviation Security: Federal Air Marshal Service Could Benefit from Improved Planning and Controls, GAO-05-884SU (Washington, D.C.: Sept. 29, 2005). The report is restricted (not available to the public) because it contains sensitive security information. The public version of the report is GAO-06-203 (Nov. 28, 2005).
U.S. Department of Homeland Security, Office of Inspector General, Review of Alleged Actions by Transportation Security Administration to Discipline Federal Air Marshals for Talking to the Press, Congress, or the Public, OIG-05-01 (Washington, D.C.: Nov. 2004).

GAO, Budget Issues: Reprogramming of Federal Air Marshal Service Funds in Fiscal Year 2003, GAO-04-577R (Washington, D.C.: Mar. 31, 2004).

GAO, Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed, GAO-04-242 (Washington, D.C.: Nov. 19, 2003).

This report is the public version of a restricted report that we provided to congressional requesters in December 2008. Further details about the scope and methodology of our work regarding each of the three principal questions are presented in the following sections, respectively.

In addressing this topic, we reviewed relevant legislation regarding FAMS's mission and organizational structure. In particular, we reviewed a provision of the Aviation and Transportation Security Act that requires the deployment of federal air marshals on passenger airline flights and specifically requires the deployment of federal air marshals on every flight determined to present high security risks. We analyzed FAMS documentation regarding the agency's strategy and concept of operations for carrying out its mission. Also, we reviewed the results of an evaluation conducted in 2003 by the Office of Management and Budget (OMB), which utilized its Program Assessment Rating Tool (PART) to assess the management and performance of FAMS and concluded that key aspects of program design needed to be independently assessed. Further, we reviewed the follow-on PART-related reassessment of FAMS that OMB conducted in 2008 (see app. II).

We reviewed the July 2006 classified report prepared by the Homeland Security Institute based on its independent evaluation of FAMS's concept of operations. Our engagement team included a social science analyst and an economist with experience in risk assessment, who used generally accepted social science research standards in reviewing the institute's report. Also, we interviewed applicable Homeland Security Institute officials to enhance our understanding of the evaluation's scope, methodology, findings, and recommendations. Based on our review and discussion, we determined this report to be sufficiently reliable for the purposes of our work. Further, we reviewed FAMS documentation—and interviewed the Director of FAMS and other senior officials at the agency's headquarters—regarding the status of efforts to address recommendations made by the Homeland Security Institute and any related initiatives involving strategic planning and the agency's concept of operations.

We also reviewed two additional Homeland Security Institute reports, which FAMS provided to us in September 2008. One of the reports detailed the institute's analysis regarding requirements for a next-generation mission scheduling tool for FAMS, and the other report presented the institute's benchmark analysis that compared FAMS's workday rules and practices against those of similar occupations involving frequent air travel and the related operational challenges, including fatigue and other human factors.
Regarding operational or tactical issues as well as quality-of-life issues that affect the ability of air marshals to carry out the agency's mission, we reviewed published reports, including our September 2005 report (GAO-05-884SU) as well as news media accounts of relevant issues. We also reviewed FAMS documentation regarding the various working groups (see app. V) and other initiatives that FAMS had established to address issues that affect the ability of air marshals to carry out the agency's mission. In particular, we reviewed the final report (if available) produced by the respective working group. For criteria in reviewing the agency's documentation regarding these efforts, we drew on our prior work regarding leading organizations and the best practices for strategically managing human capital.

Further, we interviewed the Director of FAMS and other senior officials at agency headquarters, and we visited 11 of the agency's 21 field offices, where we interviewed managers and a total of 67 air marshals. We selected the 11 field offices and the 67 air marshals based on nonprobability sampling. In selecting the 11 field offices, we considered various factors, such as geographic location of the offices and the involvement of local management in agencywide working groups to address issues affecting air marshals. At each of the 11 field offices, we first reviewed available work-related information about individual air marshals, such as their starting dates with FAMS and their involvement in ground-based assignments or any agencywide working groups. Based on these factors, we selected and interviewed 6 to 7 air marshals at each of the 11 field offices. Specifically, we selected 6 air marshals at each of 10 field offices and 7 air marshals at the remaining office. Our selections were made to encompass a variety of experience levels. Also, at each field office, rather than meeting separately with each individual, we conducted the interviews of the selected air marshals in group settings to encourage a wide array of perspectives, whether corroborating or contradictory. We conducted our interviews at the field offices over a 7-month period, July 2007 through January 2008. Because we selected a nonprobability sample of FAMS field offices to visit and air marshals to interview, the information we obtained in these visits and interviews cannot be generalized either to all 21 field locations or to all air marshals in the offices we visited. However, the visits and interviews provided us a broad overview of issues important to air marshals.

We reviewed documentation regarding the implementation and results of a workforce satisfaction survey that FAMS conducted in 2007. Our engagement team, which included social science analysts with extensive survey research experience, reviewed the questionnaire used in the survey for clarity and the related response options for appropriateness (see app. VI). Also, we discussed with FAMS officials the extent to which efforts were made to obtain an overall response rate as high as possible. As criteria to guide our review of the survey results, we used the following OMB guidance:

Standards and Guidelines for Statistical Surveys (September 2006).

Questions and Answers When Designing Surveys for Information Collections (Jan. 20, 2006).

We conducted this performance audit from April 2007 to December 2008 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Office of Management and Budget's (OMB) Program Assessment Rating Tool (PART) consists of a standard series of 25 questions intended to determine the strengths and weaknesses of federal programs. The 25 questions cover 4 broad topics: (1) program purpose and design, (2) strategic planning, (3) program management, and (4) program results/accountability. This appendix provides an overview of OMB's PART-based assessments of the Federal Air Marshal Service (FAMS) conducted in 2003 and 2008. Additionally, for each of the 25 questions used in the PART-based assessments, the appendix compares OMB's 2003 and 2008 answers and summarizes OMB's narrative findings (explanation and evidence). Also, when the answers in 2003 and 2008 differed for a particular question, the appendix briefly explains the basis for the respective answer. More detailed information regarding the 2003 and 2008 OMB PART assessments of FAMS can be found on OMB's Web site: www.ExpectMore.gov.

OMB's PART assessments of federal agencies provide performance ratings that indicate how effectively tax dollars are spent. Following an assessment, OMB assigns an agency one of five possible overall ratings:

Effective. Programs rated "effective" set ambitious goals, achieve results, are well managed, and improve efficiency.

Moderately effective. A "moderately effective" rating indicates a program that sets ambitious goals and is well managed but needs to improve its efficiency or address other problems in the program's design or management in order to achieve better results.

Adequate. An "adequate" rating describes a program that needs to set more ambitious goals, achieve better results, improve accountability, or strengthen its management practices.

Ineffective. An "ineffective" rating indicates a program that fails to use tax dollars effectively and is unable to achieve results because of a lack of clarity regarding the program's purpose or goals, poor management, or some other significant weakness.

Results not demonstrated. A "results not demonstrated" rating indicates that a program has been unable to develop acceptable performance goals or collect data to determine whether it is performing.

In OMB's 2003 PART assessment, FAMS received a rating of "results not demonstrated" because at that time FAMS did not have measurable results. Additionally, OMB cited strategic planning deficiencies that included the absence of baselines, targets, and time frames associated with performance goals and performance measurements. OMB further noted the absence of a second long-term outcome measure, proxy measures with respect to deterrence, and an efficiency measure.

In OMB's 2008 PART assessment, FAMS received a rating of "moderately effective." Regarding the improved rating, OMB recognized the Homeland Security Institute's independent evaluation, which endorsed FAMS's concept of operations. Also, OMB noted that FAMS had addressed other deficiencies by developing the following performance measures:

A second long-term outcome measure—the level of public confidence in air marshals' ability to promote aviation security—which is reflective of FAMS's purpose.
Proxy measures of deterrence, such as air marshals' average annual rate of accuracy in firearms requalification testing.

Efficiency measures, such as (a) cost per flight per air marshal and (b) percentage of air marshals meeting the targeted number of flying days per year.

The results of PART's 25 questions in reference to the 2003 and 2008 assessments of FAMS are presented in tables 3 through 6, specifically:

The 5 questions in table 3 cover program purpose and design.

The 8 questions in table 4 cover strategic planning.

The 7 questions in table 5 cover program management.

The 5 questions in table 6 cover program results/accountability.

As presented in tables 3 through 6, the narrative discussion (explanation and evidence) is our summary of OMB's key points. If needed for purposes of clarifying the respective topic or ensuring accuracy, we used additional or alternative wording to summarize OMB's findings. Also, in a few instances, we updated the information as appropriate.

The Homeland Security Institute, a federally funded research and development center, was established to assist the Department of Homeland Security in addressing relevant issues that require scientific, technical, and analytical expertise. The institute—after conducting an evaluation of the Federal Air Marshal Service's approach for achieving the agency's core mission of providing an onboard security presence for flights operated by U.S. commercial passenger air carriers—issued its final report in July 2006. This appendix presents quoted excerpts that substantially replicate the executive summary in the Homeland Security Institute's July 2006 report.

“The Federal Air Marshal Service (FAMS) challenge to reduce risk in the aviation domain is daunting. U.S. commercial passenger carriers make roughly 28,000 domestic and international flights each day. These flights canvas the globe and originate, terminate, or fly in proximity to thousands of critical facilities. The FAMS must evaluate which flights it will defend and to what extent. It cannot cover every flight.

“In response to an Office of Management and Budget (OMB) direction, the FAMS asked HSI for an independent evaluation of its methods for analyzing risk and allocating resources. In particular, it asked HSI to determine if its risk management processes and the application of its concept of operations (CONOPS) to scheduled commercial flights were valid.”

“We defined ‘validation’ as a test of whether or not the FAMS risk management processes and the outcome of those processes are reasonable and consistent externally with stated guidance and internally with its own CONOPS. Our analysis involved three tasks. First, we examined the conceptual basis for the FAMS approach to risk analysis. Second, we examined the FAMS scheduling process and analyzed the output of that process in the form of ‘coverage’ data, i.e., when and where air marshals were deployed on flights. Third, we developed and employed a basic quantitative model to study the implications of alternative strategies for assigning resources.”

“Based on our analysis, we find that the FAMS applies a valid approach to analyzing risk and allocating resources. In particular, its approach is reasonable given the scarcity of resources and the guidance it has received. It assesses risk as a function of threat, vulnerability, and consequence and employs a filtering process along with an allocation tool to optimize resource allocation.
Moreover, the FAMS seeks to strengthen risk management processes by improving its scheduling tools and analytical techniques. We did not find any other organizations that face a similar challenge and apply significantly better methodologies or tools.”

“During our analysis, we identified five issues that the FAMS should address itself or in conjunction with the Transportation Security Administration (TSA), the Department of Homeland Security (DHS), and the broader intelligence and security communities.”

“The FAMS definition of vulnerability … is inconsistent with traditional risk-based definitions, which focus on the probability that an attack will succeed. It shifts the focus away from other potential vulnerabilities. We recommend that the FAMS reconsider its approach to vulnerability and engage the aviation security community on this issue.” (The emphasis is in the original.)

“The FAMS understanding of consequence and its subsequent ‘filtering’ process … bias its allocation decision. To focus limited resources, the FAMS filters flights according to … .

“Guidance in the form of legislation and departmental memoranda following 9/11 directed FAMS to focus on flights that present ‘high security risks.’ But, ultimately, that guidance was ambiguous and could be outdated. These fundamental assumptions concerning risk, on which it allocates resources, warrant interagency review by the broader intelligence and security community.” (The emphasis is in the original.)

“The FAMS filtering process defines ‘high risk’ and directs its efforts toward flights fitting those characteristics. Its allocation process—a modified version of the SABRE software used by airlines to schedule flight crews—attempts to cover the maximum number of high risk flights within fixed resources. … The scheduling tool requires manual involvement to recognize and modify scheduling solutions, which may not be consistent with effective risk reduction.

“Our analysis of one month of FAMS coverage data reveals … [some concerns.] To compensate for a lack of resources and deny predictability, the FAMS should integrate randomness into its allocations.” (The emphasis is in the original.)

“Contrary to the popular use of the term ‘random,’ allocating resources in such a way does not mean choosing them haphazardly or without a plan. The overall probability distribution for a group of comparable aircraft can be chosen based on risk analysis. For instance, the FAMS may choose to cover flights in and out of [a particular geographic region]. But the tactical allocation decision concerning a specific flight must be random and converge around the overall category average over time. A terrorist group may be able to discern the overall category average through effective, long-term surveillance but will never know conclusively whether or not the flight it plans to hijack will be covered on a particular day.”

“The FAMS primary performance measure—average coverage rates—can mask weaknesses in coverage patterns. In particular, they can mask a situation in which certain flights within a category of comparable flights are heavily covered while others are rarely if ever covered. Accordingly, the FAMS should develop performance measures to track coverage consistency. One example involves tracking coverage deviation, defined as the average difference between the individual coverage rates of each flight in a comparable category and the overall category coverage rate.” (The emphasis is in the original.)
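Both the randomized-allocation idea and the proposed coverage-deviation measure quoted above lend themselves to a compact illustration. The following sketch uses hypothetical flights, rates, and a toy simulation—not FAMS data or the institute's model:

```python
# Illustrates two ideas from the excerpts above: probabilistic flight coverage
# that converges on a category-level rate, and the recommended coverage-
# deviation measure. All flights and rates are hypothetical.
import random
from statistics import mean

def allocate(flights, category_rate, rng):
    """Cover each flight at random so that long-run coverage matches the
    target category rate while any single day remains unpredictable."""
    return {f: rng.random() < category_rate for f in flights}

def coverage_deviation(per_flight_rates):
    """Average absolute difference between each flight's coverage rate and
    the overall category rate (the institute's suggested measure)."""
    category = mean(per_flight_rates.values())
    return mean(abs(r - category) for r in per_flight_rates.values())

rng = random.Random(7)
flights = ["F1", "F2", "F3", "F4"]
days = 1000
covered = {f: 0 for f in flights}
for _ in range(days):
    for f, hit in allocate(flights, category_rate=0.3, rng=rng).items():
        covered[f] += hit
rates = {f: covered[f] / days for f in flights}
print(rates)                       # each rate near 0.3
print(coverage_deviation(rates))   # near 0.0 -> consistent coverage
```

A low coverage deviation indicates that coverage is spread evenly across comparable flights; an average coverage rate alone could hide a pattern in which a few flights are always covered and the rest never are.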
"During the course of our analysis, we noted that FAMS decision makers did not have a tool for evaluating the resource implications of different threat scenarios and alternative coverage schemes. The SABRE scheduler is not flexible enough to support quick-response analyses. The FAMS should build a simple decision-support tool, along the lines of the model we developed based on risk balancing, to facilitate a system-wide view of resource decisions." (The emphasis is in the original.)

"Such a tool would augment, not replace its scheduling tool, by allowing decision makers to look across the entire aviation system and investigate the resource implications of alternative allocation scenarios. In other words, how easily could the FAMS adapt to a different threat environment? Could it significantly increase the number of marshals aboard specific flights? Where might those resources come from? How would these changes affect coverage elsewhere?"

In 2003, the Office of Management and Budget (OMB) used its Program Assessment Rating Tool (PART) to assess the management and performance of the Federal Air Marshal Service (FAMS). At that time, a key performance measure for assessing FAMS was based on coverage of targeted critical flights under various risk categories. This measure is still applicable; however, its designation has been changed from an output measure to an outcome measure. Moreover, this performance measure—the coverage of targeted critical flights—is now considered a proxy indicator regarding air marshals' ability to defeat an attempted attack. Also, in further response to the findings of OMB's 2003 assessment, FAMS established two additional outcome measures, one of which serves as another proxy indicator of air marshals' ability to defeat an attempted attack:

The additional proxy outcome measure is the average annual rate of accuracy in air marshals' firearms requalification testing.

The additional, non-proxy outcome measure is based on a national survey of households to determine the level of public confidence in air marshals' ability to promote aviation security.

These updated measures have been approved by the Department of Homeland Security—and also were approved in 2008 by OMB during its PART-based reassessment of FAMS. An overview of FAMS's updated performance measures is presented in table 7.

In March 2006, the Director of the Federal Air Marshal Service (FAMS) communicated to employees his intention to establish working groups to examine a variety of issues ranging from mission, organizational, and operational topics to workforce satisfaction and quality-of-life concerns. Two months later, in May 2006, the director communicated to FAMS employees that 12 working groups had been established, each chaired by a field office special agent-in-charge (SAC), and that subject matter experts from the field and headquarters were available to assist in an advisory role. Subsequently, the number of working groups increased to a total of 36. Table 8 categorizes the 36 working groups and briefly summarizes the purposes of each. Also, regarding the status of the 36 working groups as of October 1, 2008, FAMS officials reported the following (see notes to table 8):

18 working groups (table note a): Each of these working groups had completed its work and given a final report to FAMS management. Each report had been reviewed by FAMS executives and then distributed to agency employees via a broadcast message from the FAMS Director.
If applicable, the broadcast message also presented management's responses to any recommendations made by the respective report.

9 working groups (table note b): Each of these working groups had completed its work and given a final report to FAMS management. The reports were undergoing review by FAMS executives.

5 working groups (table note c): Each of these working groups had yet to complete its work and give a final report to FAMS management.

4 working groups (table note d): Each of these working groups is to remain ongoing. As such, final reports are not expected to be issued; rather, each group will present its findings when applicable and by appropriate means.

Appendix VI: FAMS Workforce Satisfaction Survey

On behalf of the Director and the Office of Law Enforcement/FAMS Office of Workforce Planning and Management, thank you for taking the time to complete this survey. The purpose of this survey is to capture information regarding workforce satisfaction at OLE/FAMS. The information obtained by this survey will be used by OLE/FAMS leadership to assess the current levels of workforce satisfaction for the purposes of planning, policy development, and program enhancement. In addition, the data obtained by this survey will be used to evaluate current OLE/FAMS workforce satisfaction initiatives and strategies. Please take the time to carefully complete this survey. Your input will remain confidential and is vital to making OLE/FAMS a premier law enforcement organization. Thank you for your help.

For each item, choose the response that best reflects your experience at OLE/FAMS. (Response options: SA = Strongly Agree, A = Agree, N = Neutral, D = Disagree, SD = Strongly Disagree.)

9. I have adequate equipment, supplies, and materials to accomplish my duties.
10. Generally, the equipment I use (e.g., firearm, computer, cell phone, PDA, etc.) to perform my job works properly.
11. The equipment I use is sufficiently easy to operate.
12. I am generally satisfied with the quality of OLE/FAMS physical facilities (e.g., workspaces, training facilities, physical fitness areas, firearms ranges, etc.).
13. I am generally satisfied with the availability of OLE/FAMS physical facilities (e.g., workspaces, training facilities, physical fitness areas, firearms ranges, etc.).
14. I receive the training I need to do my job.
15. I am satisfied with the frequency/amount of training I receive in my office.
16. Generally, I am satisfied with the content and variety of job-related training I receive in my office.
17. I am satisfied with the continuing education opportunities offered by my job.
18. OLE/FAMS supports continuing education opportunities relevant to my job.
19. My work schedule affords me the opportunity to pursue continuing education.
20. I am encouraged by my supervisors and managers to seek training and educational opportunities.

Career Development
21. Overall, I am satisfied with the progress I have made toward my career goals.
22. There are sufficient opportunities for career advancement at OLE/FAMS.
23. In my present position, I have a clearly understood career path.
24. Promotions to supervisory levels in OLE/FAMS are based on merit.
25. In OLE/FAMS, the selection criteria for promotion are clear.

Policies & Procedures
26. OLE/FAMS' written policies support (and do not hinder) mission accomplishment.
27. The local policies and procedures of my office support mission accomplishment.
28. I am able to stay updated and am informed about the latest policies and procedures.

Employee Involvement & Autonomy
29. I have effective channels to voice my opinion regarding work-related issues (e.g., working groups, listening sessions, e-mail suggestion box, etc.).
30. I am empowered to use my professional discretion in daily execution of my duties.
31. I am provided sufficient opportunities to participate in important decisions affecting my work.

Rewards & Recognition
32. Outstanding performance is recognized in my office.
33. I am satisfied with the promotion practices of OLE/FAMS.
34. In my office, monetary rewards (i.e., cash awards, in-position increases, etc.) are tied to performance.
35. I am generally satisfied with my pay.

Communication
36. OLE/FAMS' policies and procedures are clearly communicated and easy to understand.
37. I am satisfied with communication within my office (e.g., FO, branch, division, directorate).
38. There are mechanisms in place which allow me to freely express my comments, concerns, and suggestions without fear of retaliation.
39. I am satisfied that the work-related concerns I address with management are addressed appropriately.
40. I have enough information to do my job well.

Safety, Health & Medical Issues
41. I feel that my job-related stress is manageable.
42. I am generally satisfied with OLE/FAMS programs related to employee safety, health, and wellness.
43. I am satisfied that OLE/FAMS management is concerned for the health and safety of employees and is working continuously to offer improved services.
44. I have been provided information and resources to take personal responsibility for my health and wellness as it relates to my job (e.g., proper diet, fitness, sufficient rest).
45. I feel that medical information relevant to my job is communicated to me.

Work & Family Life
46. I am able to effectively balance my work with my personal/family life.
47. My family is supportive of my career with OLE/FAMS.
48. Current initiatives (e.g., Voluntary Lateral Transfer Program, new office openings, etc.) have a positive effect on quality of work life/family life.
49. OLE/FAMS leadership has implemented positive changes affecting scheduling.
50. I am satisfied that OLE/FAMS is exploring initiatives to improve quality of life/family life.
51. I have seen improvement in quality of work life and family life as a result of the recommendations from the Director's Working Groups.
52. I have seen positive changes made in OLE/FAMS in the last year.

Organizational Commitment
53. I am proud to work for OLE/FAMS.
54. I find my values are similar to OLE/FAMS values.
55. I feel a sense of loyalty to OLE/FAMS.
56. I am likely to stay at OLE/FAMS for the next 12 months.

Job Satisfaction
57. The work I do is important.
58. I find my work challenging and interesting.
59. Generally speaking, I am very satisfied with my job.
60. I like the kind of work I do (e.g., my current duties and assignment).

Comments Section
Please use this section to provide more specific information for any of the above questions.

Reasons for Staying with OLE/FAMS
Indicate the importance of each of the following factors in your reasons for staying with OLE/FAMS.

In addition to the contacts named above, Danny Burton and John Hansen (Assistant Directors) and Michael Harmond (Analyst-in-Charge) managed this assignment. David Alexander, Chuck Bausell, Arturo Cornejo, Wendy Dye, Stuart Kaufman, and Courtney Reid made significant contributions to the work. Tom Lombardi provided legal support. Katherine Davis provided assistance in report preparation.
By deploying armed air marshals onboard selected flights, the Federal Air Marshal Service (FAMS), a component of the Transportation Security Administration (TSA), plays a key role in helping to protect approximately 29,000 domestic and international flights operated daily by U.S. air carriers. GAO was asked to examine (1) FAMS's operational approach or "concept of operations" for covering flights, (2) to what extent this operational approach has been independently evaluated, and (3) the processes and initiatives FAMS established to address workforce-related issues. GAO analyzed documented policies and procedures regarding FAMS's operational approach and a July 2006 classified report based on an independent evaluation of that approach. Also, GAO analyzed employee working group reports and other documentation of FAMS's processes and initiatives for addressing workforce-related issues, and interviewed the FAMS Director, other senior officials, and 67 air marshals (selected to reflect a range in levels of experience). This report is the public version of a restricted report (GAO-09-53SU) issued in December 2008.

Because the number of air marshals is less than the number of daily flights, FAMS's operational approach is to assign air marshals to selected flights it deems high risk--such as the nonstop, long-distance flights targeted on September 11, 2001. In assigning air marshals, FAMS seeks to maximize coverage of flights in 10 targeted high-risk categories, which are based on consideration of threats, vulnerabilities, and consequences. In July 2006, the Homeland Security Institute, a federally funded research and development center, independently assessed FAMS's operational approach and found it to be reasonable. However, the institute noted that certain types of flights were covered less often than others. The institute recommended that FAMS increase randomness or unpredictability in selecting flights and otherwise diversify the coverage of flights within the various risk categories. As of October 2008, FAMS had taken actions (or had ongoing efforts) to implement the Homeland Security Institute's recommendations. GAO found the institute's evaluation methodology to be reasonable.

To address workforce-related issues, FAMS's previous director, who served until June 2008, established a number of processes and initiatives--such as working groups, listening sessions, and an internal Web site--for agency personnel to provide anonymous feedback to management on any topic. These efforts have produced some positive results. For example, FAMS revised its policy for airport check-in and aircraft boarding procedures to help protect the anonymity of air marshals in mission status, and FAMS adjusted its flight scheduling process for air marshals to support a better work-life balance. The air marshals GAO interviewed expressed satisfaction with FAMS efforts to address workforce-related issues. Further, the current FAMS Director, after being designated in June 2008 to head the agency, issued a broadcast message to all employees, expressing a commitment to continue applicable processes and initiatives. Also, FAMS has plans to conduct a workforce satisfaction survey of all employees every 2 years, building upon an initial survey conducted in fiscal year 2007.
Although the 2007 survey indicated positive changes since the prior year, it was answered by 46 percent of the workforce, well short of the 80-percent response rate that the Office of Management and Budget (OMB) encourages for ensuring that results reflect the views of the target population. OMB guidance identifies steps, such as extending the cut-off date for responding, that could improve the response rate of future surveys. Also, several of the 2007 survey questions were ambiguous, and response options were limited. Addressing these design considerations could enhance future survey results.
Shortly after the September 11, 2001, terrorist attacks, Congress passed, and the President signed into law, the Aviation and Transportation Security Act, which established TSA and gave the agency responsibility for securing all modes of transportation, including the nation's civil aviation system, which includes domestic and international commercial aviation operations. In furtherance of its civil aviation security responsibilities, TSA is statutorily required to assess the effectiveness of security measures at foreign airports (1) served by a U.S. air carrier, (2) from which a foreign air carrier serves the United States, (3) that pose a high risk of introducing danger to international air travel, and (4) at other foreign airports deemed appropriate by the Secretary of Homeland Security. This provision of law also identifies measures that the Secretary must take in the event that he or she determines that an airport is not maintaining and carrying out effective security measures based on TSA assessments. See appendix II for a detailed description of the process for taking secretarial actions against a foreign airport. In addition, TSA conducts inspections of U.S. air carriers and foreign air carriers that service the United States from foreign airports pursuant to its authority to ensure that air carriers certified or permitted to operate to, from, or within the United States meet applicable security requirements, including those set forth in an air carrier's TSA-approved security program.

The Secretary of Homeland Security delegated to the Assistant Secretary of TSA the responsibility for conducting foreign airport assessments, but retained responsibility for making the determination that a foreign airport does not maintain and carry out effective security measures. Currently, the Global Compliance Division and Office of International Operations, within TSA's Office of Global Strategies, are responsible for conducting foreign airport assessments. Table 1 highlights the roles and responsibilities of the TSA positions within these divisions that are responsible for implementing the foreign airport assessment program.

TSA assesses the effectiveness of security measures at foreign airports using select aviation security standards and recommended practices adopted by ICAO, a United Nations organization representing 190 countries. ICAO standards and recommended practices address operational issues at an airport, such as ensuring that passengers and baggage are properly screened and that unauthorized individuals do not have access to restricted areas of an airport. ICAO standards and recommended practices also address non-operational issues, such as whether a foreign government has implemented a national civil aviation security program for regulating security procedures at its airports and whether airport officials implementing security controls go through background investigations, are appropriately trained, and are certified according to a foreign government's national civil aviation security program. ICAO member states have agreed to comply with these standards, and are strongly encouraged to comply with ICAO-recommended practices. The ICAO standards and recommended practices TSA assesses foreign airports against are referred to collectively in this report as ICAO standards or standards. See appendix III for a description of the ICAO standards TSA uses to assess security measures at foreign airports.
TSA uses a risk-informed approach to schedule foreign airport assessments by categorizing airports into three tiers. Specifically, Tier 1 airports—airports that are determined to be low risk—are assessed once every 3 years; Tier 2 airports—airports determined to be medium risk—are assessed every 2 years; and Tier 3 airports—those determined to be high risk—are assessed annually. TSA's assessments of foreign airports are conducted by a team of inspectors, which generally includes one team leader and one team member. According to TSA, it generally takes 3 to 7 days to complete a foreign airport assessment. However, the amount of time required to conduct an assessment varies based on several factors, including the size of the airport, the number of air carrier station inspections to be conducted at the airport, the threat level to civil aviation in the host country, and the amount of time it takes inspectors to travel to and from the airport where the assessment will take place.

TSA uses a multistep process to conduct assessments of foreign airports. Specifically, the TSA Representative (TSAR) must obtain approval from the host government to allow TSA to conduct an airport assessment, and schedule the date for the on-site assessment. After conducting an entry briefing with Department of State, host country, and airport officials, the team conducts an on-site visit to the airport. During the assessment, the team of inspectors uses several methods to determine a foreign airport's level of compliance with ICAO standards, including conducting interviews with airport officials, examining documents pertaining to the airport's security measures, and conducting a physical inspection of the airport. For example, inspectors are to examine the integrity of fences, lighting, and locks by walking the grounds of the airport. Inspectors also make observations on access control procedures, such as looking at employee and vehicle identification methods in secure areas, as well as monitoring passenger and baggage screening procedures in the airport. At the close of an airport assessment, inspectors brief foreign airport and government officials on the results of the assessment. TSA inspectors also prepare a report summarizing their findings on the airport's overall security posture and security measures, which may contain recommendations for corrective action and must be reviewed by the TSAR, the regional operations center (ROC) manager, and TSA headquarters officials. See appendix IV for more information on the multistep process TSA uses to conduct its assessments of foreign airports.

Along with conducting airport assessments, the same TSA inspection team also conducts air carrier inspections when visiting a foreign airport to ensure that air carriers are in compliance with TSA security requirements. Both U.S. air carriers and foreign air carriers with service to the United States are subject to inspection. When conducting air carrier inspections, TSA inspectors examine compliance with applicable security requirements, including TSA-approved security programs, emergency amendments to the security programs, and security directives. As in the case of airport assessments, air carrier inspections are conducted by a team of inspectors, which generally includes one team leader and one team member. An inspection of an air carrier typically takes 1 or 2 days, but can take longer depending on the extent of service by the air carrier.
Inspection teams may spend several days at a foreign airport inspecting air carriers if there are multiple airlines serving the United States from that location. During an inspection, inspectors are to review applicable security manuals, procedures, and records; interview air carrier station personnel; and observe air carrier employees processing passengers from at least one flight from passenger check-in until the flight departs the gate to ensure that the air carrier is in compliance with applicable requirements. Inspectors evaluate a variety of security measures, such as passenger processing, checked baggage acceptance and control, aircraft security, and passenger screening. If an inspector finds that an air carrier is not complying with applicable security requirements, additional steps are to be taken to record such instances and, in some cases, pursue them with further investigation.

If the inspectors report that an airport's security measures do not meet minimum ICAO standards, particularly critical standards, such as those related to passenger and checked baggage screening and access controls, TSA headquarters officials are to inform the Secretary of Homeland Security. If the Secretary, based on TSA's airport assessment results, determines that a foreign airport does not maintain and carry out effective security measures, he or she must, after advising the Secretary of State, take secretarial action. See appendix II for a detailed description of the process for taking secretarial actions against a foreign airport.

In 2007, we issued a report on TSA's foreign airport assessment program, including the results of TSA's foreign airport assessments, actions taken and assistance provided by TSA when security deficiencies were identified at foreign airports, TSA oversight of its program, and TSA's efforts to address challenges in conducting foreign airport assessments. Specifically, we reported that TSA's oversight of the foreign airport assessment program could be strengthened. For example, TSA did not have adequate controls in place to track whether scheduled assessments and inspections were actually conducted, deferred, or canceled. TSA also did not always document foreign officials' progress in addressing security deficiencies identified by TSA. Further, TSA did not have outcome-based performance measures to assess the impact of its assessments on the security of U.S.-bound flights. As a result, we recommended that TSA develop controls for tracking and documenting information and establish outcome-based performance measures to strengthen oversight of its foreign airport and air carrier evaluation programs. DHS concurred with the recommendations and has since taken several actions to address them, which we discuss later in our report.

Since 2007, TSA has taken a number of steps to update and streamline its foreign airport assessment program, as discussed below.

TSA revised and updated its Standard Operating Procedures (SOP) for the program. In 2010, TSA revised the SOP, which prescribes program and operational guidance for assessing security measures at foreign airports. TSA also streamlined the assessment process by reducing the number of ICAO standards it assesses foreign airports against from 86 to 40. Of the 40, TSA officials we interviewed told us the agency has identified 22 standards as key for determining an airport's level of security.
In addition, TSA reduced the assessment report writing cycle time for inspectors from 38 calendar days to 20 calendar days, which was intended to expedite the delivery of assessment reports to host governments. This new requirement has helped TSA reduce the time needed to deliver its assessment results to foreign countries, but all 23 inspectors we interviewed told us this requirement was often difficult to meet due to a variety of factors. For example, upon returning from a visit, TSA inspectors reported that they need to document both the airport assessment and air carrier inspections, and plan their next trip, which makes the reduced reporting time requirement difficult to meet. However, the Director of Global Compliance told us that for larger airports with many air carriers, TSA recently began separating the airport assessment and air carrier inspection visits into two separate visits, thus reducing the documentation workload. Moreover, the deadline to submit documentation has been extended for some back-to-back assessment trips in order to provide sufficient time for inspectors to complete the documentation. The Director of Global Compliance also stated that, in fiscal year 2012, all employees will have training opportunities in order to improve writing skills and reduce the amount of time dedicated to editing and rewriting assessments. In addition, to address resource needs we identified in 2007, TSA hired 6 additional international inspectors in 2007 and 10 international cargo inspectors in 2008 and created 25 new international inspector positions, of which 15 were filled as of July 2011. TSA plans to fill the remaining 10 positions by the end of 2011. The Director of Global Compliance stated that the burden of writing and processing assessment reports should be lessened as the agency hires additional inspectors because this will create a greater pool of available inspectors to conduct and document the assessments.

TSA implemented a new risk-informed methodology for prioritizing and scheduling its assessments at foreign airports in 2010. Specifically, TSA now categorizes foreign airports as high, medium, or low risk. Of the roughly 300 foreign airports TSA assesses, TSA identified some airports as high risk and others as medium risk as of August 2011. The remaining airports were deemed low risk. TSA's methodology for determining an airport's risk category is based on the likelihood of a location being targeted (threat), the protective measures in place at that location (vulnerability), and the potential impact of an attack on the international transportation system (consequence). TSA uses current threat information, airport passenger and flight data, and prior airport assessment results to assign each airport a numerical risk score, which is then used to determine its overall risk ranking. As part of this calculation, TSA assigns each airport an overall vulnerability score of 1–5. These scores, or categories, are numerical representations of compliance or noncompliance with the ICAO standards the agency assesses each foreign airport against.
Specifically, using an airport's most recent assessment report, the ROC manager and TSA's Director of Global Compliance assign an overall vulnerability category for each airport based on the following descriptions provided in the 2010 Foreign Airport Assessment Program SOP:

Category 1: Fully Compliant;
Category 2: Capability Exists with Minor Episodes of Noncompliance;
Category 3: Capability Exists, Compliance Is Generally Noted, Shortfalls Remain;
Category 4: Capability Exists, Serious Lack of Implementation; and
Category 5: Egregious Noncompliance.

Once the vulnerability score is determined, it is then combined with each airport's related threat and consequence information to determine its risk category. TSA attempts to assess high-risk airports every year, medium-risk airports once every 2 years, and low-risk airports once every 3 years. TSA's Director of Global Compliance told us this new approach allows the agency to better allocate resources to identify and mitigate security concerns at foreign airports it assesses. In addition, all the TSA ROC managers and 19 of the 23 inspectors we interviewed during our site visits told us that this new foreign airport risk prioritization methodology was an improvement over the previous process. These officials also stated that this new approach has helped them reduce the number of assessments conducted annually, enabling inspectors to better adhere to the annual schedule. On the basis of our analysis, TSA's approach for scheduling foreign airport assessments is consistent with generally accepted risk management principles, which define risk as a function of threat, vulnerability, and consequence.

TSA developed a 2011 strategic implementation plan. This plan establishes annual program objectives and milestones, and links program activities to broader agency aviation security goals, providing a road map for their completion.

TSA began declassifying its foreign airport assessment reports. Since 2007, TSA has been declassifying the reports from Confidential and designating them Sensitive Security Information (SSI) to facilitate better access to and the dissemination of program results, while still providing protection for foreign government information deemed sensitive. TSA officials noted that the declassification of assessment results is essential for TSA because staff could not easily access the specifics of prior results and deficiencies from reports that have not yet been declassified.

TSA formed the Capacity Development Branch (CDB). TSA created the CDB in 2007 to manage all TSA international aviation security capacity building assistance efforts, including requests for assistance in response to a host government's airport assessment results. Through CDB, TSA provides six aviation security training courses that address, among other things, preventive security measures, incident management and response, and cargo security. In 2008, TSA also developed the Aviation Security Sustainable International Standards Team (ASSIST) Program to provide more long-term, sustainable, technical aviation security assistance to select foreign countries. Thus far, TSA has partnered with five countries under the ASSIST program: St. Lucia, Liberia, Georgia, Haiti, and Palau. See appendix V for more specific information on TSA assistance provided to these countries under ASSIST.
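Because the report describes TSA's scoring approach only in general terms (risk as a function of threat, vulnerability, and consequence; a 1-to-5 vulnerability category; and tier-based assessment frequencies), the following minimal Python sketch illustrates that general structure rather than TSA's actual methodology; every weight, threshold, and value below is an invented assumption.

    # Illustrative only: TSA's real scales, weights, and cutoffs are not
    # published in this report, so all numbers here are assumptions.
    ASSESSMENT_CYCLE_YEARS = {"high": 1, "medium": 2, "low": 3}

    def risk_score(threat, vulnerability_category, consequence):
        # threat and consequence are assumed normalized to 0-1;
        # vulnerability_category is the 1-5 rating from the airport's
        # most recent assessment (5 = egregious noncompliance).
        vulnerability = vulnerability_category / 5.0
        # One simple way to realize risk = f(threat, vulnerability,
        # consequence); the actual combination may differ.
        return threat * vulnerability * consequence

    def risk_tier(score, high_cutoff=0.5, medium_cutoff=0.2):
        # Map the numeric score to the high/medium/low categories used
        # to schedule assessments (cutoffs are hypothetical).
        if score >= high_cutoff:
            return "high"
        if score >= medium_cutoff:
            return "medium"
        return "low"

    # Hypothetical airport: elevated threat, category-4 vulnerability,
    # high-consequence hub.
    score = risk_score(threat=0.8, vulnerability_category=4, consequence=0.9)
    tier = risk_tier(score)
    print(tier, "-> assess every", ASSESSMENT_CYCLE_YEARS[tier], "year(s)")

Tying the assessment frequency to the computed tier mirrors the schedule described above: high-risk airports yearly, medium-risk airports every 2 years, and low-risk airports every 3 years.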
TSA developed assessment tracking tools to provide better oversight of program information. In 2007, we reported that TSA did not have controls in place to track the status of scheduled foreign airport assessments, including whether assessments were actually conducted or whether they were deferred or canceled, which could make it difficult for the agency to ensure that scheduled assessments are actually completed. We also reported that TSA did not always document the results of follow-up conducted by TSA staff to determine progress made by foreign governments in addressing security deficiencies identified by TSA inspectors during assessments, and that such follow-up would enable the agency to have access to updated information on the security status of foreign airports that provide service to the United States. In response to the findings and recommendations we made in our 2007 report, TSA implemented a tool to track its annual foreign airport assessment schedule, including reasons why assessments were deferred or canceled, and a tracking sheet to compile the results of its prior airport assessments. Specifically, this sheet documents the frequency with which foreign airports complied with particular categories of ICAO standards, such as passenger screening, baggage screening, and access controls, among others. TSA also developed a tool whereby deficiencies previously identified during an assessment can be tracked to monitor the progress made by host governments in rectifying security deficiencies. TSA's Director of Global Compliance told us these tracking sheets have helped TSA provide better oversight and monitoring of key program information.

TSA signed several working arrangements to facilitate its assessments. Since 2007, TSA has signed a multilateral working arrangement with the European Union (EU), and several bilateral working arrangements with individual foreign nations, to facilitate, among other things, TSA assessments at foreign airports. Specifically, in 2007, we reported that TSA had taken steps toward harmonizing airport assessment processes with the European Commission (EC). As part of these efforts, TSA and the EC established six working groups to facilitate, among other things, sharing of SSI between TSA and the EC, TSA observation of EU airport assessments, as well as EC observation of TSA assessments of airports in the United States. In 2008, TSA signed a multilateral working arrangement with the EU to facilitate joint assessments and information sharing between TSA and the EU. Specifically, under the arrangement, TSA and the EC coordinate assessment schedules annually to identify airport locations at which to conduct joint assessments. EC officials we interviewed told us their main goal under the arrangement was to better leverage resources and reduce the number of TSA visits per year to European airports because of concerns from EU member states about the frequency of visits from EC and U.S. audit teams. TSA officials we interviewed said they also wanted to better leverage existing resources while ensuring continued TSA access to European airports for the purposes of conducting security assessments. While TSA agreed to conduct assessments at EU airports no more than once every 5 years, EU and TSA officials we interviewed said the EC permits TSA to approach a country bilaterally if scheduling conflicts do not allow for an assessment to be conducted jointly. TSA also occasionally conducts table-top reviews in place of on-site airport visits.
Specifically, if the EC inspected an airport within the last 2 years, TSA will sometimes meet with EC officials to review the EC inspection report—referred to as a table top—which typically contains enough information for TSA to make its evaluations. However, TSA officials said table-top reviews should not serve as a permanent substitute for TSA on-site assessments. TSA has also entered into several bilateral working arrangements with foreign countries to facilitate its airport assessments. Specifically, TSA has signed arrangements with Brazil, Germany, India, the United Kingdom, and Russia, and is in the process of establishing arrangements with Nicaragua and Portugal. These arrangements specify certain conditions, practices, and protocols for sharing key information with TSA, but also impose some constraints, such as limiting the number of TSA visits per year and the length of each visit.

Even with TSA's efforts to enhance the program, challenges remain in several areas: gaining access to some foreign airports, developing an automated database to manage program information, prioritizing and providing training and technical assistance, and expanding the scope of TSA's airport assessments to include all-cargo operations, as discussed below.

TSA access to some foreign airports has been limited by sovereignty concerns. In 2007, we reported that some host governments expressed concerns that TSA assessments infringe upon their authority to regulate airports and air carriers within their borders, and that some foreign governments had denied TSA access to their airports. TSA's multilateral and bilateral arrangements have helped to facilitate assessments in some foreign countries, but TSA has had difficulty gaining access to some foreign airports due to sovereignty concerns raised by host governments. For example, TSA has not been able to assess any of the four airports in Venezuela or conduct TSA compliance inspections for air carriers flying out of Venezuela into the United States, including U.S. air carriers, since 2006. Thus, TSA has been unable to determine the security posture of flights from Venezuela bound for the United States. On September 8, 2008, the Secretary of Homeland Security issued a Public Notice informing the public that the U.S. Government is unable to determine whether airports in Venezuela that serve as the last point of departure for nonstop flights to the United States maintain and carry out effective aviation security measures. A TSA official told us that a TSA representative traveled to Venezuela recently to start discussions with the Venezuelan government about TSA regaining access to Venezuelan airports to conduct assessments and air carrier inspections. Since it is unclear what the outcome of these discussions will be, and when TSA will regain access to airports and air carriers in Venezuela, the Public Notice remains in effect. Until TSA is able to regain access to airports and air carriers in Venezuela to conduct assessments and air carrier inspections, the agency will be unable to determine to what extent, if at all, airports in Venezuela are maintaining and carrying out effective security measures, or the extent to which air carriers are complying with TSA security requirements for U.S.-bound flights. The Director of Global Compliance indicated TSA is concerned about sovereignty issues with other foreign countries and their willingness to allow TSA inspectors to assess their airports and air carriers.
TSA has been working on establishing a Memorandum of Understanding with one country to ensure continued TSA access to its airports. Moreover, TSA indicated that working arrangements it developed with two other countries were undertaken to address government sovereignty concerns over TSA's assessments.

TSA has experienced difficulties developing an automated database. Since 2007, TSA has been in the process of trying to develop an integrated, automated database management system to allow for more timely submission of foreign airport assessment results, as well as to perform more substantive analysis and comparisons of foreign airport trends and issues. Specifically, in response to our 2007 recommendations, TSA stated that it was exploring an automated means of capturing foreign airport assessment data to track airport deficiencies identified, corrective actions recommended by TSA, and any resulting actions taken by the host nation. In 2010, TSA field tested a system, called the Foreign Airport Assessment Reporting System (FAARS), which was intended to store results of airport assessments for easier data extraction and manipulation. For example, while airport assessments are currently prepared as Word documents (typically around 60 pages in length), FAARS was intended to put information into database fields, which would have allowed the Office of Global Strategies (OGS) to run reports on specific indicators, such as which foreign airport checkpoints are using Advanced Imaging Technology (AIT) units. However, the Director of Global Compliance told us FAARS ultimately did not meet TSA's needs and was discontinued because, among other things, data entry was cumbersome and certain data fields could not be edited. Further, the database was not web-based and instead had to be installed on users' hard drives, which did not allow for easy integration of multiple users and data. In April 2011, TSA developed a comprehensive functional requirements document, which outlines the capabilities and functions required for a new proposed software solution. TSA officials told us they provided it to officials in TSA's Offices of Acquisition and Information Technology, who developed a contract for developing, testing, fielding, and distributing a software solution that meets program needs. TSA officials told us that the contractor who will develop the product has received the Statement of Work, and initial implementation of the product is planned for fiscal year 2012, with full capability planned to follow in fiscal year 2013. Given these time frames, it will be important for TSA to monitor the status of this effort to ensure a solution is implemented within reasonable time frames, particularly since we raised this issue in our 2007 report and it is still not clear when a solution will be fully vetted and implemented. TSA's Director of Global Compliance also told us that identifying a database management system that meets the needs of the program has been a long-standing challenge.
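The functional requirements document itself is not described in detail in this report, but the kind of record-oriented storage FAARS was meant to provide can be sketched. The following Python fragment is a hypothetical model (all field and function names are invented) of assessment results held in database-style fields, which is what would make one-line indicator reports, such as the AIT example above, possible.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Deficiency:
        icao_standard: str        # e.g., a standard on access controls
        description: str
        corrective_action: str    # TSA's recommended fix
        host_action_taken: bool   # host government follow-up status

    @dataclass
    class AirportAssessment:
        airport_code: str
        country: str
        region: str
        fiscal_year: int
        vulnerability_category: int      # the overall 1-5 rating
        uses_ait_at_checkpoints: bool    # an example reportable indicator
        deficiencies: List[Deficiency] = field(default_factory=list)

    def airports_with_ait(assessments):
        # With structured fields instead of 60-page narrative documents,
        # an indicator report reduces to a simple query.
        return [a.airport_code for a in assessments if a.uses_ait_at_checkpoints]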
TSA's training and technical assistance efforts face several challenges, and TSA's new equipment loan program has raised concerns. TSA has initiated several capacity building efforts since our 2007 report, but these efforts have been affected by conflicting Department of State priorities, and TSA's new equipment loan program has raised concerns about ensuring that loaned equipment is properly operated and maintained. Specifically, in addition to its own training courses and technical assistance, CDB provides training and technical assistance sponsored by the Department of State's Anti-Terrorism Assistance (ATA) program and by the Organization of American States Inter-American Committee Against Terrorism, which is funded through the State Department. A CDB official stated that CDB currently has eight employees and limited funds to provide aviation security technical and training assistance to partner nations overseas. As a result, a CDB official told us the training schedule often has a 3-month lag from when training is requested to when it is provided. In addition, four TSARs we spoke with stated they sometimes have difficulty getting host nations' requests for TSA training fulfilled because of a lack of resources. According to a TSA official we spoke with, during the past 2 years, the U.S. government's aviation security training and assistance priorities have been largely driven by State Department priorities. For example, of the 64 course offerings CDB had planned to provide in 47 foreign countries at the beginning of fiscal year 2011, 33 were sponsored by State ATA or the Organization of American States, and some number of those countries have high-risk airports as identified by TSA. In addition, TSA's 2010 training schedule showed that of the 53 course offerings CDB provided in 33 countries, 29 were sponsored by State ATA or the Organization of American States, and some number of those countries have high-risk airports as identified by TSA. CDB and State Department officials told us they plan to work more closely in the future to better align their respective priorities.

In addition to providing various types of training and technical assistance, TSA has also provided aviation security equipment to foreign countries to help these countries enhance their existing capabilities and practices. Specifically, one of TSA's goals in its CDB fiscal year 2011–2015 Strategic Plan is to develop the necessary procedures for a system of long-term lending of decommissioned TSA screening equipment to partner countries. In accordance with authority granted under the Aviation and Transportation Security Act, TSA has undertaken to provide or loan security technologies and other equipment to foreign governments. According to TSA officials, the agency exercises this authority in coordination with the Department of State, and has obtained authority from the Department of State to negotiate and conclude agreements with foreign governments to provide technical cooperation and assistance, referred to as "Circular 175" agreements. For example, following the October 2010 discovery of explosive devices in air cargo packages bound for the United States from Yemen, TSA loaned six hand-held explosives trace detection devices to Yemen in an expedited fashion as a response to an emergent threat to help enhance the government's passenger and cargo screening processes. TSA officials also told us that the agency has provided security technology and equipment to Aruba, Bahamas, Bermuda, Haiti, Ireland, and Malta under this same authority. While TSA has provided some equipment to foreign countries, TSA and EC officials we spoke with identified potential challenges associated with doing so. For example, TSA officials cited some foreign governments' inability to properly maintain and operate TSA-provided screening equipment.
TSA officials told us it will be important for the agency to ensure that a foreign government has the appropriate staff and that they are properly trained and ready to operate the equipment, as well as conduct any necessary maintenance, so that the U.S.-provided equipment is used as intended and remains operational. TSA officials also explained that while under its existing authority TSA can donate or otherwise transfer equipment, such authority does not authorize TSA to provide maintenance and service contracts for this equipment. TSA officials we spoke with told us they would support congressional efforts to provide the agency with this additional authority. In addition, EC officials we interviewed identified similar challenges to their current and potential future efforts to provide various types of capacity building assistance to foreign countries. TSA officials said it will be important for TSA to establish user agreements with recipient countries that ensure U.S. government resources are not wasted or inappropriately used.

Several factors may complicate TSA assessments of foreign all-cargo operations. Following the attempted bombing of an all-cargo flight bound for the United States from Yemen in October 2010, TSA decided to devote additional resources to assessing all-cargo airports. While TSA is still in the early planning stages of its efforts to assess all-cargo operations at foreign airports, several factors may complicate these efforts. Specifically, TSA's Director of Global Compliance stated that the agency has identified 17 foreign airports that serve as all-cargo last points of departure to the United States. As of July 2011, TSA had conducted all-cargo assessments of two airports in China. Moreover, TSA plans to assess two additional all-cargo airports by the end of fiscal year 2011. According to TSA, based on these first visits, the agency is making some adjustments to the assessment process. For fiscal years 2012 through 2013, TSA plans to schedule visits to the remaining 15 airports that serve as all-cargo last points of departure to the United States, pending host government permission. However, TSA stated that it is too early to tell how many additional inspectors may be needed to complete these assessments. TSA officials we interviewed identified several factors that may complicate TSA's assessments of all-cargo operations at foreign airports. For example, all of the 23 TSA inspectors we interviewed expressed concerns about incorporating additional assessment visits into their annual schedules given their current workloads. In addition, these officials stated that it is uncertain whether foreign governments will allow TSA inspectors to assess their all-cargo operations and all-cargo airports. For example, while TSA has several bilateral arrangements with foreign countries to facilitate its assessments, TSA officials told us these arrangements do not specify access to cargo operations or all-cargo airports. Moreover, all four cargo inspectors we met with said it is logistically difficult to assess "upstream" cargo originating from other non-last point of departure airports. These inspectors said these logistical challenges will be an important factor for the agency to consider when selecting foreign airports to assess, as well as in making determinations on the security posture of cargo on flights departing foreign airports for the United States.
In addition, these inspectors also said that travel to some foreign all-cargo operation airports may be logistically difficult because of the lack of direct passenger flights and may require long travel by car or train. The Director of Global Compliance acknowledged that this new effort is challenging and stated that the agency will address these issues on a case-by-case basis. However, the Director also stated that with the increase to the inspector workforce, the cross-training of generalist international aviation inspectors to perform cargo inspections, and the limited additional locations to visit, TSA will be able to perform these additional visits over the next 2 years.

Based on our analysis of the results of TSA's foreign airport assessments conducted from fiscal year 2006 through May 9, 2011, some number of the foreign airports TSA assessed complied with all of TSA's aviation security assessment standards. However, TSA has identified serious or egregious noncompliance issues at a number of other foreign airports. Common areas of noncompliance included weaknesses in airport access controls and passenger and baggage screening. Moreover, our analysis of TSA's assessments showed variation in compliance across regions, among various individual standards, and by airports' risk level. For example, our analysis showed that some number of regions of the world had no airports with egregious noncompliance while some regions had several such airports. Specific information related to our analysis of TSA's airport assessment results is deemed SSI.

TSA has not taken steps to analyze or evaluate its foreign airport assessment results in the aggregate to identify regional and other trends over time, which could assist the agency in informing and prioritizing its future activities. TSA officials have access to results of foreign airport assessments dating back to fiscal year 1997, but they have not analyzed the information to gain insight into how foreign airports' security posture may have changed over time or identified regional and other patterns and trends over time. Specifically, TSA's airport assessment reports are collected in an online repository that can be accessed by employees, and TSA's Director of Global Compliance compiles high-level information from each airport assessment in a tracking tool, which allows her to view the overall results of assessments without having to go back to individual narrative reports. However, according to TSA, the agency has not analyzed the data contained in this tracking tool, which could assist TSA in informing and prioritizing its future activities and assessing the results of its past assessment efforts. In addition, while the spreadsheet provides a snapshot of airports and their results compared to the ICAO standards, it does not indicate why a standard was not met by an airport. If TSA employees would like to know why a certain airport did not meet a standard in a previous year, they must locate and read the report for that assessment. TSA's Director of Global Compliance told us that this is labor intensive and makes it difficult to identify anomalies or trends over time. Standards for Internal Control in the Federal Government require agencies to ensure that ongoing monitoring occurs during the course of normal operations to help evaluate program effectiveness.
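As a minimal sketch of the kind of aggregate analysis discussed above, assuming assessment results were available as structured records (the field names below are hypothetical), a few lines of Python could summarize serious or egregious noncompliance by region and year:

    from collections import defaultdict

    def noncompliance_trends(records):
        # Each record is assumed to carry 'region', 'fiscal_year', and
        # the 1-5 'vulnerability_category'; categories 4 and 5 correspond
        # to serious or egregious noncompliance.
        trends = defaultdict(lambda: {"airports": 0, "serious_or_worse": 0})
        for rec in records:
            key = (rec["region"], rec["fiscal_year"])
            trends[key]["airports"] += 1
            if rec["vulnerability_category"] >= 4:
                trends[key]["serious_or_worse"] += 1
        return dict(trends)

    # Hypothetical input spanning two regions and two fiscal years.
    sample = [
        {"region": "Europe", "fiscal_year": 2010, "vulnerability_category": 2},
        {"region": "Europe", "fiscal_year": 2011, "vulnerability_category": 1},
        {"region": "Africa", "fiscal_year": 2011, "vulnerability_category": 4},
    ]
    for (region, year), counts in sorted(noncompliance_trends(sample).items()):
        print(region, year, counts)

A summary of this kind would surface the regional variations and over-time shifts that, as the report notes, currently remain buried in individual narrative reports.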
TSA's Director of Global Compliance as well as all TSA ROC managers and inspectors we interviewed agreed that information pertaining to identified vulnerabilities in foreign airports should be compiled in regional-, country-, and airport-specific aggregates to help conduct planning and assess the results of program activities. TSA's Director of Global Compliance stated that TSA has prepared a vacancy announcement for a program analyst position that may, when filled, be tasked with compiling overall results and analyzing assessment results. TSA's Director of Global Compliance as well as all ROC managers and inspectors we interviewed also agreed that analysis of foreign airport assessment results would be helpful in identifying the aviation security training needs of foreign aviation security officials. TSA has one internally funded program in place that is specifically intended to provide aviation security training and technical assistance to foreign aviation security officials. However, TSA also coordinates with other federal agencies, such as the Department of State, to identify global and regional training needs and provide instructors for the aviation security training courses State offers to foreign officials. While TSA does not always determine which foreign countries receive aviation security training and technical assistance offered by other federal agencies, TSA could use the cumulative results of its foreign airport assessments to better support its priorities for aviation security training and technical assistance. Moreover, with analysis of airport assessment results, TSA could better inform its risk management decision making by identifying trends and security gaps, and target capacity building efforts accordingly. Specifically, this evaluation could include an analysis of the frequency of noncompliance issues TSA inspectors identified, including regional variations and perspectives on the security posture of individual airports over time. Further, a mechanism to evaluate cumulative foreign airport assessment results could help the agency better allocate and target its future resources and better understand its results, including the impact the program is having on enhancing foreign nations' ability to comply with ICAO standards.

In 2007, we reported that TSA was taking steps to assess whether the goals of the foreign airport assessment program were being met, but that it had not yet developed outcome-based performance measures to evaluate the impact TSA assistance has on improving foreign airport compliance with ICAO standards. As a result, we recommended that TSA establish outcome-based performance measures to strengthen oversight of the program. While DHS officials agreed with the recommendation in 2007, according to TSA, the agency has not yet developed such measures. The goal of the foreign airport assessment program is to ensure the security of U.S.-bound flights by evaluating the extent to which foreign governments are complying with applicable security requirements. The Government Performance and Results Act (GPRA) of 1993, as amended by the GPRA Modernization Act of 2010, requires executive branch departments to use performance measures to assess progress toward meeting program goals and to help decision makers assess program accomplishments and improve program performance.
Performance measures can be categorized as outcome measures, which describe the results of carrying out a program or activity; output measures, which describe the direct products or services delivered by a program or activity; or process measures, which address the type or level of program activities conducted, such as timeliness or quality. TSA has taken some steps to develop a variety of measures and is reporting this information to the Office of Management and Budget. These measures include:

average number of international inspections conducted annually per
percentage of foreign airports serving as Last Point of Departure operating in compliance with leading security indicators,
percentage of countries with direct flights to the U.S. that are provided aviation security assistance, and
percentage of countries/territories with no direct flights to the U.S. that are provided aviation security assistance.

While these measures are useful in determining, for example, the percentage of airports operating in compliance with security indicators, they do not address the ultimate results of the program, as outcome measures could. Outcome-based measures could help determine the extent to which TSA programs that assess and provide training and technical assistance to foreign airports have helped to improve security at airports that service the United States. However, TSA's Director of Global Compliance noted several possible challenges with applying such outcome measures to the assessment program. Specifically, the Director stated that the foreign airport assessment program is designed to identify—not correct—security deficiencies at foreign airports, and that whether or not foreign officials improve security at their airports is not within TSA's control. The Director added that such measures may create a disincentive for inspectors to objectively assess an airport's level of compliance. Despite these challenges, the Director acknowledged the importance of developing outcome measures and stated that their development should be the responsibility of TSA's Office of Global Strategies, not individual programs within this office, such as the foreign airport assessment program that she leads. Even without full control over the outcomes associated with such measures, we continue to believe our prior recommendation is still valid and that it would be useful for TSA to develop reasonable outcome-based measures, such as the percentage of security deficiencies that were addressed as a result of TSA on-site assistance or related technical assistance and training offered by the CDB, and TSA recommendations for corrective action. As we previously recommended, such measures would help TSA establish greater accountability over the way in which TSA uses its resources and, in conjunction with its existing measures, enable the agency to evaluate and improve the impact of its assistance on improving security at foreign airports.

While TSA has taken a number of steps to improve and streamline its foreign airport assessment program since our 2007 report, opportunities exist for TSA to make additional improvements in several key areas. For example, TSA has taken steps to make its foreign airport assessments more risk informed, but the agency lacks clearly defined criteria to determine a foreign airport's level of noncompliance with ICAO standards.
For example, as stated earlier, TSA provides each airport an overall vulnerability category, or score, of 1 through 5, which is a numerical representation of compliance or level of noncompliance with the ICAO standards against which the agency assesses each foreign airport. However, TSA has not developed any specific criteria, definitions, or implementing guidelines to ensure ROC managers and other program management officials apply these categories consistently across airports. For example, the SOP does not define how to assess whether an airport should receive a vulnerability rating of 3—“capability exists, compliance is generally noted, shortfalls remain,” versus a vulnerability rating of 2—“capability exists with minor episodes of noncompliance.” In the absence of more specific and transparent criteria and guidance, it is not clear how TSA applied these related categories—which describe the level of noncompliance—to the results of the assessments, or whether they were applied consistently over time. The lack of documented guidance prevented us from analyzing or comparing how TSA made its determinations. This is particularly important given that these scores represent an overall assessment of an airport’s level of compliance or noncompliance with ICAO standards that TSA has deemed critical to airport security, and also are a key component of TSA foreign airport risk-ranking determinations. TSA’s Director of Global Compliance agreed these category determinations are largely subjective judgments based on many facts and circumstances. The Director stated that it is challenging to establish specific guidance for how to assign these categories because of the numerous factors that can influence the decision for assigning vulnerability scores. The Director also noted that because she reviews each assessment report and weighs in on each assigned category, she in effect serves to institutionalize the scores and ensure they are consistent from airport to airport. Standards for Internal Control in the Federal Government call for controls and other significant events to be clearly documented in directives, policies, or manuals to help ensure operations are carried out as intended. This is especially important should key staff leave the agency. Although we recognize the inherently subjective nature of these determinations, providing TSA decision makers with more specific criteria and definitions for determining a foreign airport’s level of compliance with ICAO standards would provide greater assurance that such determinations are consistent across airports over time. The Director acknowledged that additional guidance, such as examples to illustrate what these categories mean, could help ensure greater transparency and consistency over how airport vulnerability scores are determined. Such consistency is important since airport vulnerability determinations are used to calculate an airport’s overall security risk level, which in turn affects the program’s activities and resource needs. In addition, TSA officials we spoke with identified opportunities for TSA to increase program efficiency by conducting more targeted airport assessments. Specifically, ROC managers and inspectors at all the locations we visited stated there are opportunities for TSA to conduct more targeted, smaller-scale assessments at foreign airports that could focus more exclusively on the key security issues at a particular airport rather than having inspectors conduct full-scale assessments every visit. 
For example, the ROC Manager of one location we visited stated that the Federal Aviation Administration previously conducted supplemental-type visits of foreign airports that were reduced in scope and only focused on specific issues or deficiencies that needed to be addressed. He said that TSA should consider ways to incorporate this type of assessment philosophy into its current operations, as it may help further streamline the assessment process and associated time frames. ROC managers at all the locations we visited also said inspectors often know, from their prior visits and assessment reports, what specific issues are present at specific airports, and that focusing more time on key issues could provide a more effective way of addressing and correcting security deficiencies. Twenty of 23 inspectors we spoke with said this type of assessment would also reduce repetitive and duplicative data gathering. In addition, these inspectors stated they sometimes do not have the opportunity to conduct all necessary onsite operational observations, document reviews, and interviews because they spend a significant amount of time addressing other descriptive, less critical aspects of the assessment. They said more targeted, risk-informed assessments would allow them to focus more time and attention on key security issues, resulting in higher quality and more useful assessment results. Exploring opportunities to conduct more targeted assessments could help TSA enhance the efficiency and value of its foreign airport assessment program. TSA’s Director of Global Compliance told us that TSA has begun to conduct abbreviated and targeted airport assessments in some cases due to the security risks associated with traveling and working in certain countries. For example, in 2011, because of the security situation in Mexico and Iraq, TSA conducted abbreviated assessments at airports in those countries that focused on a select number of critical areas rather than on all topics typically covered during an assessment. While targeted or abbreviated assessments are viewed as beneficial in some circumstances, TSA’s Director of Global Compliance also stated that conducting a comprehensive assessment is important to document any security changes, deficiencies, or improvements since the previous visit, because inspectors may visit an airport only once every 3 years. The Director also raised a concern about conducting additional targeted assessments if they limited opportunities to conduct regularly scheduled comprehensive assessment visits. However, we believe TSA’s use of abbreviated or targeted assessments could be expanded in cases where it would not have a negative impact on the program. For example, as TSA works to systematically analyze the results of its assessments, it may determine that specific regions of the world need additional assistance in meeting certain critical standards. TSA could use this information to focus or target its assessments to address these higher-risk scenarios, thus leveraging program resources. Such efforts are consistent with TSA’s ongoing risk-informed activities, as discussed earlier in this report. Moreover, we have previously reported that risk-informed, priority-driven decisions can help inform decision makers when allocating finite resources to the areas of greatest need. In addition, TSA has not taken steps to systematically compile or analyze security best practice information that could contribute to enhancing the security of both foreign and U.S. airports. 
TSA officials acknowledged possible opportunities to better identify, compile, and analyze aviation security best practices through their assessments at foreign airports. We have previously reported that in order to identify innovative security practices that could help further mitigate terrorism-related risk to transportation sector assets, it is important to assess the feasibility as well as the costs and benefits of implementing security practices currently used by foreign countries. While TSA compiles information in its foreign airport assessment reports to evaluate the degree to which airports are in compliance with select ICAO standards, it does not have a process in place to identify and analyze aviation security best practices that are being used by foreign airports to secure their operations and facilities. TSA officials agreed that identifying relevant best practices could help TSA better leverage its assessment activities by assisting foreign airports in increasing their level of compliance with ICAO standards, as well as in identifying security practices and technologies that may be applicable to enhancing the security of U.S. airports. In December 2, 2010, testimony before the Senate Committee on Commerce, Science, and Transportation, TSA’s Director of Global Compliance confirmed that there are a variety of ways in which foreign airports can effectively meet ICAO standards. For example, one airport might address access control security by using coded door locks and swipe cards, while another may lock its doors and limit the number of available keys to certain personnel. Airports may also establish perimeter security in different ways, such as through fencing or natural barriers. In addition, TSA inspectors, as part of the assessment, often obtain detailed information and understanding of the various types of security technologies and methods being used by foreign governments, which may also be applicable and cost-effective for U.S. airports. For example, while accompanying TSA inspectors during an airport assessment, we observed TSA inspectors being briefed on various passenger screening processes, technologies, and equipment that were comparable to, and in some cases may have exceeded, those used in the U.S. We believe establishing a mechanism to systematically compile and analyze this type of information could help ensure TSA is more effectively able to assist foreign airports in meeting ICAO standards and improving security practices, as well as identify security practices and technologies that may be applicable to enhancing the security of U.S. airports. Securing commercial aviation operations remains a daunting task—with hundreds of airports and thousands of flights carrying millions of passengers and pieces of checked baggage to the United States every year. TSA’s foreign airport assessment program is aimed at enhancing this system by identifying critical security weaknesses and gaps in airports serving the United States, which in turn can help inform and guide needed efforts to mitigate these deficiencies. TSA has taken a number of actions to enhance its foreign airport assessment program since 2007, but additional steps can help further strengthen the program. For example, developing a mechanism to evaluate assessment results to determine security trends and patterns could enable TSA to target and prioritize future assessment activities, including training and other capacity-building resources. 
Moreover, establishing criteria and guidance for determining the vulnerability of individual foreign airports would provide for greater consistency of these vulnerability ratings across airports over time. Such consistency is important since airport vulnerability determinations are used to calculate an airport’s overall security risk level. Further, exploring the feasibility of conducting more targeted assessments could help enhance the efficiency and value of TSA’s foreign airport assessment program. Moreover, systematically compiling information on aviation security best practices could help ensure TSA is more effectively able to assist foreign airports in meeting ICAO standards and improving security practices, as well as identify security practices and technologies that may be applicable to enhancing the security of U.S. airports. To help further enhance TSA’s foreign airport assessment program, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for the Transportation Security Administration to take the following three actions:
Develop a mechanism to evaluate the results of completed assessment activities to determine any trends and target future activities and resources. This evaluation could include frequency of noncompliance issues, regional variations, and perspectives on the security posture of individual airports over time.
Establish criteria and guidance to assist TSA decision makers when determining the vulnerability rating of individual foreign airports.
Consider the feasibility of conducting more targeted assessments and systematically compiling information on aviation security best practices.
We provided a draft of the sensitive version of this report to DHS and TSA on September 1, 2011, for review and comment. DHS provided written comments, which are reprinted in appendix VI. In commenting on our report, DHS stated that it concurred with all three of the recommendations and identified actions taken or planned to implement them. DHS also highlighted new initiatives under way by the Office of Global Strategies. DHS concurred with the first recommendation that TSA develop a mechanism to evaluate the results of completed assessment activities to determine any trends and target activities and resources, an evaluation that could include the frequency of noncompliance issues, regional variations, and perspectives on the security posture of individual airports over time. DHS stated that TSA has taken several steps to address this recommendation, including utilizing a program analyst to create analyses reflecting temporal and site-specific trends and anomalies. DHS also stated that TSA established a project team to evaluate regional, country, and airport vulnerabilities and determine those problem areas that could be effectively addressed by training. DHS also noted that TSA is developing workshops that inspectors can present at the conclusion of an airport assessment; these workshops will be tailored to address specific shortfalls observed during the assessment that could be effectively mitigated through training. These actions, when fully implemented, should address the intent of the recommendation. DHS concurred with the second recommendation that TSA establish criteria and guidance to assist TSA decision makers when determining the vulnerability rating of individual foreign airports. 
DHS stated that the most recent version of the Foreign Airport Assessment Program Standard Operating Procedures now contains several scenarios for managers to use as a set of guidelines in determining the vulnerability rating for each open standard and for the airport overall. DHS also stated that the Director of Global Compliance and ROC managers will collaborate on the development of a scenario archive to promote more long-term consistency in the event that key staff leave the agency. We support TSA’s efforts to ensure greater transparency and consistency over how airport vulnerability scores are determined and believe it will be important for TSA to provide sufficient detail in the criteria and guidance that the agency develops. Such actions, when fully implemented, should address the intent of the recommendation. DHS concurred with the third recommendation that TSA consider the feasibility of conducting more targeted assessments and systematically compiling information on aviation security best practices. In its response, DHS stated that TSA is developing a pre-audit questionnaire that will be sent to each host government in advance of a planned airport assessment and will assist assessment teams in obtaining administrative information and key documents, such as the Airport Security Program, prior to the visit. DHS added that when the questionnaire is returned to TSA, the agency will obtain an official translation of all submitted items so that the assessment team has a better understanding of the current policies, procedures, and practices in place at the site. According to DHS, this practice may enable the team to tailor its efforts at the airports to focus on those areas of concern as indicated in the responses to the questionnaire, as well as the critical standards. DHS stated that TSA plans to complete development of the questionnaire by mid-fiscal year 2012, with wide-scale deployment beginning in October 2012. We support TSA’s planned actions but also believe that there may be additional opportunities for TSA to expand its use of targeted assessments as it works to implement the first recommendation related to developing a mechanism to evaluate the results of completed assessment activities to determine any trends and target activities and resources. For example, as TSA works to systematically analyze the results of its assessments, it may determine that specific regions of the world need assistance in meeting certain critical standards. Such action, in conjunction with TSA’s planned efforts, would meet the intent of the recommendation. With regard to aviation security best practices, DHS stated that the five volumes of the International Civil Aviation Organization (ICAO) Security Manual for Safeguarding Civil Aviation Against Acts of Unlawful Interference (Document 8973) contain the globally recognized best practices and alternative methods for meeting the ICAO standards and recommended practices. DHS stated that TSA participates in the development and review of this document and draws from it when recommending improvements to foreign airport authorities. However, it noted that an infrequently populated portion of the foreign airport assessment reports is available for inspectors to capture particularly noteworthy practices. 
DHS stated that during fiscal year 2012, inspectors will be encouraged to more conscientiously identify and document new approaches encountered at airports that are not reflected in the security manual but effectively address the ICAO standards and recommended practices. We support these efforts but also believe that it will be important for TSA to capture information identifying security best practices and technology that may be applicable to enhancing the security of U.S. airports. Such action, in conjunction with TSA’s planned efforts, would meet the intent of the recommendation. DHS also provided us with technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. This report also will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. To examine the efforts made by the Transportation Security Administration (TSA) to determine whether foreign airports that provide service to the United States are maintaining and carrying out effective security measures, we addressed the following questions: (1) to what extent has TSA taken steps to enhance its foreign airport assessment program since 2007, and what challenges remain; (2) what are the results of TSA’s foreign airport assessments, and to what extent does TSA use the results of these assessments to guide its future assessment activities; and (3) what opportunities, if any, exist to enhance the value of TSA’s foreign airport assessment program? To collectively address all three questions, we reviewed relevant laws and regulations, including statutory provisions that identify specific actions to be taken by the Secretary of Homeland Security when the Secretary determines that a foreign airport does not maintain and carry out effective security measures. We reviewed various TSA program management and strategic planning documents and interviewed TSA officials located at TSA headquarters and in the field. We interviewed other federal and nonfederal stakeholders, such as the Department of State, International Civil Aviation Organization (ICAO), and the European Commission (EC). We outline the specific steps taken to answer each objective below. To determine the steps TSA has taken to enhance its foreign airport assessment program since 2007, we reviewed various TSA program management and strategic planning documents to identify revisions to its current and planned future strategy. Specifically, we reviewed TSA’s 2010 Foreign Airport Assessment Program Standard Operating Procedures (SOP) document, which prescribes program and operational guidance for assessing security measures at foreign airports and informs TSA personnel at all levels of what is expected of them in the implementation of the program. We also reviewed the job aids TSA inspectors use during each assessment, which help ensure that the TSA-specified ICAO aviation security standards and recommended practices are fully evaluated. 
To determine TSA’s current and planned future strategy, we reviewed available strategic planning documents that TSA uses to guide its program. Specifically, we reviewed TSA’s Office of Global Strategies International Strategy to Enhance Aviation Security for 2010–2012, TSA’s Office of Global Strategies Global Compliance Strategic Implementation Plan Fiscal Year 2011, and the TSA Capacity Development Strategic Plan for fiscal years 2011–2015. In addition, we obtained and reviewed multilateral and bilateral arrangements TSA has established with the European Union (EU) and several foreign nations to facilitate coordination in the area of aviation security, including facilitation of TSA’s foreign airport assessments. To understand how TSA assesses and manages its foreign airport risk information, we obtained and reviewed various program documents. Specifically, we obtained and reviewed documents on TSA’s methodology for assigning individual risk rankings (called Tier rankings) to each foreign airport it assesses. TSA’s rankings are based on the likelihood of a location being targeted, the protective measures in place at that location, and the potential impact of an attack on the international transportation system. Airports are then categorized as high, medium, or low risk. While we did not evaluate the quality of TSA’s risk rankings, as this analysis was outside the scope of our work, we generally determined that the rankings addressed all three components of risk (threat, vulnerability, and consequence). To obtain a greater understanding of the foreign airport assessment process, including how TSA works with host nation officials, we accompanied a team of TSA inspectors during an assessment of the Toronto Pearson International Airport. We based our selection on several factors, including the airport locations TSA planned to assess during the course of our audit work, host government willingness to allow us to accompany TSA, and travel costs. To obtain information on the extent to which TSA provided oversight of its assessment efforts, we obtained and reviewed various TSA program management documents and tools TSA uses to track and manage information for the program. Specifically, we reviewed the TSA Airport and Air Carrier Comprehensive Tool (known as the A.C.T.), which TSA uses to track its foreign airport assessment schedule, including when various airports are due to be assessed. We also reviewed the Open Standards and Recommended Practices Tracking Tool, which TSA Representatives (TSARs) use to monitor and track a foreign airport’s progress in resolving security deficiencies identified by TSA inspectors during previous assessments. In addition, we reviewed the tracking sheet TSA’s Director of Global Compliance uses to compile and track current- and prior-year assessment results, including individual airport vulnerability scores and information on which specific ICAO standards were in noncompliance. To obtain stakeholder views and perspectives on steps TSA has taken to enhance its foreign airport assessment program since 2007, we interviewed and obtained information from various federal and nonfederal stakeholders. Specifically, we interviewed TSA officials located in the Office of Global Strategies (OGS), Global Compliance (GC), Office of International Operations (OIO), and Capacity Development Branch (CDB). 
In addition, we conducted site visits to four of the five TSA Regional Operations Centers (ROC), located in Los Angeles, Dallas, Miami, and Frankfurt, where we met with the ROC managers and 23 international aviation security inspectors who conduct TSA’s foreign airport assessments. We based our site visit selections on the number of available inspectors at each location and geographic dispersion. We conducted telephone and in-person interviews with 9 of the 27 TSARs, located in various embassies and consulates throughout the world, who schedule TSA airport assessment visits and follow up on host governments’ progress in addressing identified security deficiencies. When possible, we conducted in-person interviews with TSARs who were at TSA ROCs during our site visits. We based our TSAR selections on geographic dispersion and varying years of experience. During each of these interviews, we discussed these officials’ responsibilities related to the program, including their role in assisting foreign officials in correcting security deficiencies identified during assessments. We met with Department of State officials to better understand how they coordinate with TSA through their Anti-Terrorism Assistance (ATA) Program and other related efforts aimed at assisting foreign partners’ capacity to secure their airports. Additionally, we met with officials from the EC, International Air Transport Association, and ICAO to discuss efforts and programs these organizations have in place to enhance international aviation security. We interviewed or received responses to questions from five foreign embassies to obtain perspectives of foreign transportation security officials on TSA’s airport assessment program. We based our selection on geographic dispersion and countries with the highest-risk airports, as designated by TSA. However, information from our interviews with government officials, members of the aviation industry, and TSA officials and inspectors cannot be generalized beyond those we spoke with because we did not use statistical sampling techniques in selecting individuals to interview. To identify challenges affecting TSA’s foreign airport assessment program, we interviewed TSA program management officials and field officials located at the TSA ROCs on the challenges they experience in obtaining access to foreign airports to conduct assessments, developing an automated database management system, and providing aviation security training to foreign governments. In addition, we met with TSA’s Director of Global Compliance and with ROC managers and inspectors located in the field to discuss potential future challenges TSA may experience when attempting to conduct assessments at foreign airports with all-cargo flights to the United States. Specifically, we obtained their perspectives on foreign governments that have been reluctant to allow TSA inspectors to visit their airports. We interviewed TSA’s Director of Global Compliance on the agency’s progress in developing an automated database to manage program information, including the challenges the agency has experienced finding a solution that meets program needs. We conducted telephone and in-person interviews with nine TSARs to obtain their perspectives on challenges to scheduling airport assessment visits. 
In addition, we interviewed officials within TSA’s CDB to better understand the scope and types of requests for assistance they receive from foreign countries, including challenges they experience in attempting to provide assistance, such as resource constraints and aligning security priorities with the Department of State. To determine the results of TSA’s foreign airport assessments and the extent to which the agency evaluates its results to inform future activities, we interviewed TSA officials on the results of its assessments, obtained and reviewed assessment reports and relevant program documents, and conducted our own independent analysis of TSA’s assessment results. To better understand the scope and type of information contained in TSA’s foreign airport assessment reports, we obtained and reviewed the most recently available assessments for all high-risk airports. We also selected a random sample of assessment reports from current and prior years. We reviewed sections of these reports for completeness and general consistency with TSA guidance for preparing assessment reports. We obtained and reviewed TSA’s foreign airport risk-ranking sheet to better understand which airports TSA identified as high, medium, and low risk, including how the results of TSA’s assessments influence an airport’s risk ranking. In addition, we obtained and reviewed TSA’s foreign airport assessment program vulnerability results tracking sheet used by the Director of Global Compliance to compile and track current- and prior-year assessment results. This tracking sheet included records of TSA’s compliance assessments for each airport that TSA assessed from fiscal year 1997 through May 9, 2011. Specifically, the tracking sheet recorded assessment results for each of the ICAO standards used in the airport assessments, as well as an overall vulnerability score of 1 through 5 assigned after each assessment. This overall vulnerability score is a representation of compliance or noncompliance with all the ICAO standards against which TSA assesses foreign airports. We interviewed the Director of Global Compliance on the steps taken to develop the tracking sheet, including how TSA manages and updates data, and how TSA assigns vulnerability scores. In addition, we conducted our own independent analysis of TSA’s assessment results from fiscal year 2006 through May 9, 2011. Specifically, we analyzed data from TSA’s foreign airport assessment program vulnerability results tracking sheet to identify the number of airports in each vulnerability category by region. We also analyzed TSA assessment results data to determine the frequency with which foreign airports complied with particular ICAO standards, such as access control, quality control, passenger screening, and baggage screening, among others. For those airports that TSA has identified as high risk, we analyzed TSA assessment results data to determine the number of resolved and remaining compliance issues at high-risk airports by region, as well as the level of noncompliance found at high-risk airports. To assess the reliability of TSA’s data, we selected a random sample of records from TSA’s foreign airport assessment program vulnerability results tracking sheet. Next, we examined the corresponding reports to locate those ICAO standards that had been identified as less than fully compliant in the tracking sheet (a score of 2 through 5 on a 5-point scale). 
The actual scores assigned to the compliance ratings and found in the tracking sheet were determined by the Director of Global Compliance using guidance in the 2010 SOP in consultation with individuals involved in the assessment process (ROC managers, Supervisory Transportation Security Specialists, and Transportation Security Specialists). In several cases, our comparison showed that the results in the tracking sheet did not match the compliance information provided in the corresponding reports. However, in discussions with TSA we determined that the differences were the result of changes to the ICAO standards used in the assessments or a change in the definition of the standards. Specifically, TSA told us that Amendments 10 and 11 to ICAO Annex 17 changed the definitions of some standards and the numbers assigned to identify them. For example, a standard concerning Hold Baggage Security is now identified as 4.5.1. However, in years prior to Amendments 10 and 11 to Annex 17, that same standard was identified as 4.1.1. TSA’s Director of Global Compliance told us that she updated the foreign airport assessment program vulnerability results tracking sheet with the new definitions and numbers, and the associated results, each time an ICAO amendment was issued. As a result, we determined that any analysis of the assessment results for specific ICAO standards would need to take into account the changes TSA identified. Based on our overall analysis of the data and reports, we determined that the data were sufficiently reliable to provide a general indication, by type or category, of the standards TSA assesses against, as well as the level and frequency of compliance, for TSA’s airport assessments over the period of our analysis. In addition, we interviewed TSA’s Director of Global Compliance on the steps TSA takes to analyze its assessment results to inform the agency’s future efforts and compared these efforts to Standards for Internal Control in the Federal Government. We discussed the status of implementation of our 2007 recommendation to develop outcome-based performance measures to evaluate the impact that TSA assessments have on improving foreign airport compliance with ICAO standards. We interviewed TSA managers and inspectors located in the field on their roles and responsibilities in determining and documenting assessment results. We assessed TSA’s efforts to analyze its assessment results against Standards for Internal Control in the Federal Government, which require agencies to ensure that ongoing monitoring occurs during the course of normal operations to help evaluate program effectiveness. To identify opportunities for TSA to enhance the value of its foreign airport assessment program, we reviewed all relevant program management and strategic documentation, and interviewed TSA officials as well as various other federal and nonfederal stakeholders. Specifically, we reviewed the 2011 Foreign Airport Assessment Program SOP and job aids; OGS, GC, and CDB strategic planning documents; foreign airport risk assessment and ranking information; program management tools TSA uses to track and manage its schedule and the status of foreign airport security deficiencies; and TSA foreign airport assessment results and reports. We also reviewed our prior work concerning how risk-informed and priority-driven decisions can help inform agency decision makers in allocating finite resources to the areas of greatest need. 
Moreover, we reviewed the process TSA uses to assign vulnerability ratings of 1 through 5 to each foreign airport it assesses and then evaluated this process against Standards for Internal Control in the Federal Government, which call for controls and other significant events to be clearly documented in directives, policies, or manuals to help ensure operations are carried out as intended. In addition, we visited the Toronto Pearson International Airport to observe TSA inspectors during the assessment, thereby obtaining a greater understanding of the foreign airport assessment process, including opportunities for TSA to improve its program. We reviewed prior GAO work discussing the importance of identifying potential best practices as part of conducting U.S. federal government security assessments in other countries. To obtain stakeholder views and perspectives on opportunities to enhance the program, we interviewed and obtained information from various TSA and nonfederal stakeholders. Specifically, we interviewed TSA headquarters officials in GC, OIO, and CDB. During our site visits, we interviewed ROC managers and international inspectors on possible opportunities that exist for TSA to improve its foreign airport assessment program. We discussed opportunities to improve the program during our telephone and in-person interviews with nine TSARs. In addition, we discussed ways in which TSA could improve its program during our interviews with officials from the EC, ICAO, and select foreign embassies. We conducted this performance audit from August 2010 through October 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. If the Secretary, based on the TSA airport assessment results, determines that a foreign airport does not maintain and carry out effective security measures, he or she must, after advising the Secretary of State, take secretarial action. Below is a list of these actions. Figure 4 describes the process for taking secretarial action against an airport.
90-day action—The Secretary notifies foreign government officials that they have 90 days to address security deficiencies that were identified during the airport assessment and recommends steps necessary to bring the security measures at the airport up to ICAO standards.
Public notification—If, after 90 days, the Secretary finds that the government has not brought security measures at the airport up to ICAO standards, the Secretary notifies the general public that the airport does not maintain and carry out effective security measures.
Modification to air carrier operations—If, after 90 days, the Secretary finds that the government has not brought security measures at the airport up to ICAO standards:
The Secretary may withhold, revoke, or prescribe conditions on the operating authority of U.S.-based and foreign air carriers using that airport to provide transportation to the U.S., following consultation with appropriate host government officials and air carrier representatives, and with the approval of the Secretary of State. 
The President may prohibit a U.S.-based or foreign air carrier from providing transportation between the United States and any foreign airport that is the subject of a secretarial determination.
Suspension of service—The Secretary, with approval of the Secretary of State, shall suspend the right of any U.S.-based or foreign air carrier to provide service to or from an airport if the Secretary determines that a condition exists that threatens the safety or security of passengers, aircraft, or crew traveling to or from the airport, and the public interest requires an immediate suspension of transportation between the United States and that airport.
TSA inspectors use 40 ICAO standards and 1 recommended practice when conducting foreign airport assessments. Of the 40, TSA identified 22 as critical. These 22 critical standards are in bold.
3.2.1 Each Contracting State shall require each airport serving civil aviation to establish, implement and maintain a written Airport Security Program appropriate to meet the requirements of the National Civil Aviation Security Programme.
3.2.2 Each Contracting State shall ensure that an authority at each airport serving civil aviation is responsible for coordinating the implementation of security controls.
3.2.3 Each Contracting State shall ensure that an airport security committee at each airport serving civil aviation is established to assist the authority mentioned under 3.2.2 in its role of coordinating the implementation of security controls and procedures as specified in the airport security programme.
3.4.1 Each Contracting State shall ensure that the persons implementing security controls are subject to background checks and selection procedures.
3.4.2 Each Contracting State shall ensure that the persons implementing security controls possess all competencies required to perform their duties and are appropriately trained according to the requirements of the national civil aviation security programme and that appropriate records are maintained up to date. Relevant standards of performance shall be established and initial and periodic assessments shall be introduced to maintain those standards.
3.4.3 Each Contracting State shall ensure that the persons carrying out screening operations are certified according to the requirements of the National Civil Aviation Security Program to ensure that performance standards are consistently and reliably achieved.
3.4.5 Each Contracting State shall ensure that the implementation of security measures is regularly subjected to verification of compliance with the national civil aviation security programme. The priorities and frequency of monitoring shall be determined on the basis of risk assessment carried out by the relevant authorities.
3.4.6 Each Contracting State shall arrange for audits, tests, surveys and inspections to be conducted on a regular basis, to verify compliance with the National Civil Aviation Security Program and to provide for the rapid and effective rectification of any deficiencies.
4.2.1 Each Contracting State shall ensure that the access to airside areas at airports serving civil aviation is controlled in order to prevent unauthorized entry.
4.2.2 Each Contracting State shall ensure that security restricted areas are established at each airport serving civil aviation designated by the State based upon a security risk assessment carried out by the relevant national authorities. 
4.2.3 Each Contracting State shall ensure that identification systems are established in respect of persons and vehicles in order to prevent unauthorized access to airside areas and security restricted areas. Identity shall be verified at designated checkpoints before access is allowed to airside areas and security restricted areas.
4.2.4 Each Contracting State shall ensure that background checks are conducted on persons other than passengers granted unescorted access to security restricted areas of the airport prior to granting access to security restricted areas.
4.2.5 Each Contracting State shall ensure that the movement of persons and vehicles to and from the aircraft is supervised in security restricted areas in order to prevent unauthorized access to aircraft.
4.2.6 Each Contracting State shall ensure that persons other than passengers, together with items carried, being granted access to security restricted areas are screened; however, if the principle of 100 per cent screening cannot be accomplished, other security controls, including but not limited to proportional screening, randomness and unpredictability, shall be applied in accordance with a risk assessment carried out by the relevant national authorities.
4.2.7 Each Contracting State shall ensure that vehicles being granted access to security restricted areas, together with items contained within them, are subject to screening or other appropriate security controls in accordance with a risk assessment carried out by the relevant national authorities.
Measures Relating to Aircraft:
4.3.1 Each Contracting State shall ensure that aircraft security checks of originating aircraft engaged in commercial air transport movements are performed or an aircraft security search is carried out. The determination of whether it is an aircraft security check or a search that is appropriate shall be based upon a security risk assessment carried out by the relevant national authorities.
4.3.2 Each Contracting State shall ensure that measures are taken to ensure that any items left behind by passengers disembarking from transit flights are removed from the aircraft or otherwise dealt with appropriately before departure of an aircraft engaged in commercial flights.
4.3.3 Each Contracting State shall require its commercial air transport operators to take measures as appropriate to ensure that during flight unauthorized persons are prevented from entering the flight crew compartment.
4.3.4 Each Contracting State shall ensure that an aircraft subject to 4.3.1 is protected from unauthorized interference from the time the aircraft search or check has commenced until the aircraft departs.
Measures Relating to Passengers and Their Cabin Baggage:
4.4.1 Each Contracting State shall establish measures to ensure that originating passengers of commercial air transport operations and their cabin baggage are screened prior to boarding an aircraft departing from a security restricted area. 
4.4.2 Each Contracting State shall ensure that transfer passengers of commercial flights and their cabin baggage are screened prior to boarding an aircraft, unless it has established a validation process and continuously implements procedures, in collaboration with the other Contracting State where appropriate, to ensure that such passengers and their cabin baggage have been screened to an appropriate level at the point of origin and, subsequently, protected from unauthorized interference from the point of screening at the originating airport to the departing aircraft at the transfer airport.
4.4.3 Each Contracting State shall ensure that passengers and their cabin baggage which have been screened are protected from unauthorized interference from the point of screening until they board their aircraft. If mixing or contact does take place, the passengers concerned and their cabin baggage shall be re-screened before boarding an aircraft.
4.4.4 Each Contracting State shall ensure that passengers and their cabin baggage which have been screened are protected from unauthorized interference from the point of screening until they board their aircraft. If mixing or contact does take place, the passengers concerned and their cabin baggage shall be re-screened before boarding an aircraft.
4.5.1 Each Contracting State shall establish measures to ensure that originating hold baggage is screened prior to being loaded onto an aircraft engaged in commercial air transport operations departing from a security restricted area.
4.5.2 Each Contracting State shall ensure that all hold baggage to be carried on a commercial aircraft is protected from unauthorized interference from the point it is screened or accepted into the care of the carrier, whichever is earlier, until departure of the aircraft on which it is to be carried. If the integrity of the hold baggage is jeopardized, the hold baggage shall be re-screened before being placed on board an aircraft.
4.5.3 Each Contracting State shall ensure that commercial air transport operators do not transport the baggage of passengers who are not on board the aircraft unless that baggage is identified as unaccompanied and subjected to additional screening.
4.5.4 Each Contracting State shall ensure that transfer hold baggage is screened prior to being loaded onto an aircraft engaged in commercial air transport operations, unless it has established a validation process and continuously implements procedures, in collaboration with the other Contracting State where appropriate, to ensure that such hold baggage has been screened at the point of origin and subsequently protected from unauthorized interference from the originating airport to the departing aircraft at the transfer airport.
4.5.5 Each Contracting State shall ensure that commercial air transport operators transport only items of hold baggage that have been individually identified as accompanied or unaccompanied, screened to the appropriate standard, and accepted for carriage on that flight by the air carrier. All such baggage should be recorded as meeting these criteria and authorized for carriage on that flight.
4.6.1 Each Contracting State shall ensure that appropriate security controls, including screening where practicable, are applied to cargo and mail, prior to their being loaded onto an aircraft engaged in passenger commercial air transport operations. 
4.6.2 Each Contracting State shall establish a supply chain security process, which includes the approval of regulated agents and/or known consignors, if such entities are involved in implementing screening or other security controls of cargo and mail.
4.6.3 Each Contracting State shall ensure that cargo and mail to be carried on a passenger commercial aircraft are protected from unauthorized interference from the point screening or other security controls are applied until departure of the aircraft.
4.6.4 Each Contracting State shall ensure that operators do not accept cargo or mail for carriage on an aircraft engaged in passenger commercial air transport operations unless the application of screening or other security controls is confirmed and accounted for by a regulated agent, or such consignments are subjected to screening. Consignments which cannot be confirmed and accounted for by a regulated agent are to be subjected to screening.
4.6.5 Each Contracting State shall ensure that catering, stores and supplies intended for carriage on passenger commercial flights are subjected to appropriate security controls and thereafter protected until loaded onto the aircraft.
4.6.6 Each Contracting State shall ensure that merchandise and supplies introduced into security restricted areas are subject to appropriate security controls, which may include screening.
4.6.7 Each Contracting State shall ensure that security controls to be applied to cargo and mail for transportation on all-cargo aircraft are determined on the basis of a security risk assessment carried out by the relevant national authorities.
Measures Relating to Special Categories of Passengers:
4.7.1 Each Contracting State shall develop requirements for air carriers for the carriage of potentially disruptive passengers who are obliged to travel because they have been the subject of judicial or administrative proceedings.
4.8.1 Recommendation.—Each Contracting State should ensure that security measures in landside areas are established to mitigate possible threats of acts of unlawful interference in accordance with a risk assessment carried out by the relevant authorities.
5.1.4 Each Contracting State shall ensure that contingency plans are developed and resources made available to safeguard civil aviation against acts of unlawful interference. The contingency plans shall be tested on a regular basis.
5.1.5 Each Contracting State shall ensure that authorized and suitably trained personnel are readily available for deployment at its airports serving international civil aviation to assist in dealing with suspected, or actual, cases of unlawful interference with civil aviation.
9.1.1 An aerodrome emergency plan shall be established at an aerodrome, commensurate with the airport operations and other activities conducted at the aerodrome.
9.10.3 Suitable means of protection shall be provided to deter the inadvertent or premeditated access of unauthorized persons into ground installations and facilities essential for the safety of civil aviation located off the aerodrome.
TSA uses a multistep process to conduct its assessments of foreign airports. Figure 5 describes the process TSA uses.
The mission of the ASSIST program is to raise and strengthen international aviation security standards in foreign countries and airports, and to ensure that improvements in standards are long-term and sustainable. 
Specifically, TSA deploys teams consisting of six to seven individuals for 1 week, in partnership with the host nation, to evaluate and develop recommendations for building the host nation’s aviation security capacity. Following the initial visit, TSA conducts focused follow-up visits to deliver training and technical assistance when agreed upon by the host nation. To date, TSA has partnered with five foreign countries under the ASSIST program. These countries are St. Lucia, Liberia, Georgia, Haiti, and Palau. TSA selects countries to partner with based on a variety of factors, which include focusing on countries with last point of departure service to the United States, foreign airport risk rankings, a foreign government’s demonstrated willingness to engage TSA, and a foreign government’s demonstrated ability to sustain ASSIST initiatives after the conclusion of ASSIST. See below for specific information on the countries TSA partnered with during 2009-2011. St. Lucia was the first nation to partner with TSA under the ASSIST program. It was selected as the pilot country for ASSIST because it is a last point of departure location to the U.S. and a popular destination for U.S. passengers, and because the TSA Representative in the region requested the assistance. The inaugural survey visit to St. Lucia was conducted in January 2009. Subsequent follow-up visits were held in March and June of 2009 and focused on training in Emergency Communications, Improvised Explosive Device Familiarization, Essential Instructor Skills, and Basic Screener Training. The ASSIST program closed out in St. Lucia in 2010. TSA officials told us that TSA partnered with St. Lucia because it was the pilot country for the ASSIST program, and the Capacity Development Branch did not want to pilot the ASSIST program in a country that was “ultra challenging” in terms of security deficiencies. Liberia was the second nation to partner with TSA under ASSIST. Liberia was chosen for ASSIST after President George W. Bush visited the nation in February 2008 and pledged U.S. support in the area of aviation security. In addition, Delta Air Lines wanted to reestablish service between the U.S. and Liberia and, in order to do so, Liberia’s national civil aviation program needed improvement. Liberia received a survey visit in April 2009. TSA conducted Essential Instructor Skills and Basic Screening Skills Training in May 2009. This training was followed by monthly visits to assess the impact of training and other technical assistance. In January 2010, TSA coordinated Fraudulent Document Detection training in conjunction with U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement. In August 2010, TSA conducted its National Inspectors Training. The ASSIST program was closed out in Liberia in November 2010. Georgia, the third nation to partner with TSA under ASSIST, received a survey visit in September 2009. TSA coordinated its ASSIST program activities in Georgia with the European Civil Aviation Conference (ECAC). Georgia is a member state of ECAC, and ECAC initiated a program of technical assistance in Georgia following its March 2009 audit of the Tbilisi Airport. In addition, TSA officials told us that the State Department had also requested that TSA work with Georgia. In April 2010, ECAC and TSA conducted ECAC’s Best Practices for National-level Auditors course. In August 2010, TSA conducted a review of passenger and baggage screening. The ASSIST program was closed out in Georgia in December 2010. 
TSA deployed an ASSIST program representative to Palau in August 2010. TSA officials told us that Palau was selected for the ASSIST program based on the results of TSA’s foreign airport assessment program. In addition, Palau was a last point of departure to the United States, and the host government was willing to engage TSA and make a commitment to sustain its aviation security enhancements. Currently, the ASSIST program is working with Haiti. Haiti was selected for ASSIST as a result of past program assessment recommendations. Specifically, in October 2010, the ASSIST team was in the process of conducting a training “needs assessment” in Haiti to determine what was needed to rectify aviation security deficiencies found by the program. In addition to the contact named above, Steve D. Morris, Assistant Director, and Christopher E. Ferencik, Analyst-in-Charge, managed this review. Wendy C. Johnson, Lisa A. Reijula, and Rebecca Kuhlmann Taylor made significant contributions to the work. Thomas F. Lombardi provided legal support. Stanley J. Kostyla and Minette D. Richardson assisted with design, methodology, and data analysis. Linda S. Miller provided assistance in report preparation. Tina Cheng helped develop the report’s graphics.
International flights bound for the United States continue to be targets of terrorist activity, as demonstrated by the October 2010 discovery of explosive devices in air cargo packages bound for the United States from Yemen. The Transportation Security Administration (TSA) is responsible for securing the nation's civil aviation system, which includes ensuring the security of U.S.-bound flights. As requested, GAO evaluated (1) the steps TSA has taken to enhance its foreign airport assessment program since 2007, and any remaining program challenges; (2) TSA's assessment results, including how TSA uses the results to guide future efforts; and (3) what opportunities, if any, exist to enhance the program. To conduct this work, GAO reviewed foreign airport assessment procedures and results, interviewed TSA and foreign aviation security officials, and observed TSA conduct a foreign airport assessment. While these interviews and observations are not generalizable, they provided insights on TSA's program. This is the public version of a sensitive report GAO issued in September 2011. Information that TSA deemed sensitive has been omitted. Since 2007, TSA has taken a number of steps to enhance its foreign airport assessment program, some of which were taken in response to GAO's prior recommendations. For example, TSA updated its policies and methodologies used to guide and prioritize its assessment efforts, and implemented tools to track its annual assessment schedule, airport assessment results, and foreign government progress in resolving security deficiencies previously identified during the assessments. However, challenges remain in gaining access to some foreign airports, developing an automated database to better manage program information, prioritizing and providing training and technical assistance to foreign countries, and expanding the scope of TSA's airport assessments to include all-cargo operations. TSA has various efforts under way to address these challenges. Based on GAO's analysis of TSA's foreign airport assessments conducted from fiscal year 2006 through May 2011, some foreign airports complied with all of TSA's aviation security assessment standards; however, TSA has identified serious noncompliance issues at a number of foreign airports. 
Common areas of noncompliance included weaknesses in airport access controls and passenger and baggage screening. Moreover, GAO's analysis showed variation in airport compliance across geographic regions and individual security standards, among other things. For example, GAO's analysis showed that some number of regions of the world had no airports with egregious noncompliance while other regions had several such airports. However, TSA has not yet taken steps to evaluate its assessment results to identify regional and other trends over time. Developing a mechanism to evaluate its assessment results could help support TSA's priorities for aviation security training and technical assistance, inform its risk management decision making by identifying any trends and security gaps, and target capacity-building efforts. Opportunities also exist for TSA to make additional program improvements in several key areas. For example, the agency has not developed criteria and guidance for determining foreign airport vulnerability ratings. This is particularly important given that these ratings are a key component of how TSA determines each foreign airport's risk level. Providing TSA decision makers with more specific criteria and definitions could provide greater assurance that such determinations are consistent across airports over time. In addition, there are opportunities for TSA to increase program efficiency and effectiveness by, for example, conducting more targeted foreign airport assessments and systematically compiling and analyzing security best practices. Taking such actions could help TSA better focus its assessments to address areas of highest risk, and identify security best practices and technologies that may be applicable to enhancing the security of both foreign and domestic airports. GAO recommends that TSA develop a mechanism to evaluate its assessment results to identify any trends, and target resources and future activities; establish criteria for determining foreign airport vulnerability ratings; and consider the feasibility of conducting more targeted assessments and compiling information on aviation security best practices. DHS agreed with the recommendations. |
During the energy boom of the early 1980s, BLM found that it could not handle the case processing workload associated with a growing number of applications for oil and gas leases. The bureau recognized that to keep up with increased demand, it needed to automate its manual records and case processing activities. Therefore, in the mid-1980s, it began planning to acquire an automated land and mineral case processing system. At that time, BLM estimated that the life-cycle cost of such a system would be about $240 million. In 1988, BLM expanded the scope of the system to include a land information system (LIS). The expanded system was to provide automated information systems and geographic information systems technology capabilities to support other land management functions, such as land use and resource planning. BLM combined the LIS with a project to modernize the bureau's computer and telecommunications equipment, and estimated the total life-cycle cost of this combined project to be $880 million. In 1989, responding to concerns about the high cost, BLM reduced the project's scope and renamed it the ALMRS/Modernization. The project consisted of three major components—the ALMRS IOC, a geographic coordinate database, and the modernization of BLM's computer and telecommunications infrastructure and rehosting of selected management and administrative systems. Estimated life-cycle costs were $575 million (later reduced to $403 million), and BLM planned to complete the entire project by the end of fiscal year 1996.

The ALMRS IOC was to be the flagship of the ALMRS/Modernization and was to replace various manual and ad hoc automated systems. The bureau designated the ALMRS IOC a critical system for (1) automating land and mineral records, (2) supporting case processing activities, including leasing oil and gas reserves and recording valid mining claims, and (3) providing information for land and resource management activities, including timber sales and grazing leases. The system was expected to more efficiently record, maintain, and retrieve land description, ownership, and use information to support BLM, other federal programs, and interested parties. It was to do this by using the new computer and telecommunications equipment that was deployed throughout the bureau, integrating multiple databases into a single geographically referenced database, shortening the time to complete case processing activities, and automating costly manual records.

Despite the promise of ALMRS IOC to significantly improve business operations, repeated problems with its development have prevented deployment. For example, during a user evaluation test in May 1996, problems were reported involving unacceptably slow system performance. Subsequent testing in 1996 uncovered 204 high-priority software problems, which delayed project completion by about a year. In testing conducted in November 1997, BLM encountered workstation failures and slowdowns caused by insufficient workstation memory and by problems discovered in two BLM-developed software applications. Some of these problems had been identified in earlier tests but had not been corrected. Additional testing uncovered software errors that resulted in missing, incorrect, and incomplete data, as well as error files that erroneously contained accurate data. As a result of these problems, BLM postponed the Operational Assessment Test and Evaluation (OAT&E) that had been scheduled for December 1997. The OAT&E was to determine whether ALMRS IOC was ready to be deployed to the first state office.
In October 1998, the OAT&E was conducted and showed that ALMRS IOC was not ready to be deployed because it did not meet requirements. During the test, users reported several problems, including that ALMRS IOC (1) did not support BLM's business activities, (2) was too complex, and (3) significantly impeded worker productivity. For example, one tester reported that entering data for a $10 sale of a commodity, such as gravel, required an hour of data entry using ALMRS IOC, whereas with the existing system, the same transaction would have taken about 10 minutes. Users also reported that system response time problems were severe or catastrophic at all test sites. One user said, "It is ridiculous to spend 2 or 3 hours to enter information in this system, when it takes 30 minutes to an hour to process the information into the legacy system." Finally, users reported that data converted from legacy databases were not accurate, and that validation of the converted legacy data required inordinate effort and time. Because these problems are significant, senior BLM officials have decided that ALMRS IOC is not currently deployable.

According to BLM, it obligated about $411 million on the ALMRS/Modernization project between fiscal years 1983 and 1998, of which more than $67 million was spent to develop ALMRS IOC software. The $67 million does not include ALMRS IOC costs that are part of other cost categories, such as costs for work performed from fiscal years 1983 through 1988, project management, computer and telecommunications hardware and software, data management, and systems operation and maintenance. The reported obligations associated with the major cost categories of the ALMRS/Modernization are summarized in table 1. Senior BLM officials told us that although ALMRS IOC is not currently deployable, BLM has benefited from the ALMRS/Modernization work. BLM has deployed about 6,000 workstations throughout the bureau, provided office automation capabilities, and implemented a national telecommunications network with electronic mail and internet access, which has enhanced communications and enabled BLM to communicate with other federal agencies. BLM's view of the benefits received, however, does not reflect the fact that it has not realized the significant business-related benefits and improvements ALMRS IOC was to provide.

Mr. Chairman, since May 1995 we have reported many problems and risks that threatened the successful development and deployment of the ALMRS/Modernization. Our reports have discussed these issues, their causes, and our recommended corrective actions. BLM has been slow to implement some of our recommendations and has not yet fully implemented others. Following is a summary of the problems, causes, and associated recommendations we have reported.

BLM did not develop a system architecture or formulate a concept of operations before designing and developing the ALMRS/Modernization. A system architecture describes the components of a system, their interrelationships, and the principles and guidelines governing their design and evolution. A concept of operations describes how an organization would use planned information technology to perform its business operations and accomplish its missions. Designing and developing the project without a system architecture and concept of operations unnecessarily increased the risk that the ALMRS/Modernization would not meet the business and information needs of the bureau.
BLM has never had a credible project schedule, reliable milestones, or a critical path to manage the development and deployment of the ALMRS/Modernization. As a result, BLM has not known with any certainty how long it would take and, therefore, how much it would cost to complete the ALMRS/Modernization. Because BLM has not implemented our recommendation to establish a credible project schedule, the ALMRS/Modernization has been driven by self-imposed deadlines. In trying to meet those deadlines, BLM has deferred some tasks until after completion of the project, and has not corrected all problems when it found them because doing so would cause it to miss the self-imposed project deadlines.

BLM faced serious risks because it had not established a robust configuration management program for the ALMRS/Modernization. Configuration management is essential to controlling the composition of and changes to computer and network systems components and documentation. The lack of configuration management increased the risks that system modifications could lead to undesirable consequences, such as causing system failures, endangering system integrity, increasing security risks, and degrading system performance. In response to our recommendation, BLM later developed a configuration management plan and related policies and procedures for the ALMRS/Modernization. We planned to review field office implementation of the configuration management program after completion of the ALMRS IOC; however, we have not done so because the system was not deployed.

BLM incurred serious risks because it had not established a security plan or security architecture for the ALMRS/Modernization. The lack of such security controls increased risks to the confidentiality, integrity, and availability of stored and processed data. BLM recently completed work in response to our recommendation. It performed a risk analysis, developed a system security plan and architecture, identified management and operational controls, and developed disaster and recovery plan procedures. As with configuration management, we planned to review field office implementation of the security program after completion of the ALMRS IOC, but have not done so because the system was not deployed.

BLM invited serious risks because it had not established transition plans to guide the incorporation of ALMRS IOC into its daily operations. Deploying a major information system that people will use to do their jobs requires careful planning to avoid business and operational problems. Without transition plans, BLM increased the risk that using ALMRS IOC would disrupt, rather than facilitate, its work processes and ability to conduct land and mineral management business. In response to our recommendation, BLM developed transition plans; however, the plans were not adequate. They did not outline needed changes in organizational roles, responsibilities, and interrelationships, or address issues such as how state and subordinate offices would deal with oil and gas, mining, and solid mineral business process changes that would result from implementing ALMRS IOC.

BLM faced serious risks because it had not established operations and maintenance plans. The lack of plans increased the risk that the bureau would not meet its automation objectives or the daily needs of its offices. BLM developed operations and maintenance plans in response to our recommendation.
We expected to review field office implementation of the operations and maintenance plans after completion of the ALMRS IOC; however, we have not done so because the system was not deployed.

BLM invited serious risks because it planned to stress test only the ALMRS IOC component—state and district offices, ALMRS IOC servers, terminals, and workstations. This increased the risk that BLM would deploy the ALMRS IOC nationwide without knowing whether the ALMRS/Modernization—ALMRS IOC, office automation, e-mail, administrative systems, and various departmental, state, and district software applications in a networked environment—would perform as intended during peak workloads. BLM agreed to fully stress test the entire ALMRS/Modernization before deploying the ALMRS IOC component throughout the bureau.

BLM did not develop a Year 2000 contingency plan to ensure that critical legacy systems could operate after January 1, 2000, if the ALMRS IOC could not be delivered in 1999. We recommended that BLM develop a Year 2000 contingency plan to ensure continued use of those critical legacy systems ALMRS IOC was to replace. BLM implemented this recommendation and began executing the plan in 1998, when it became clear that ALMRS IOC would not be fully implemented by the end of 1999.

At this point, BLM has made an enormous investment in software that does not meet its business needs. At the same time, it has not adopted information technology management practices required by recent legislation or suggested by industry best practices. Because of its large investment, BLM should analyze ALMRS IOC to determine whether the software can be cost-beneficially modified to meet the bureau's needs. In addition, to reduce the risk that future information technology efforts will result in a similar outcome, BLM should assess its investment management practices and its systems acquisition capabilities. Until these assessments and subsequent improvement actions are taken, BLM will not be adequately prepared to undertake any sizable system acquisition.

We believe that since BLM has invested over $67 million to develop the ALMRS IOC software, the bureau should thoroughly analyze the software to determine whether it can be modified to meet users' needs and at what cost. This analysis should be part of an overall effort to identify and assess all viable alternatives, including (1) using or modifying ALMRS IOC software, (2) modifying or evolving existing land and recordation systems, (3) acquiring commercial off-the-shelf software, or (4) developing new systems. The alternatives analysis should clearly identify the risks, costs, and benefits of each alternative, and should be performed only after BLM is assured that it has fully verified its current business requirements. In this regard, senior BLM officials said they are performing an analysis to determine where ALMRS IOC failed to meet users' expectations and critical business requirements. According to the acting land and resources information systems program manager, BLM is beginning to develop plans for future information technology modernization. These plans are to identify alternatives to deploying ALMRS IOC and evaluate those alternatives based on cost, functionality, and return on investment, as illustrated in the sketch below. BLM also plans to document its current and planned business processes and systems architectures as part of this effort. While such planning is necessary, BLM also needs to assess its investment management practices to help avoid future problems.
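One generic way to structure such an alternatives analysis—this is not BLM's or GAO's method, simply a common weighted-scoring technique—is sketched below in Python. Every alternative name, criterion score, and weight is a hypothetical placeholder for illustration; in practice the inputs would come from the verified business requirements and cost data discussed above.

```python
# Illustrative weighted-scoring comparison of the four alternatives named above.
# All criterion scores (1-5, higher is better) and weights are hypothetical
# placeholders, not BLM data.
ALTERNATIVES = {
    "use or modify ALMRS IOC software":  {"cost": 2, "functionality": 3, "roi": 2, "risk": 2},
    "modify or evolve existing systems": {"cost": 4, "functionality": 2, "roi": 3, "risk": 4},
    "acquire off-the-shelf software":    {"cost": 3, "functionality": 3, "roi": 4, "risk": 3},
    "develop new systems":               {"cost": 1, "functionality": 5, "roi": 3, "risk": 1},
}
WEIGHTS = {"cost": 0.3, "functionality": 0.3, "roi": 0.2, "risk": 0.2}  # weights sum to 1

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores into a single weighted total."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Rank the alternatives from highest to lowest weighted score.
for name, scores in sorted(ALTERNATIVES.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```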
The Clinger-Cohen Act of 1996 seeks to maximize the return on investments in information systems by requiring agencies to institute sound capital investment decision-making. Under the act, agencies must design and implement a process for maximizing the value and assessing and managing the risks of information technology acquisitions. An information technology investment process is an integrated approach that provides for data-driven selection, control, and evaluation of information technology investments. The investment process consists of three phases. The first phase involves selecting investments using quantitative and qualitative criteria for comparing and setting priorities for information technology projects. The second phase includes monitoring and controlling selected projects through progress reviews at key milestones to compare the expected costs, risks encountered, and performance benefits realized to date. These progress reviews are essential for senior managers to decide whether to continue, accelerate, modify, or terminate a selected project. The third phase involves a postimplementation review or evaluation of fully implemented projects to compare actuals against estimates, assess performance, and identify areas where future decision-making can be improved.

According to senior BLM officials, the bureau has established an Information Technology Investment Board to provide support for its capital planning processes. It intends to apply more rigorous, structured processes to analyze its information technology investments and to select, control, and evaluate information technology investment alternatives. Until such processes are fully in place, the bureau cannot be assured that future investments will be properly selected, managed, and evaluated using sound investment criteria to provide effective support for the bureau's mission and goals.

Further, to ensure that information technology investment processes are carried out adequately, the Clinger-Cohen Act also requires agencies to assess the knowledge and skills of their executive and management staff to meet their information resources management requirements, and to take steps to rectify any deficiencies. The Software Engineering Institute (SEI) has identified the need for organizations to focus on information resources management capabilities. Organizations should improve their capabilities using a process to characterize the maturity of their workforce practices, guide a program of workforce development, set priorities for immediate actions, and establish a culture of software engineering excellence. According to senior BLM officials, the bureau examined the kinds of skills that its field office computer specialists had and identified the skills they would need. However, the officials recognize that this was not the same as the more comprehensive assessment suggested by SEI. Such assessments are needed to better identify and manage information technology investments. Consequently, the bureau should evaluate and, where needed, enhance the knowledge and skills of its staff to help ensure that the investment management processes it puts in place can be effectively carried out by its information resources management organization.

Finally, the Clinger-Cohen Act requires agencies to develop, maintain, and facilitate the implementation of a sound and integrated information technology architecture.
An information technology architecture provides a comprehensive blueprint that systematically details the breadth and depth of an organization's mission-based mode of operation. An architecture provides details first in logical terms, such as defining business functions, providing high-level descriptions of information systems and their interrelationships, and specifying information flows; and second in technical terms, such as specifying hardware, software, data, communications, security, and performance characteristics. By enforcing an information technology architecture to guide and constrain a modernization program, an agency can preclude inconsistent systems design and development decisions, and the resulting suboptimal performance and excess cost. As I discussed earlier, BLM did not develop a system architecture before designing and developing the ALMRS/Modernization. This is a key reason why ALMRS IOC did not meet the bureau's business needs. BLM still has not developed an architecture that documents its business processes and the technology and systems that support them. BLM needs to develop an information technology architecture to guide its future investment plans.

Research by SEI has shown that defined and repeatable processes for managing software acquisition are critical to an organization's ability to consistently deliver high-quality information systems on time and within budget. These critical management processes include project planning, requirements management, software project tracking and oversight, software quality assurance, software configuration management, and change control management. To assist organizations in evaluating and enhancing systems acquisition capabilities and processes, SEI has developed models for conducting software process assessments and software capability evaluations to determine the state of their capabilities and identify areas requiring improvement. BLM also needs an independent assessment of its systems acquisition capabilities, and must ensure that it uses sound systems acquisition processes. As I discussed earlier, BLM did not develop several key management controls for the ALMRS/Modernization: it did not develop a credible project schedule or adequate transition plans, and the lack of a configuration management program, security plan and architecture, and operations and maintenance plans further increased BLM's risks. These problems indicate the need for BLM to ensure that the deficiencies in its systems acquisition capabilities and processes are acknowledged and corrected. Until such assessments are completed and corrective action taken, BLM should not undertake any sizable systems acquisition or development efforts.

Mr. Chairman, that concludes my statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have at this time.
| Pursuant to a congressional request, GAO discussed the Bureau of Land Management's (BLM) Automated Land and Mineral Record System project, also known as the ALMRS/Modernization, focusing on: (1) the history of the project; (2) the results of GAO's reviews, including the key reasons for problems; and (3) where GAO believes BLM should go from here. GAO noted that: (1) BLM spent over 15 years and estimates that it invested about $411 million planning and developing the ALMRS/Modernization, only to have the major software component—known as the ALMRS Initial Operating Capacity (IOC)—fail; (2) as a result of that failure, the bureau decided not to deploy ALMRS IOC at this time; (3) GAO has previously reported on the significant problems and risks that BLM has encountered; (4) GAO has made many recommendations to reduce those risks; however, BLM has been slow to implement some recommendations and has not yet fully implemented others; (5) BLM now needs to determine whether it can salvage any of the more than $67-million reported investment in ALMRS IOC software, by analyzing the software to determine if it can be cost-beneficially modified to meet BLM's needs; and (6) in addition, to reduce the risk that future efforts will result in similar failures, BLM should assess its information technology investment practices and systems acquisition capabilities. |
Over the last 10 years, DOD prime contract and total subcontract dollar awards have increased. From 1993 to 2002, DOD prime contract dollars increased almost 15 percent, from $136.8 billion to $157.1 billion. As shown in table 1, total subcontract dollars awarded by DOD contractors increased more than 40 percent, from $53.0 billion to $75.5 billion. In addition, small businesses have generally received increasing dollar amounts from DOD contractors over the 10-year period—from $19.9 billion in fiscal year 1993 to $25.8 billion in fiscal year 2002. However, as shown in figure 1, small businesses' share of total subcontract dollars from DOD contractors has decreased in recent years. The share that small businesses received has ranged from a high of about 43 percent ($21.7 billion) in fiscal year 1995 to a low of about 34 percent ($25.8 billion) in fiscal year 2002.

In order to foster the participation of small businesses in subcontracting, the Federal Acquisition Regulation (FAR) requires DOD contractors to have subcontracting plans for most contracts of more than $500,000 ($1 million for construction contracts). These plans document what actions the contractor will take to provide various types of small businesses with the maximum practicable opportunities to participate in subcontracting. See appendix II for a description of small business categories. DOD contractors are to provide semiannual reports to DCMA on their small business achievements for each contract that has a subcontracting plan, as well as semiannual summary reports that encompass all their contracts with a particular agency.

The National Defense Authorization Act for Fiscal Years 1990 and 1991 authorized DOD to establish the Test Program for Negotiation of Comprehensive Small Business Subcontracting Plans (Test Program), which allowed the negotiation, administration, and reporting of subcontracting plans on a plant, division, or companywide basis rather than a plan for each individual contract. The purpose of the Test Program is to increase subcontracting opportunities for various types of small businesses while reducing the administrative burdens on contractors. The companies that participated in the Test Program in fiscal year 2002 accounted for about 41 percent of DOD's subcontracting activity in that same fiscal year. The Office of the Under Secretary of Defense, Office of Small and Disadvantaged Business Utilization, is responsible for the overall assessment of the Test Program. Originally scheduled for fiscal years 1991 through 1992, the Test Program has been extended several times and is scheduled to end September 30, 2005. Under the Test Program, small business goals are negotiated annually, whereas for individual plans, goals are generally negotiated once for the life of the contract. As of fiscal year 2003, 15 contractors have comprehensive plans under the Test Program. DCMA is responsible for reviewing DOD contractors' subcontracting plans and for monitoring and assessing contractor performance to determine how well contractors are implementing their plans and meeting their small business goals. DCMA is also involved in annually negotiating goals with contractors participating in the Test Program.
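As a quick arithmetic check on the percentages cited above, the following minimal Python sketch recomputes the growth rates and the small business share from the dollar figures in the text. The variable names are ours; all amounts come directly from the paragraphs above.

```python
# Dollar figures (in billions) cited above, fiscal years 1993-2002.
prime_1993, prime_2002 = 136.8, 157.1  # DOD prime contract dollars
sub_1993, sub_2002 = 53.0, 75.5        # total subcontract dollars awarded by DOD contractors
small_2002 = 25.8                      # subcontract dollars awarded to small businesses

def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

print(f"Prime contract growth, 1993-2002: {pct_change(prime_1993, prime_2002):.1f}%")  # ~14.8% ("almost 15 percent")
print(f"Total subcontract growth:         {pct_change(sub_1993, sub_2002):.1f}%")      # ~42.5% ("more than 40 percent")
print(f"Small business share, FY 2002:    {small_2002 / sub_2002 * 100:.1f}%")         # ~34.2% (the cited low)
```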
Since 1982, DOD has required prime contractors to report quarterly to DOD's Office of Program Acquisition and International Contracting on contracts exceeding $500,000 when the contractor or its first-tier subcontractor will perform any part of the contract that exceeds $100,000 outside the U.S., unless a foreign place of performance (1) is the principal place of performance and (2) is identified in the firm's offer. First-tier subcontractors that award subcontracts in excess of $100,000 to be performed outside the U.S. are also subject to the reporting requirement. Reported information is to include the type of supply or service provided, the principal place of subcontract performance, and the dollar value of the transaction. The information is used as part of DOD's efforts to monitor foreign procurements and assess matters related to defense trade balances and domestic industrial base capabilities. DOD's Office of Program Acquisition and International Contracting reports to the Director of Defense Procurement and Acquisition Policy.

Although the Test Program has been in existence since fiscal year 1991, DOD does not know if it is achieving its intended objectives to provide more small business subcontracting opportunities and to reduce administrative burden for contractors. The Office of the Under Secretary of Defense, Office of Small and Disadvantaged Business Utilization, which is to report the results of the Test Program in December 2005, shortly after the program is set to expire, commissioned a preliminary study of the program in 2002. The data assessing the merits of the program were never formally released, but the resulting preliminary report had a number of recommendations. DOD recognizes that it needs to establish metrics and other criteria for measuring program results in meeting the intended objectives. We found that DOD and contractor officials have various views on the strengths and weaknesses of the program.

To assess the Test Program, DOD commissioned a preliminary review of the program by the Logistics Management Institute (LMI). LMI noted in its draft report that, in terms of achievements for subcontracting to small businesses, the Test Program results improved impressively between 1991 and 1996—from small businesses receiving 12 percent of total subcontracts to receiving about 36 percent—but declined to about 29 percent by 2000. LMI attributed the decline to factors external to the program—some of which we discuss later. Most of its recommendations dealt with addressing ways of improving small business achievements, but the report also included program-specific recommendations, such as increasing visibility of subcontracting activity at the corporation's division and program level, where feasible; deducting directed-source procurements from subcontracting goals; allowing subcontracting plan renegotiations to reflect major contract awards that occur after negotiations; establishing annual meetings of program participants and DCMA to allow exchange of ideas, best practices, and lessons learned; permitting removal of poor-performing participants after appropriate notice; requiring participants to track and annually report administrative savings and costs and the results of their outreach activities; and limiting enrollment to 20 participants. While the final report has not been issued, DOD officials said they have taken into consideration a number of the recommendations from the study by LMI.
For example, DCMA has taken steps to improve oversight of contractor performance and hold contractors more accountable for achieving their subcontracting goals, and DOD has chartered a council to share Test Program knowledge and experience. In addition, some DOD program offices require contractors to report on their subcontracting activity at the program level to increase the visibility of subcontracting to small businesses. Despite DOD's attempts to assess the program, it still does not know whether the Test Program is affecting subcontracting opportunities for small businesses and reducing administrative burden for the contractors. DOD, through DCMA, is to report on each participating contractor's performance by December 15, 2005, by comparing the contractor's performance under the program with its performance for the 3 fiscal years before acceptance into the program. DOD officials told us they are uncertain how they will measure contractors' performance to meet their reporting requirement and assess trends over time. This uncertainty stems in part from the lack of original program participants to serve as a baseline for evaluating performance and from changes in company compositions. Further, these officials noted that mergers and acquisitions can greatly change company compositions and business bases from year to year, making trend determinations difficult. DCMA officials told us they plan to hire a contractor to help them complete their review of the overall results of the Test Program and will use the results of the LMI study as a tool to help develop Test Program metrics.

DCMA and contractor officials we interviewed gave varied opinions—both positive and negative—on the Test Program. Some said that while they were uncertain about its increasing small business opportunities, they thought participating in the Test Program helped increase the visibility of the results of the small business program companywide or divisionwide. Others said the comprehensive plan sometimes resulted in lost visibility of individual contract performance and reduced accountability at the program level. In fact, one contractor recently stopped participating in the program because of the lost ability to monitor individual contract performance. DCMA and contractor officials we interviewed said they were uncertain if there had been a reduction of administrative burden since, for example, under the Test Program contractors were still required to prepare a detailed plan, negotiate small business goals each year, and submit performance data semiannually. In addition, certain large DOD programs requested that contractors report small business data. Many agreed that, regardless of what type of plan contractors used, the success of the small business program relies on contractor management's commitment to meeting small business goals. DCMA and contractor officials also stated that contractor management must have the ability to monitor company performance on those goals.

Between fiscal years 1999 and 2003, the DOD contractors we reviewed had varied success in meeting their small business goals. DOD and contractor officials provided several reasons for the mixed success of the subcontracting program, but DOD has not formally studied the factors that may encourage or discourage the participation of small businesses in DOD subcontracts. In the past 5 years, the 15 DOD contractors participating in the Test Program had varying success in meeting the small business goals established in their subcontracting plans.
Overall, the contractors in the Test Program were not consistent from year to year in meeting their goals for the traditional small business categories. For example, in at least 3 of the past 5 years, 11 of the 15 contractors met their overall small business goals, seven contractors met their goals for small disadvantaged businesses, and six contractors met their goals for women-owned small businesses. DOD and contractor officials noted that a changing acquisition environment has added to the challenge of meeting their small business goals. Changes included (1) the increased breadth, scope, and complexity of DOD prime contracts that require, among other things, teaming arrangements with other, typically large, contractors and (2) prime contractors' strategic sourcing decisions to leverage their purchasing power by reducing the number of their suppliers, including small businesses. Contractor officials also said that the relatively limited supply of qualified small businesses that could provide the needed goods and services increases the difficulty of meeting small business goals. DOD has not studied to what degree the changing acquisition environment or other factors contribute to the success or failure of its small business subcontracting program.

Contractor and DCMA officials report that the breadth, scope, and complexity of DOD prime contracts for weapons systems have increased over the years. According to officials, this has had several consequences that have limited the opportunities for small businesses. First, prime contractors are increasingly relying on teaming arrangements to win contracts. Their teaming partners, typically large businesses, receive a sizable portion of the first-tier subcontracts. For example, under a major defense contract, the contractor awarded about 56 percent of its total subcontract dollars to its teaming partners, significantly reducing the opportunities for small businesses to win first-tier subcontracts. Also, prime contractors are increasingly serving as systems integrators instead of systems manufacturers and are buying major assemblies rather than parts and components. Systems integrators are often responsible for the development, management, and eventual delivery of a large weapon system. Consequently, as in the case of teaming arrangements, systems integrators often use large businesses as first-tier subcontractors. Contractor officials said that although small businesses may still be receiving contract dollars through second- or lower-tier subcontracts, contractors could count only their first-tier subcontract awards toward their small business goals.

In addition, many contractors have made the strategic-sourcing decision to reduce the number of suppliers in their supplier base. Contractors report reducing their supplier bases by as much as 50 percent over the past 5 years in a move to leverage their purchases, cut costs, and improve performance to remain competitive in the world market. Contractors also noted that by reducing the number of contractors, they often relied on larger corporatewide contracts, which could also affect their small business suppliers. For example, officials of one contractor noted that when it went to a single information systems contractor, it no longer contracted with a number of small firms. Finally, contractors report difficulty in finding qualified small businesses to provide the goods and services needed.
Contractor officials said this is particularly true for small business programs with certification requirements—such as the programs for small disadvantaged businesses and Historically Underutilized Business Zone (HUBZone) businesses—and for very recent programs, such as the service-disabled veterans program. The Small Business Administration has certified significantly fewer small disadvantaged businesses and HUBZone firms than hoped. Consequently, contractors often have difficulty meeting small disadvantaged business goals, and few have met their HUBZone goals. Further, according to DCMA officials responsible for on-site monitoring of subcontracting plans, qualified businesses in different small business categories usually compete for the same type of work. Consequently, according to these DCMA officials, contractors have difficulty meeting goals for all small business types and often report wide fluctuations in subcontracting achievements among the groups, depending on which ones win contracts in a given year. The categories of small business that DOD uses include small businesses, small disadvantaged businesses, women-owned small businesses, veteran-owned and service-disabled veteran-owned small businesses, HUBZone businesses, Historically Black Colleges and Universities, and Minority Institutions.

Since 2002, DCMA has taken steps to help improve its oversight of DOD's small business program. These steps include issuing an updated policy for monitoring contractors' small business subcontracting programs, issuing new guidance to help DCMA personnel implement small business program requirements, and developing new criteria for rating contractor performance. Previously, DCMA, through its small business specialists, carried out its small business subcontracting program responsibilities through (1) contractor orientation and training, (2) small business outreach and "matchmaking," (3) Test Program review and negotiation, and (4) contractor performance evaluations. Training primarily involved informing the contractors and other DCMA personnel of contractor responsibilities and small business program requirements. Outreach and "matchmaking" activities included attending or arranging small business conferences and open houses and identifying qualified small businesses to contractors. DCMA policies and procedures also required small business specialists to review contractors' subcontracting performance and perform two kinds of reviews: annual reviews of Test Program participants and reviews of contractor subcontract performance.

Test Program plan reviews—annually assess each contractor participating in the Test Program. The review includes determining how well the contractor is performing under the plan, including whether it met its goals for the year. However, these reviews do not result in an overall rating.

Contractors' subcontract-performance reviews—assess all DOD contractor facilities with subcontracting plans, whether comprehensive or individual. In general, DCMA reviews the DOD contractors it is responsible for monitoring on an annual basis. The review assesses contractor policies and procedures, outreach activities, record keeping and reporting procedures, training that contractor personnel received to implement their small business subcontracting program, and contractor performance on meeting small business goals.
DCMA assigned ratings on a 5-point scale from "outstanding" to "unsatisfactory." DCMA small business specialists said that because the rating criteria were loosely defined, contractors could receive different ratings depending on the interpretation of the small business specialist. For example, in fiscal year 2001, one company's performance received a "highly successful" rating even though it had not met any of its three long-standing small business goals for the period of the review. In fiscal year 2002, DCMA rated another company's performance as "unacceptable" although it had demonstrated similar performance on its goals.

DCMA's new policy and guidance emphasize the agency's oversight function. In July 2003, DCMA published an updated policy for monitoring contractors' small business subcontracting programs. While DCMA continues to conduct its reviews under its revised policy, it has created more specific criteria for determining contractor performance. The criteria particularly emphasize contractors' small business goal achievements and contractor accountability, including for the contractors participating in the Test Program. For example, under DCMA's new rating criteria, to receive a "highly successful" performance rating, the contractor must meet three long-standing small business goals and at least one of the newer goals (e.g., veteran-owned small business), as well as demonstrate significant success in other initiatives identified in its subcontracting plan. In September 2003, DCMA published a new procedural guide to assist DCMA small business specialists in implementing the small business program. For example, the guidance provides factors, such as a contractor's past performance, that should be considered when negotiating goals with Test Program participants. DCMA continues to assess its oversight of the Test Program and whether further changes need to be made.

Other steps DCMA has taken that allow the more efficient use of its resources include establishing a risk-based approach to its reviews of contractors and limiting its training and outreach functions. The risk-based approach allows DCMA to skip a review of a contractor for 1 year if the contractor's previous year's rating was "outstanding," there were no significant changes in its contracting activity, and there were no significant personnel changes affecting the contractor's small business program. In addition, according to DCMA officials, DCMA is significantly limiting its training and outreach functions on the basis that other organizations, such as the Small Business Administration and Procurement Technical Assistance Centers, already provide these services.

We could not determine the extent of subcontracting to firms performing outside the U.S. because of inconsistent reporting of subcontracting activities by contractors and poor database management by DOD. According to the contractors in our review, subcontracts to firms performing outside the U.S. generally accounted for a small percentage of their total subcontract dollars. Further, the contractors stated that most of the dollars to firms performing outside the U.S. were awarded on a noncompetitive basis. These contractors reported several reasons for awarding subcontracts to firms performing outside the U.S. in fiscal year 2002. We could not assess the full extent to which defense contractors subcontract with firms performing outside the U.S.
In November 1998, we reported that DOD's Office of Program Acquisition and International Contracting did not have safeguards for ensuring the completeness and accuracy of its database of subcontracts to firms performing outside the U.S. At that time, we found instances in which DOD contractors did not report their subcontracts to firms performing outside the U.S. in accordance with DOD's reporting requirements because they were unaware of the reporting requirements or misunderstood the criteria for reporting this type of subcontract. We also found that DOD lacked standards and procedures for managing this database. In October 2003, during our review, the Director of Defense Procurement and Acquisition Policy—through the Office of Program Acquisition and International Contracting—began to take the following actions to address contractor compliance: sent letters to the top 100 parent companies of DOD contractors to remind them of DOD reporting requirements for subcontracts to firms performing outside the U.S. and requested that they ensure all their subsidiaries also comply with this reporting requirement; sent a memorandum to the Senior Acquisition Executives of the Military Departments and the Defense Agencies requesting that they remind their contracting officers of the reporting requirement; engaged in outreach efforts with government and industry personnel to help ensure this effort to improve contractor compliance was fully communicated; sent a memorandum to DCMA requesting its assistance in periodically verifying that contractors are complying with the reporting requirements; and clarified reporting requirements for subcontracts to firms performing outside the U.S. The Office of Program Acquisition and International Contracting intends to perform periodic verification of reporting of subcontracts to firms performing outside the U.S. and is in the process of establishing those procedures. Because no action had been taken to improve data reliability until recently, we could not rely on the available data to determine the extent to which DOD contractors were subcontracting with firms outside the U.S.

Contractors at four of the five locations we visited spent between approximately 2 and 6 percent of their total DOD subcontracting dollars in fiscal year 2002 on subcontracts to firms performing outside the U.S. The fifth contractor subcontracted about 18 percent of its subcontracting dollars with firms performing outside the U.S. in fiscal year 2002 due to a teaming arrangement for a large defense contract it was awarded. According to a contractor official, this percentage would more typically be around 10 percent. At the five contractor locations, the total subcontract dollars to firms performing outside the U.S. ranged between approximately $29 million and $1.9 billion in fiscal year 2002. These subcontracts were for items such as parts for military systems, communication equipment for satellites, components for military aircraft, and sensors for satellite weather forecasting. While one contractor reported awarding most of its subcontract dollars to firms performing outside the U.S. on a competitive basis in fiscal year 2002, four contractors reported awarding the majority of their subcontract dollars noncompetitively. Consequently, small businesses generally did not have the opportunity to compete for these types of subcontracts. Contractor officials said that even when their subcontracts with firms performing outside the U.S.
were competed, they were not necessarily for the types of products that small businesses had the expertise or technology to provide. For example, one contractor competitively awarded a contract for an amplifier used in communication equipment to a firm outside the U.S. The contractor did not identify or solicit small businesses in the competition because of the unique technology and expertise required for that particular amplifier. Contractor officials said the reasons for the awards to firms performing outside the U.S. in fiscal year 2002 included the following:

Directed source—Contractor officials stated that some subcontracts were awarded to companies outside the U.S. because DOD directed them to subcontract with a certain supplier. For example, a prime contractor was directed by DOD to award a subcontract to a company outside the U.S. to produce a sensor for a weather forecasting satellite because the company previously had a contract directly with the U.S. Government.

Offset agreements—The contractors said that to sell military goods and services to other countries, they often have to form agreements with foreign countries that necessitate subcontracting with foreign firms to some degree. For example, one U.S. prime contractor awarded a subcontract to a firm in a foreign country because a prior offset agreement required the contractor to purchase about $1 billion in goods and services from firms in that country. The $32.3 million subcontract was for a structural frame for the troop ramp and an air deflector for the C-17 transport aircraft.

International agreements—Sometimes subcontracts are awarded to companies outside the U.S. because of international agreements between the U.S. and foreign countries. For instance, a contractor awarded a series of subcontracts to firms performing outside the U.S. based on an international agreement in which a 13-nation consortium contributed to the development of components for a missile to be used by these nations. Some of the components produced by the various countries included control systems, rocket motors, and guidance systems.

Teaming arrangements—Two or more contractors form a partnership or joint venture to act as a potential prime contractor, or a potential prime contractor agrees with one or more other contractors to have them act as its subcontractors, under a specified Government contract or acquisition program.

Product specialization—Contractor officials said it was very expensive to develop and change suppliers of specialized parts; therefore, DOD contractors typically continue to award contracts to the same supplier that originally supplied the products. That supplier may be located outside the U.S. For instance, one contractor awarded a subcontract to such a supplier because it was the only one that had a specification drawing for the production of pedestals for a radar system. In another case, a DOD contractor awarded a subcontract to a company outside the U.S. because it was the only supplier that already had the tools and the expertise to manufacture and produce a horizontal stabilizer for the F-5 aircraft.

Because of its large contracting operations, DOD is critical to the success of federal programs designed to provide opportunities for small businesses. DOD has recognized the importance of its role in federal contracting; has taken limited steps to help improve opportunities for small businesses, such as the Test Program; and has revised DCMA guidance to hold contractors more accountable for their small business goals.
However, after 12 years of implementing the Test Program, DOD does not know whether these initiatives are effective. While DOD has collected data over the years, it has not established metrics to evaluate the effectiveness of the Test Program. As a result, there is no systematic way of determining whether the program is meeting its intended objectives and whether further changes need to be made. In addition, the reliability of the data submitted by contractors on their subcontracts to firms performing outside the U.S. remains a concern. DOD has only recently started to take action on improving its data collection and has yet to establish procedures for validating the information. Without accurate and complete information on subcontracts to firms performing outside the U.S., DOD cannot make informed decisions on industrial base issues.

We are making the following two recommendations to the Secretary of Defense: In order to evaluate the effectiveness of the Test Program, we recommend the Secretary of Defense direct the Office of the Under Secretary of Defense, Office of Small and Disadvantaged Business Utilization, to develop metrics to assess the overall results of its Test Program. Also, to ensure DOD has the information it needs to accurately determine the number and dollar amount of subcontracts to firms performing outside the U.S., we recommend the Secretary of Defense direct DOD's Office of Program Acquisition and International Contracting to establish procedures to improve the quality of the information in its database of subcontracts performed outside the U.S.

DOD provided us with written comments on a draft of this report. DOD concurred with our findings and recommendations and noted some additional actions it took or is taking to address our recommendations. We incorporated these actions in this report where appropriate. DOD's comments appear in appendix III. As requested by your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that point, copies of this report will be sent to interested congressional committees and the Secretary of Defense. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841, or Hilary Sullivan at (214) 777-5652, if you have any questions regarding this report. Major contributors to this report were Vijay Barnabas, David Bennett, Frederick Day, Michael Gorin, Gary Middleton, Pauline Reaves, Sylvia Schatz, and Suzanne Sterling.

To determine DOD's assessment of the Test Program's effectiveness, we reviewed legislation, regulations, directives, and policies regarding the program. We also reviewed a July 2002 study conducted by LMI for DOD that examined the overall results of the Test Program. In addition, we met with officials at DCMA headquarters, district, and field locations as well as officials at selected contractor locations to discuss their views on the advantages and disadvantages of the Test Program. To determine the performance of contractors participating in the Test Program, we collected data on the 15 DOD contractors (i.e., parent companies or their subsidiaries) participating in the Test Program.
More specifically, we obtained 5 years of small business goal and performance data, for fiscal years 1999 to 2003, on the extent to which the contractors were meeting their small business goals from DCMA headquarters and district officials as well as contractor officials. The contractors in the Test Program as of fiscal year 2003 are the following: The Boeing Company; General Electric Aircraft Engines; Harris Corporation, Government Communications Systems Division; Lockheed Martin Aeronautics Company; Lockheed Martin Simulation, Training & Support (formerly Lockheed Martin Missiles & Fire Control); Lockheed Martin Space Systems Company; Northrop Grumman Air Combat Systems; Northrop Grumman Electronic Systems and Sensors; Raytheon Company; Textron Systems, a Textron Company; Bell Helicopter Textron Inc.; United Technologies Corp, Hamilton Sundstrand Division; United Technologies Corp, Pratt & Whitney Government Division; and United Technologies Corp, Sikorsky Aircraft Division.

To determine DCMA's oversight of contractors' small business subcontracting efforts, we met with officials at DCMA headquarters, district, and field locations as well as officials at selected contractor locations to identify and discuss DCMA's role. We also gathered information on the updated policy and guides for monitoring contractors' small business subcontracting programs and the new criteria for rating contractor performance. We limited our review of internal controls to reviewing DCMA's plans, methods, and procedures used to meet its small business subcontracting program mission, goals, and objectives. To determine the reasons for and extent of contractors' subcontracting with businesses performing outside the U.S., we discussed the contractors' rationale with officials at the five selected contractor locations. We also gathered information for the most recent year for which data were available, fiscal year 2002, from contractor officials at the same five locations. We did not independently verify these data. In addition, we reviewed the steps DOD had taken to address past database deficiencies and discussed recent changes at DOD's Office of Program Acquisition and International Contracting in its management of the database of subcontracts performed by contractors outside the U.S. We conducted our review between March 2003 and March 2004 in accordance with generally accepted government auditing standards.

Appendix II: Small Business Concern Categories

A small business concern is one that is independently owned and operated and is not dominant in its field of operation. 15 U.S.C. 632(a)(1). A small business concern is further defined as a business entity that (1) is organized for profit; (2) has a place of business located in the U.S.; (3) operates primarily within the U.S. or makes a significant contribution to the U.S. economy through tax payments or use of American products, materials, or labor; and (4) meets the size standard for its primary business activity or industry as designated by the applicable North American Industry Classification System (NAICS) codes. 13 C.F.R. 121.101(a); 121.105(a); FAR 19.001.

A small disadvantaged business is a small business concern that is 51% or more owned by one or more socially and economically disadvantaged persons who manage and operate the concern. 15 U.S.C. 637(d)(3)(C). Black Americans, Hispanic Americans, Asian Pacific Americans, Subcontinent Asian Americans, and Native Americans are presumed by regulation to be socially disadvantaged. 13 C.F.R. 124.103(b).
Other individuals can qualify if they show by a “preponderance of the evidence” that they are socially disadvantaged. 13 C.F.R. 124.103(c). A small disadvantaged business must also (1) meet SBA’s established size standard for its main industry and (2) have principals who have a net worth, excluding the value of the business and personal home, of less than $750,000. 13 C.F.R. 124.1002(b), (c). A woman-owned business is a small business concern that is 51% owned by one or more women who manage and operate the concern. 15 U.S.C. 637(d)(3)(D); FAR 2.101. A veteran-owned business is a small business concern that is 51% owned by one or more veterans who manage and operate the concern. 15 U.S.C. 637(d)(3)(E); FAR 2.101. A service-disabled veteran-owned business is a small business concern that is 51% owned by one or more service-disabled veterans who manage and operate the concern. 15 U.S.C. 632(q)(2); FAR 2.101. A HUBZone small business is a small business concern that (1) meets SBA’s size standards for its primary industry classification; (2) is owned and controlled by one or more U.S. citizens; (3) has a principal office located in a HUBZone (a historically underutilized business zone, which is in an area located within one or more qualified census tracts, qualified nonmetropolitan counties, or lands within the external boundaries of an Indian reservation); and (4) has at least 35 percent of its employees residing in a HUBZone. 15 U.S.C. 632(p)(3)-(5); 13 C.F.R. 126.103; 126.203. A historically black college or university means an institution determined by the Secretary of Education to meet the requirements of 34 C.F.R. 608.2. FAR 2.101. A minority institution is an institution of higher education whose enrollment of a single minority or a combination of minorities (American Indian, Alaskan Native, Black, and Hispanic—Mexican, Puerto Rican, Cuban, and Central or South American) exceeds 50 percent of the total enrollment. FAR 2.101; 20 U.S.C. 1067k(2)-(3).
Joint Strike Fighter Acquisition: Cooperative Program Needs Greater Oversight to Ensure Goals Are Met. GAO-03-775. Washington, D.C.: July 21, 2003. Sourcing and Acquisition: Challenges Facing the Department of Defense. GAO-03-574T. Washington, D.C.: March 19, 2003. Small Business Contracting: Concerns About the Administration’s Plan to Address Contract Bundling Issues. GAO-03-559T. Washington, D.C.: March 18, 2003. Small Business Administration: The Commercial Marketing Representative Role Needs to Be Strategically Planned and Assessed. GAO-03-54. Washington, D.C.: November 1, 2002. Best Practices: Taking a Strategic Approach Could Improve DOD’s Acquisition of Services. GAO-02-230. Washington, D.C.: January 18, 2002. Small Business Subcontracting Report Validation Can Be Improved. GAO-02-166R. Washington, D.C.: December 13, 2001. Small Business: More Transparency Needed in Prime Contract Goal Program. GAO-01-551. Washington, D.C.: August 1, 2001. Small Business: Status of Small Disadvantaged Business Certifications. GAO-01-273. Washington, D.C.: January 19, 2001. Small Business: Trends in Federal Procurement in the 1990s. GAO-01-119. Washington, D.C.: January 18, 2001. Defense Trade: Observations on Issues Concerning Offsets. GAO-01-278T. Washington, D.C.: December 15, 2000. Defense Trade: Weaknesses Exist in DOD Foreign Subcontract Data. GAO/NSIAD-99-8. Washington, D.C.: November 13, 1998.
More small businesses are turning to subcontracting as a way to participate in the federal government’s $250 billion procurement program.
DOD, accounting for about two-thirds of federal procurements, has a critical role in providing opportunities to small businesses through subcontracting programs such as the Test Program for Negotiation of Comprehensive Small Business Subcontracting Plans (Test Program). In addition, Congress raised concerns about the potential for small businesses to lose opportunities to firms performing work outside of the United States. GAO was asked to review (1) DOD’s assessment of the Test Program’s effectiveness, (2) the performance of contractors participating in the Test Program, (3) the Defense Contract Management Agency’s (DCMA) oversight of contractors’ small business subcontracting efforts, and (4) the extent and reasons contractors are subcontracting with businesses performing outside the U.S. In order to foster small business participation in subcontracting, government contractors with larger dollar value contracts are required to have subcontracting plans that establish goals for awarding a percentage of subcontract dollars to small businesses. DOD created the Test Program to provide more small business opportunities and reduce the administrative burden for contractors in managing their subcontracting programs. Many of DOD’s largest contractors participate in the program. Although the Test Program was started more than 12 years ago, DOD has yet to establish metrics to evaluate the program’s results and effectiveness. As a result, there is no systematic way of determining whether the program is meeting its intended objectives and whether further changes need to be made. DOD contracted for an assessment of the Test Program in 2002, but the results of the assessment are considered preliminary and, therefore, have not been reported. DOD is required to report the results of the Test Program in 2005, when the program is set to expire. DOD contractors participating in the Test Program have experienced mixed success in meeting their various small business subcontracting goals. DOD and contractor officials noted that a changing acquisition environment has added to their challenge in meeting small business goals. Two of the major challenges they identified are (1) the increased breadth, scope, and complexity of DOD prime contracts, which require, among other things, teaming arrangements with other, typically large, contractors and (2) prime contractors’ strategic sourcing decisions to leverage their purchasing power by reducing the number of their suppliers, including small businesses. DCMA plays a key role in overseeing the performance of contractors in the Test Program and has made significant changes to its policy and guidance. The revised approach is designed to better monitor contractors’ efforts, provide more consistency in assessing contractor performance, and hold contractors accountable for achieving their subcontracting goals. DCMA is still in the process of revamping its oversight activities. GAO could not assess the full extent to which contractors used firms performing outside the U.S. because of data reliability concerns. Contractors in GAO’s review reported several reasons for awarding subcontracts to firms performing outside the U.S., such as fulfilling commitments included in offset agreements or executing teaming arrangements for major defense programs. Without accurate and complete information on subcontracts to firms performing outside the U.S., DOD cannot make informed decisions on industrial base issues.
As part of our audit of the fiscal years 2003 and 2002 CFS, we evaluated Treasury’s financial reporting procedures and related internal control. In our report, which is included in the fiscal year 2003 Financial Report of the United States Government, we reported material deficiencies relating to Treasury’s financial reporting procedures and internal control. These material deficiencies contributed to our disclaimer of opinion on the CFS and also constitute material weaknesses in internal control, which contributed to our adverse opinion on internal control. We performed our work in accordance with U.S. generally accepted government auditing standards. This report provides the details of the additional weaknesses we identified in our audit of the fiscal years 2003 and 2002 CFS and recommendations to correct those weaknesses. We requested comments on a draft of this report from the Director of OMB and the Secretary of the Treasury or their designees. OMB’s and Treasury’s comments are reprinted in appendixes III and IV, respectively, and discussed in the Agency Comments and Our Evaluation section of this report. Treasury also provided an attachment to its written comments that we did not reprint in appendix IV. This attachment was a detailed reconciliation spreadsheet that was an expanded version of the information we had already taken into account in our review of the fiscal year 2003 reconciliation statement. Statement of Federal Financial Accounting Standards (SFFAS) No. 4, Managerial Cost Accounting Standards and Concepts, states that a fundamental element of managerial cost accounting for the federal government is the use of appropriate costing methodologies to accumulate and assign costs to outputs. The standard further states that costs should be allocated on a reasonable and consistent basis. Without consistently applying an allocation methodology, the net cost amounts by federal agency, as shown on the Statement of Net Cost, may be misstated. The Statement of Net Cost is intended to present the net cost of the U.S. government’s operations. These costs are presented in the statement by individual federal agencies rather than by significant federal government program. The reported net cost amounts by federal agency include an allocated portion of the Office of Personnel Management (OPM) costs. This allocation is made to reflect the fair share of the cost of the functions performed by OPM that benefit other federal agencies, most notably, pension payments to federal retirees. As the basis for allocating OPM costs to each federal agency, Treasury’s written procedures call for the use of full-time equivalents (FTE). Those FTEs are published in the Analytical Perspectives, Budget of the United States Government, fiscal year 2005. During our fiscal year 2003 audit, we found that the FTEs used for allocating OPM costs to some of the federal agencies listed in the Statement of Net Cost did not always agree with the respective agencies’ FTEs in the Analytical Perspectives, Budget of the United States Government, fiscal year 2005. In addition, we found that Treasury management did not review the underlying support used to compile the Statement of Net Cost to ensure that OPM costs were allocated accurately. Treasury was not able to explain the differences we identified. We also found that Treasury’s written procedures for allocating OPM costs on the Statement of Net Cost were not updated to reflect the changes Treasury made to its allocation methodology during fiscal year 2003.
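To make the allocation mechanics concrete, the following is a minimal sketch of an FTE-proportional cost allocation of the kind described above. The function, agency names, dollar amount, and FTE counts are hypothetical illustrations, not Treasury's actual procedure or data.

```python
# Minimal sketch of FTE-proportional cost allocation (hypothetical data).
# Under Treasury's stated methodology, the FTE inputs would come from the
# Analytical Perspectives, Budget of the United States Government.

def allocate_opm_costs(total_opm_costs, ftes_by_agency):
    """Allocate total OPM costs to agencies in proportion to their FTEs."""
    total_ftes = sum(ftes_by_agency.values())
    return {agency: total_opm_costs * ftes / total_ftes
            for agency, ftes in ftes_by_agency.items()}

# Illustrative only: three hypothetical agencies sharing $10 billion of OPM costs.
shares = allocate_opm_costs(
    10e9, {"Agency A": 60_000, "Agency B": 30_000, "Agency C": 10_000})
for agency, cost in shares.items():
    print(f"{agency}: ${cost / 1e9:.2f} billion")  # $6.00, $3.00, $1.00 billion
```

In a scheme like this, the control points our audit found missing are straightforward: the FTE inputs should tie to the published source, and management should review the computed allocations.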
We also found that Treasury made errors in allocating OPM costs to the Department of Homeland Security (DHS). Most of the errors occurred because Treasury allocated a full year of OPM costs to DHS, even though DHS did not begin operations until March 2003. DHS was originally allocated 5.3 percent of OPM costs; after we notified Treasury of the errors we identified, DHS was correctly allocated 2.6 percent. Recommendations for Executive Action. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to ensure that, if FTEs are used as part of Treasury’s methodology for allocating OPM costs, the FTEs used for the agencies listed on the Statement of Net Cost agree with the FTEs listed in the Analytical Perspectives, Budget of the United States Government, as currently stated in Treasury’s methodology; document any changes to the stated methodology for allocating OPM costs and the rationale for these changes; and require reviews by Treasury management of the accuracy of the allocated OPM costs. As part of our fiscal year 2003 audit of the Statement of Changes in Cash Balance from Unified Budget and Other Activities (Statement of Changes in Cash Balance), we found (1) material differences between the net outlay records used by Treasury to prepare the Statement of Changes in Cash Balance and the total net outlays reported in selected federal agencies’ audited Statements of Budgetary Resources (SBR); (2) that the Statement of Changes in Cash Balance reported only the changes in the “operating” cash of the U.S. government rather than all cash, as it is reported on the U.S. government’s Balance Sheet; and (3) that the major program activities of the U.S. government relating to direct and guaranteed loans extended to the public were reported as a net amount on the Statement of Changes in Cash Balance rather than disclosed as gross amounts for receipts and disbursements of cash related to direct loans and loan guarantees. OMB Bulletin No. 01-09, Form and Content of Agency Financial Statements, states that outlays in federal agencies’ SBRs should agree with each agency’s net outlays reported in the budget of the U.S. government. In addition, SFFAS No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting, requires explanation of any material differences between the information required to be disclosed (including net outlays) and the amounts described as “actual” in the budget of the U.S. government. As part of our fiscal year 2003 audit of the Statement of Changes in Cash Balance, we found material differences between the net outlay records used by Treasury to prepare the Statement of Changes in Cash Balance and the total net outlays reported in selected federal agencies’ audited SBRs. These differences totaled about $140 billion and $186 billion for fiscal years 2003 and 2002, respectively. Two agencies—Treasury and the Department of Health and Human Services (HHS)—accounted for about 83 percent and 75 percent of the differences identified in fiscal years 2003 and 2002, respectively. We found that the major cause of the differences for the two agencies was the treatment of offsetting receipts. Some offsetting receipts for these two agencies had not been included in the agencies’ SBRs; including them would have reduced the agencies’ net outlays and made the amounts more consistent with the Treasury records used to prepare the Statement of Changes in Cash Balance.
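The arithmetic behind such differences is simple. The following sketch, using hypothetical figures, shows how omitting offsetting receipts from an agency's SBR produces exactly the kind of gap described above.

```python
# Hypothetical figures illustrating the effect of omitted offsetting receipts.
gross_outlays = 600e9       # an agency's gross outlays
offsetting_receipts = 90e9  # receipts that should reduce net outlays

treasury_net_outlays = gross_outlays - offsetting_receipts  # Treasury's records
sbr_net_outlays = gross_outlays                             # SBR omitting the receipts

difference = sbr_net_outlays - treasury_net_outlays
print(f"Unreconciled difference: ${difference / 1e9:.0f} billion")  # $90 billion
```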
We found that Treasury publishes offsetting receipts by agency or department monthly, including fiscal year-to-date information, in the Monthly Treasury Statement. Nevertheless, material differences between the two agencies’ and Treasury’s records remained at the end of the fiscal year. For example, we found that HHS reported net outlays for fiscal year 2003 as $596 billion on its audited SBR, while the records that Treasury used to prepare the fiscal year 2003 Statement of Changes in Cash Balance showed net outlays of $505 billion for HHS. Until the differences between the total net outlays reported in the federal agencies’ SBRs and the records used to prepare the Statement of Changes in Cash Balance are reconciled, the effect of these differences on the CFS will be unknown. OMB has stated that it plans to work with the agencies to address this issue. Recommendations for Executive Action. We recommend that the Director of OMB direct the Controller of OMB, in coordination with Treasury’s Fiscal Assistant Secretary, to work with the federal agencies so that the differences between net outlays the agencies report in their SBRs and the net outlay records Treasury uses to prepare the Statement of Changes in Cash Balance are reconciled. In addition, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to determine and address the effects that any of the differences between net outlays the agencies report in their SBRs and Treasury’s net outlay records may have on the CFS. The Statement of Changes in Cash Balance reported only the changes in the “operating” cash of the U.S. government ($35 billion) rather than the changes in all cash reported on the U.S. government’s Balance Sheet ($62.2 billion) as of September 30, 2003. We also found that the total operating cash amount reported in the Statement of Changes in Cash Balance did not link to the underlying agencies’ operating cash reported in their financial statements. For example, Treasury reported $51 billion of operating cash in Treasury’s own fiscal year 2003 audited financial statements. This amount, by itself, exceeded the $35 billion operating cash balance reported in the Statement of Changes in Cash Balance. SFFAS No. 1, Accounting for Selected Assets and Liabilities, defines nonentity cash as cash that a federal entity collects and holds on behalf of the U.S. government or other entities. In some circumstances, the entity deposits the cash in its accounts in a fiduciary capacity for Treasury or other entities. Several provisions of SFFAS No. 24, Selected Standards for the Consolidated Financial Report of the United States Government, require the Statement of Changes in Cash Balance to explain changes in the U.S. government’s cash balance. Recommendation for Executive Action. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop a process that will allow full reporting of the changes in cash balance of the U.S. government. Specifically, the process should provide for reporting on the change in cash reported on the consolidated Balance Sheet, which should be linked to cash balances reported in federal agencies’ audited financial statements. We found that the major program activities of the U.S. government relating to direct and guaranteed loans extended to the public were reported as a net amount on the Statement of Changes in Cash Balance rather than disclosed as gross amounts for receipts and disbursements of cash related to direct loans and loan guarantees.
In this regard, the illustrative financial statement for the Statement of Changes in Cash Balance provided in SFFAS No. 24, while not prescriptive, shows gross reporting of direct loan and loan guarantee activities. In addition, gross reporting is consistent with the reporting advocated in Financial Accounting Standards Board Statement No. 95, Statement of Cash Flows. Treasury does not have a process for obtaining receipt and disbursement amounts for direct and guaranteed loans. As a result, the Statement of Changes in Cash Balance does not show the magnitude of these major government loan programs. Net reporting of direct and guaranteed loan program activity does not disclose how much cash the government disbursed to promote the nation’s welfare by making these loans available to the general population or how much in related repayments the government received. For example, in fiscal year 2003, the Statement of Changes in Cash Balance reported a net $1.2 billion of direct loan activity, while the Department of Education alone disbursed approximately $18 billion in direct loans to eligible borrowers and received approximately $15 billion in loan repayments. Recommendation for Executive Action. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to report gross amounts for receipts and disbursements of cash related to direct loans and loan guarantees. We found that the CFS did not report criminal debt, as determined through the U.S. Courts, in accordance with GAAP. SFFAS No. 1, Accounting for Selected Assets and Liabilities, and SFFAS No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting, require that a receivable and related revenue be recognized once amounts due to the U.S. government are assessed. Further, these standards require that an allowance for uncollectible accounts be used to reduce the gross amount of the receivable and revenue to its net realizable value. Also, in accordance with OMB Circular No. A-129, Policies for Federal Credit Programs and Non-Tax Receivables, agencies are to (1) service and collect debts in a manner that best protects the value of the U.S. government’s assets and (2) provide accounting and management information for effective stewardship, including resources entrusted to the U.S. government (e.g., for nonfederal and federal restitution). Criminal debt consists primarily of fines and restitution related to a wide range of criminal activities, including domestic and international terrorism, drug trafficking, firearms activities, and white-collar fraud. The U.S. Courts assess these debts, and the Department of Justice’s (Justice) U.S. Attorneys’ Offices throughout the country are charged with enforcing collection. Although Justice and the U.S. Courts develop unaudited annual statistical data for informational purposes, neither entity is accounting for any of these criminal debts as receivables, disclosing the debts in financial statements, or having the information subject to audit. The U.S. Courts, which serve as the assessor, depositor, and disburser of most of the funds collected, are not required to prepare financial statements or disclose criminal debt information. In addition, Justice, which enforces criminal debt collection, prepares audited financial statements but does not record or disclose receivables for criminal debt. Therefore, criminal debt outstanding is not being reported to Treasury for inclusion in the CFS.
Financial statement reporting of criminal debt would increase oversight of the debt collection process because amounts would be subject to audit. Such audits would include assessments of internal control and compliance with applicable laws and regulations related to the criminal debt collection process. In our recently issued report on criminal debt, we reemphasized the need for Justice, the Administrative Office of the U.S. Courts, OMB, and Treasury to form a joint task force to develop a strategic plan that addresses managing, accounting for, and reporting criminal debt. We stated that the strategy should include (1) determining an approach for assessing the collectibility of outstanding criminal debt amounts so that a meaningful allowance for uncollectible criminal debts can be reported and used for measuring debt collection performance and (2) having OMB work with Justice and certain other executive branch agencies to ensure that these entities report and/or disclose relevant criminal debt information in their financial statements and subject such information to audit. As of the completion of our fieldwork, the task force had not yet been established and, therefore, a strategic plan had not been developed. Recommendations for Executive Action. In the interim, until the joint task force is established and a strategic plan is developed, we recommend that the Director of OMB direct the Controller of OMB, in coordination with the Fiscal Assistant Secretary of the Treasury, to work with Justice and certain other executive branch agencies to ensure that these agencies report or disclose relevant criminal debt information in conformity with GAAP in their financial statements and have such information subjected to audit. In addition, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to include relevant criminal debt information in the CFS or document the specific rationale for excluding such information. As we have reported in previous years’ audits, the U.S. government has not been able to determine whether loss contingencies were complete and properly reported in the CFS. Part of the problem is that Treasury has not requested all relevant information for loss contingencies required under the accounting standards from all applicable federal agencies. For fiscal year 2003, Treasury’s primary means of compiling information for the CFS was through its system called Federal Agencies’ Centralized Trial Balance System (FACTS). Under FACTS, federal agencies were instructed to enter information for legal contingencies that are assessed as both “reasonably possible” and “estimable.” Treasury does not specifically request other information for loss contingencies that is required by accounting standards, such as loss contingencies assessed (1) to be probable, (2) as reasonably possible with estimated loss ranges, or (3) as uncertain. For example, one federal agency provided Treasury with information regarding a legal claim amount of $1.7 billion for which the agency’s lawyers were unable to provide an assessment of the likelihood of an unfavorable outcome. Because FACTS does not allow for narrative descriptions of amounts provided to Treasury and only classifies loss contingencies as reasonably possible and estimable, the agency was unable to properly report to Treasury that the assessment of the likelihood of an unfavorable outcome was uncertain. 
Consequently, Treasury incorrectly considered this amount as reasonably possible and estimable and therefore overstated its estimated possible losses for legal contingencies in the CFS by this federal agency’s claim amount of $1.7 billion. We notified Treasury of this error, and a correction was made in the final version of the CFS. SFFAS No. 5, Accounting for Liabilities of the Federal Government, as amended by SFFAS No. 12, Recognition of Contingent Liabilities Arising from Litigation: An Amendment of SFFAS No. 5, contains accounting and reporting standards for loss contingencies, including those arising from litigation, claims, and assessments. A contingency is defined as an existing condition, situation, or set of circumstances involving uncertainty as to possible gain or loss to an entity. The uncertainty will ultimately be resolved when one or more future events occur or fail to occur. When a loss contingency exists, the likelihood that the future event or events will confirm the loss or impairment of an asset or the incurrence of a liability can range from probable to remote. SFFAS Nos. 5 and 12 use the terms probable, reasonably possible, and remote to identify three areas within the range of potential loss, as follows: Probable. For contingencies, the future event or events are more likely than not to occur. In addition, for contingencies related to pending or threatened litigation and unasserted claims, the future confirming event or events are those likely to occur. Reasonably possible. The chance of the future confirming event or events occurring is more than remote but less than probable. Remote. The chance of the future event or events occurring is slight. Under SFFAS Nos. 5 and 12, a liability and the related cost for an estimated loss from a loss contingency should be recognized (accrued by a charge to income) when (1) a past event or exchange transaction has occurred, (2) a future outflow or other sacrifice of resources is probable, and (3) the future outflow or sacrifice of resources is measurable. Disclosure of the nature of an accrued liability for loss contingencies, including the amount accrued, may be necessary for the financial statements not to be misleading. For example, if the amount recognized is large or unusual, disclosure should be considered. However, if no accrual is made for a loss because one or more of the conditions in SFFAS No. 12 are not met, disclosure of the contingency should be made when there is at least a reasonable possibility that a loss has been incurred. The disclosure should include the nature of the contingency and an estimate of the possible liability or range of possible liability, if estimable, or a statement that such an estimate cannot be made.
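The classification and treatment rules just summarized lend themselves to a simple decision sketch. The following is an illustrative condensation of the SFFAS Nos. 5 and 12 logic as described above, not Treasury's actual system; in particular, it assumes a past event has already occurred and collapses the measurability conditions into a single flag.

```python
# Illustrative condensation of the SFFAS Nos. 5 and 12 treatment rules described
# above. Assumes a past event or exchange transaction has already occurred.

def contingency_treatment(likelihood, measurable):
    """likelihood: 'probable', 'reasonably possible', 'remote', or 'uncertain'."""
    if likelihood == "probable":
        if measurable:
            return "accrue a liability and related cost; disclose if large or unusual"
        return "disclose the nature of the contingency; state that no estimate can be made"
    if likelihood == "reasonably possible":
        if measurable:
            return "disclose the nature and the estimated loss or range of loss"
        return "disclose the nature; state that no estimate can be made"
    if likelihood == "remote":
        return "no accrual or disclosure required"
    # The $1.7 billion claim discussed above fell here: the agency's lawyers could
    # not assess the likelihood, but FACTS had no way to record an uncertain outcome.
    return "report as uncertain rather than defaulting to 'reasonably possible and estimable'"

print(contingency_treatment("uncertain", False))
```

A data call built around categories like these would have allowed the agency in the example above to flag its $1.7 billion claim as uncertain rather than having it default into the reasonably possible and estimable bucket.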
Recommendation for Executive Action. Because the limited information requested through Treasury’s FACTS does not capture all the disclosure requirements under the accounting standards, the contingency note disclosure for the CFS may have been inaccurate and unreliable. For fiscal year 2004, Treasury is completing the design of and will be implementing a new system for compiling the CFS. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to include in the new system a request for federal agencies to provide the following contingency loss information to assist Treasury in disclosing contingencies in the CFS in accordance with GAAP: (1) contingency losses assessed as probable and for which possible losses and estimated loss ranges are measurable, (2) contingency losses assessed as probable and for which possible losses cannot be estimated, (3) contingency losses assessed as reasonably possible and for which possible losses and estimated loss ranges are measurable, (4) contingency losses assessed as reasonably possible and for which possible losses are not measurable, and (5) the nature and extent of significant contingency losses for which the agency is unable to provide an assessment of the likelihood of an unfavorable outcome. As we have reported in the past, Treasury’s current process for compiling the CFS did not directly link information from federal agencies’ audited financial statements to amounts reported in the CFS, and therefore Treasury could not fully ensure that the information in the CFS was consistent with the underlying information in federal agencies’ audited financial statements and other financial data. For fiscal year 2004 reporting, Treasury is planning a new process to compile the CFS. We reviewed Treasury’s plans for the new process and found that there is a plan to link most of the agencies’ audited financial statements to the consolidated financial statements through the use of a new closing package. Treasury will require each significant agency to prepare the closing package and to certify its accuracy. However, we found that the planned closing package does not require federal agencies to directly link their audited financial statement notes to the closing package notes. Treasury plans to rely on note templates it designed that call for predefined information from the federal agencies. We found that these templates are too restrictive and that important information reported at the agency level may not be included in the CFS because it is not specifically called for in the closing package. The use of such predefined templates increases the risk that Treasury will continue to produce consolidated financial statements that are not in conformity with GAAP. We also found that the planned closing package does not require the necessary information to compile all five of the required consolidated financial statements. For example, as noted earlier, we found that there were significant differences between the total net outlays reported in selected agencies’ audited financial statements and the records Treasury uses to prepare its Statement of Changes in Cash Balance from Unified Budget and Other Activities. Because the planned closing package does not call for agencies to provide information to compile this statement that is consistent with underlying information in the agencies’ audited financial statements, the risk of differences between the CFS and the underlying agency financial statements is increased. The lack of direct linkage also affects the efficiency and effectiveness of the audit of the CFS. Statement of Federal Financial Accounting Concepts No.
4, Intended Audience and Qualitative Characteristics for the Consolidated Financial Report of the United States Government, states that the consolidated financial report should be a general purpose report that is aggregated from agency reports and that it should tell users where to find information in other formats, both aggregated and disaggregated, such as in individual agency reports, on agency Web sites, and in the President’s Budget. Recommendations for Executive Action. As Treasury is still designing its new compilation process, which it expects to implement beginning with the fiscal year 2004 CFS, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to modify Treasury’s plans for the new closing package to require federal agencies to directly link their audited financial statement notes to the CFS notes and provide the necessary information to demonstrate that all five of the principal consolidated financial statements are consistent with the underlying information in federal agencies’ audited financial statements and other financial data. According to SFFAS No. 21, Reporting Corrections of Errors and Changes in Accounting Principles, Amending SFFAS 7, Accounting for Revenue and Other Financing Sources, an entity should restate the prior year to report correction of errors that are material and should disclose the nature of the prior period adjustments. If errors are not material, they should be included in the current year results and not cited as prior period adjustments on the Statement of Operations and Changes in Net Position, and no disclosure is required. Also, according to SFFAS No. 21, an entity should adjust the beginning balance of cumulative results of operations for changes in accounting principles and disclose the nature of those changes. Treasury did not fully comply with the requirements of SFFAS No. 21 in connection with certain identified errors relating to prior periods. Treasury did not restate the prior year to correct net errors of $2.6 billion because it determined the errors to be immaterial, which was the correct accounting treatment. However, Treasury reported the $2.6 billion amount as a prior period adjustment on the Statement of Operations and Changes in Net Position and adjusted the beginning balance of cumulative results of operations as would be required if these amounts were material. Therefore, Treasury was inconsistent when implementing the requirements of SFFAS No. 21. Treasury also did not initially comply with the requirements of SFFAS No. 21 in connection with reporting a change in accounting principle. Treasury reported in several drafts of the CFS a change in accounting principle of $383 billion as an error relating to prior periods because Treasury did not specifically require federal agencies to separately identify changes in accounting principles. Instead, Treasury allowed federal agencies to report changes in accounting principles together with prior period adjustments, which made them difficult to differentiate. Changes in accounting principles are not errors and have different reporting requirements. We brought this to Treasury’s attention, and it corrected the mistake in the final version of the CFS. Recommendations for Executive Action. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to report prior period adjustments in accordance with SFFAS No.
21 by (1) restating the prior year for corrections of material errors, adjusting the beginning balance of cumulative results of operations, and disclosing the nature of the errors in the notes to the CFS and (2) including corrections of immaterial errors in the current year results, not citing them as prior period adjustments on the Statement of Operations and Changes in Net Position, and not disclosing them in the notes to the CFS. We also recommend that Treasury include in its new closing package a process that will allow federal agencies to clearly distinguish between prior period adjustments and changes in accounting principles in accordance with SFFAS No. 21. As we reported as part of our fiscal year 2002 audit, and found again during our fiscal year 2003 audit, Treasury lacks an adequate process to ensure that the financial statements, related notes, stewardship, and supplemental information in the CFS are presented in conformity with GAAP. SFFAS No. 24 states that the Federal Accounting Standards Advisory Board (FASAB) standards apply to all federal agencies, including the U.S. government as a whole, unless provision is made for different accounting treatment in a current or subsequent standard. Specifically, we found that Treasury did not (1) timely identify applicable GAAP requirements; (2) make timely modifications to agency data calls to obtain information needed; (3) assess, qualitatively and quantitatively, the materiality of omitted disclosures; or (4) document decisions reached with regard to omitted disclosures and the rationale for such decisions. During our fiscal year 2002 audit, we identified 16 disclosure areas consisting of 86 specific disclosures that may not have been in conformity with applicable standards. During our fiscal year 2003 audit, we found 4 disclosure areas involving an additional 11 specific disclosures that may not have been in conformity with applicable standards. As a result of this and certain other weaknesses we identified, we were unable to determine whether the missing information was material to the CFS. These additional required disclosures are described in appendix I. We did note that Treasury is requesting certain information in its planned closing package for fiscal year 2004 that may address some of the needed disclosures. Recommendations for Executive Action. We reaffirm our recommendation that the Secretary of the Treasury direct the Fiscal Assistant Secretary to establish a formal process that will allow the financial statements, related notes, stewardship information, and supplemental information in the CFS to be presented in conformity with GAAP, in all material respects. The process should timely identify GAAP requirements; make timely modifications to Treasury’s closing package requirements to obtain information needed; assess, qualitatively and quantitatively, the impact of any omitted disclosures; and document decisions reached and the rationale for such decisions. With respect to the 11 required disclosures identified in appendix I for which information was either not included in the CFS or was presented in a way that did not meet GAAP standards, we recommend that each of these disclosures be included in the CFS or that the specific rationale for excluding any of them be documented. OMB and Treasury provided written comments on a draft of this report; these comments are reprinted in appendixes III and IV, respectively. OMB stated that it generally concurred with the findings in the report and would work with Treasury and other executive departments and agencies to address these findings.
Treasury stated that our report identified issues regarding certain federal financial reporting procedures and internal controls and provided valuable advice and recommendations for improvements. It also stated that many of the concerns we raised are in critical areas where federal financial reporting can be improved. While Treasury stated that it generally agreed with our concerns on most of the major issues, in some cases it disagreed with either our finding or our recommended approach to addressing the problem. We continue to believe that our findings and recommendations are sound. Treasury’s disagreements involve two areas of weaknesses we identified and reported on as part of our fiscal year 2003 audit, which are discussed in this report: (1) the Statement of Changes in Cash Balance from Unified Budget and Other Activities and (2) Treasury’s allocation methodology for certain costs in the Statement of Net Cost. In addition, Treasury disagreed with certain matters involving three areas we identified and reported on as part of our fiscal year 2002 audit: (1) unreconciled transactions affecting the change in net position, (2) the Reconciliation of Net Operating Cost and Unified Budget Surplus/Deficit, and (3) management representation letters. We will address each of Treasury’s points relating to these five areas, beginning with the two related to this report. Treasury expressed disagreement with certain issues we identified with the Statement of Changes in Cash Balance. Treasury disagreed with our position that it should determine and address the effects on the accuracy of the CFS of differences between net outlays the federal agencies report in their individual audited SBRs and Treasury’s net outlay records used to prepare the Statement of Changes in Cash Balance. As stated in this report, OMB and GAAP require federal agencies to report net outlays in their SBRs. The Statement of Changes in Cash Balance also reports actual unified budget outlays. Both are intended to represent the same amount and be consistent with the information in the budget of the U.S. government. We found material differences between these amounts for selected federal agencies for fiscal year 2003. Until these types of significant differences are reconciled, the effect on the CFS will be unknown. OMB has stated that it has begun working with the federal agencies to address this issue, and we continue to believe that Treasury, in coordination with OMB, should work with the federal agencies on this matter as well. Treasury also stated that it believes it is not required to report both budget receipts and budget outlays in the Statement of Changes in Cash Balance but only the budget deficit or surplus, as required by SFFAS No. 24. We understand that SFFAS No. 24 calls for a financial statement that explains how the annual budget surplus or deficit relates to the change in the government’s cash, and does not prescribe the individual reporting of budget receipts and outlays. However, the budget deficit or surplus is the simple calculation of netting the budget receipt and outlay amounts. Also, Treasury does not maintain “budget deficit or surplus” records; rather, Treasury maintains separate budget receipt and outlay records and relies on these records to calculate the budget deficit or surplus.
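The point about Treasury's records can be shown with a trivial calculation, sketched below with hypothetical amounts: because the deficit or surplus is derived by netting the receipt and outlay records, any misstatement in net outlays flows directly into that figure.

```python
# Hypothetical amounts illustrating the netting calculation described above.
budget_receipts = 1.78e12  # total receipts per Treasury's receipt records
budget_outlays = 2.16e12   # total outlays per Treasury's outlay records

balance = budget_receipts - budget_outlays
label = "surplus" if balance >= 0 else "deficit"
print(f"Unified budget {label}: ${abs(balance) / 1e12:.2f} trillion")  # $0.38 trillion deficit
```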
As such, regardless of whether Treasury continues to separately report budget receipts and budget outlays or elects to report only the budget deficit or surplus, Treasury and OMB will still need to determine the effects of the types of net outlay differences described above on the CFS. While Treasury agreed that the illustrative statement for the Statement of Changes in Cash Balance provided in SFFAS No. 24 shows total cash and the gross amounts for receipts and disbursements of cash related to direct loans and loan guarantees, it stated that presentation of this amount of detail is not required. As such, Treasury stated that, at this time, it will not report the gross amounts for receipts and disbursements of cash related to direct loans and loan guarantees as we recommend. As stated in this report, we recognize that the illustrative statement is not prescriptive. However, we also note that gross reporting is consistent with the reporting encouraged in Financial Accounting Standards Board Statement No. 95, Statement of Cash Flows. We also stated in this report that net reporting of direct and guaranteed loan program activity does not disclose how much cash the government disbursed to promote the nation’s welfare by making these loans available to the general population or how much in related repayments the government received. Therefore, we continue to believe that gross reporting of this information is more meaningful and useful to a reader of the CFS. In its comments on a draft of this report, Treasury implied that we disagreed with its decision to amend its methodology for allocating OPM costs in the Statement of Net Cost to reflect a new law mandating fully funded pension cost recognition at the U.S. Postal Service (USPS). We did not take issue with Treasury modifying its methodology for the change; our concern was that Treasury had not updated its written procedures to reflect the modification and had made errors in applying the methodology. Specifically, as stated in our report, we found that Treasury did not update the methodology in its written procedures for allocating OPM costs to reflect the change caused by the USPS pension cost recognition and DHS’s partial-year existence. Our review found that Treasury did modify its methodology for allocating OPM costs based on the changes caused by USPS; however, the modification was not documented in Treasury’s standard operating procedures, and the spreadsheet used to apply the methodology had several significant errors—none of which were identified by Treasury. One significant error was that the FTEs used by Treasury for some agencies did not agree with the respective agencies’ FTEs in the Analytical Perspectives, Budget of the United States Government, as prescribed by Treasury’s methodology. As such, we continue to recommend that Treasury (1) ensure that, if FTEs are used as part of Treasury’s methodology for allocating OPM costs, the FTEs used for the agencies listed on the Statement of Net Cost agree with the FTEs listed in the Analytical Perspectives, Budget of the United States Government, as currently stated in Treasury’s methodology; (2) document any changes to the stated methodology for allocating OPM costs and the rationale for these changes; and (3) require reviews by Treasury management of the accuracy of the allocated OPM costs.
Treasury stated that it agreed that reconciling net position is a problem and that eliminations of intragovernmental activity and balances are not performed through balanced accounting entries, but it expressed concern that we are overemphasizing the elimination process. Treasury also stated that it agrees that increasing the granularity of the eliminations will help Treasury focus on where the problem exists, as we reported as part of our fiscal year 2002 audit. We are not unduly emphasizing the elimination process. Our focus is on the need for Treasury to identify and quantify all components of the activity in the net position line item and to reconcile the change in the U.S. government’s net position from year to year. During our fiscal year 2002 audit, we recommended that Treasury develop reconciliation procedures that will aid in understanding and controlling the net position balance, including the need to understand the components, such as intragovernmental transactions, that are presently causing the net unreconciled transactions. These actions would allow the use of balanced accounting entries to account for the change in net position rather than simple subtraction of liabilities from assets and should narrow the amount of unexplained differences that comprise the net unreconciled transactions. Treasury added that it has a new process that will involve (1) use of reciprocal categories in performing eliminations and (2) a net position tracking methodology that will identify both the nature and source of the unreconciled transactions “plug” by financial area and by agency. We will evaluate this new process as part of the fiscal year 2004 audit. Treasury stated that it does not agree with the recommendation in our report on the fiscal year 2002 audit that Treasury report “net unreconciled transactions” included in the net operating results line item as a separate reconciling activity in the Reconciliation Statement because Treasury does not know whether that amount belongs in the statement. The Reconciliation Statement begins with the net operating cost amount reported in the Statement of Operations and Changes in Net Position. The fiscal year 2003 amount includes a net $24.5 billion labeled as “unreconciled transactions,” which was needed to balance the consolidated financial statements. The Reconciliation Statement ends with the budget deficit amount and is intended to show key reconciling items between the two amounts. For fiscal year 2003, Treasury included this $24.5 billion net unreconciled transactions balance as part of the net operating cost, which indicated that this amount is attributable to fiscal year 2003 activity. We maintain that the $24.5 billion should have been included as a reconciling item in the Reconciliation Statement because the fiscal year 2003 budget deficit, the amount being reconciled to, did not include this $24.5 billion amount. While Treasury agreed that it could always improve its Reconciliation Statement, it took exception to our finding that the amounts identified as changes in the balance sheet items are incorrect. We did not report such a finding. Instead, as part of the fiscal year 2002 audit, we reported that Treasury’s process for preparing the Reconciliation Statement did not ensure completeness of reporting or ascertain the consistency of all the amounts reported in the Reconciliation Statement with the related balance sheet line items, related notes, or federal agencies’ financial statements.
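The consistency check at issue is essentially an articulation test between statements: the flow components reported on the Reconciliation Statement should fully explain the year-over-year change in the related balance sheet line item. A minimal sketch of such a test follows, using the figures from the property, plant, and equipment example described next; only the function-free script and variable names are invented.

```python
# Articulation test: do the reported flow components explain the balance sheet
# change in a line item? Figures are from the PP&E example discussed in the text.
additions = 40.9e9             # total capitalized fixed assets reported as a component
reductions = 20.5e9            # annual depreciation expense reported as a component
balance_sheet_change = 18.0e9  # net change actually reflected on the balance sheet

explained = additions - reductions  # $20.4 billion explained by reported components
unexplained = explained - balance_sheet_change
print(f"Unexplained net change: ${unexplained / 1e9:.1f} billion")  # $2.4 billion
```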
We stated that we performed an analysis to determine whether all applicable components reported in the other statements (and related note disclosures) included in the CFS were properly reflected in the Reconciliation Statement. For the fiscal year 2002 audit, we found about $21 billion of net changes in various line item account balances on the balance sheet between fiscal years 2001 and 2002 that were not explained on either the Reconciliation Statement or the Statement of Changes in Cash Balance. For example, the Reconciliation Statement reported annual depreciation expense ($20.5 billion) and total capitalized fixed assets ($40.9 billion) as the components of the net change in property, plant, and equipment from fiscal year 2001. Although these activities accounted for a net increase of $20.4 billion, the balance sheet reflected a smaller net increase, $18 billion; Treasury was unable to explain the remaining $2.4 billion of the net change. Treasury stated that our preference for more detailed flow information in the statements is not something that it plans to accommodate. We did not state this as a preference. Instead, as part of our fiscal year 2002 audit, we reported that Treasury did not establish a reporting materiality threshold for purposes of collecting and reporting information in the Reconciliation Statement. For example, some items were reported simply as a net “increase/decrease” without considering how material, both quantitatively and qualitatively, the gross changes were. Treasury was unable to demonstrate whether material, informative amounts were netted, and pertinent information may therefore not be disclosed. Treasury disagreed with several of the statements related to management representation letters that we made in our report on the fiscal year 2002 audit. Based on Treasury’s comments, it appears that Treasury misunderstood our primary point, which is that without performing an adequate review and analysis of federal agencies’ management representation letters, Treasury and OMB management may not be fully informed of matters that may affect their representations made with respect to the audit of the CFS. For each agency financial statement audit, generally accepted government auditing standards require that agency auditors obtain written representations from agency management as part of the audit. In turn, Treasury and OMB are to receive all the required management representation letters and the related summaries of unadjusted misstatements from the federal agencies. This is important because generally accepted government auditing standards require Treasury and OMB to provide us, as their auditor, a management representation letter for the CFS. To prepare their representations on the CFS, Treasury and OMB rely on the information within agencies’ management representation letters. However, we found that Treasury and OMB did not have policies or procedures to adequately review and analyze federal agencies’ management representation letters. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations. You should submit your statement to the Senate Committee on Governmental Affairs and the House Committee on Government Reform within 60 days of the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report.
We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs; the Subcommittee on Financial Management, the Budget, and International Security, Senate Committee on Governmental Affairs; the House Committee on Government Reform; and the Subcommittee on Government Efficiency and Financial Management, House Committee on Government Reform. In addition, we are sending copies to the Fiscal Assistant Secretary of the Treasury and the Controller of OMB. Copies will be made available to others upon request. This report is also available at no charge on GAO’s Web site at www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact Jeffrey C. Steinhoff, Managing Director, Financial Management and Assurance, on (202) 512-2600 or Gary T. Engel, Director, Financial Management and Assurance, on (202) 512-3406.
U.S. generally accepted accounting principles (GAAP) require the 11 disclosures described below to be included in the consolidated financial statements (CFS) or, if they are excluded, that the specific rationale for their exclusion be documented. However, the Department of the Treasury (Treasury) neither included nor documented the exclusion of these disclosures. The note disclosure for federal employee and veteran benefits payable departed from the following disclosure requirements of Statements of Federal Financial Accounting Standards (SFFAS) No. 5, Accounting for Liabilities of the Federal Government. SFFAS No. 5, paragraph 65, states that actuarial assumptions should be on the basis of the actual experience of the covered group, to the extent that credible experience data are available, but should emphasize expected long-term future trends rather than give undue weight to recent experience. However, the fiscal year 2003 military rates of inflation and projected salary increases included in the CFS were the actual fiscal year 2003 rates disclosed in the Department of Defense’s audited financial statements rather than the long-term rates. For other retirement benefits, SFFAS No. 5, paragraph 83, states that the entity should disclose the assumptions used. However, assumptions were not shown for the liability for veterans’ compensation and burial benefits. According to SFFAS No. 5, paragraph 72, the entity should report a pension expense for the net of the following components: normal costs; interest on the pension liability during the period; prior (and past) service cost from plan amendments (or the initiation of a new plan) during the period, if any; and actuarial gains and losses during the period, if any. The individual components should be disclosed. However, the CFS did not disclose prior service costs from plan amendments as a separate component. According to SFFAS No. 5, paragraph 88, the entity should report an other retirement benefits expense for the net of the following components: normal cost; interest on the other retirement benefits liability during the period; prior (and past) service costs from plan amendments (or the initiation of a new plan) during the period, if any; any gains or losses due to a change in the medical inflation rate assumption; and other actuarial gains or losses during the period, if any. The individual components should be disclosed.
However, the CFS did not disclose any gains or losses due to a change in the medical inflation rate assumption for health benefits as a separate component. The CFS note disclosure for environmental and disposal liabilities departed from the requirements of paragraphs 108, 109, and 111 of SFFAS No. 6, Accounting for Property, Plant, and Equipment, in the following ways: The CFS does not disclose the method for assigning estimated total cleanup costs to current operating periods (i.e., physical capacity versus passage of time). For cleanup costs associated with general property, plant, and equipment (PP&E), the CFS does not disclose the unrecognized portion of estimated total cleanup costs. The CFS does not describe the nature of estimates and the disclosure of information regarding possible changes to the estimates resulting from inflation, deflation, technology, or applicable laws and regulations. In addition, Treasury should consider whether the reader would be interested in understanding why the environmental and disposal liabilities amount significantly changed during the year and include the explanation for the change in the note disclosure. The stewardship information for research and development departed from the disclosure requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 99, in the following ways: Information on the program outcomes (i.e., program outcome data or output data) for the investments in research and development is not properly reported. Outcome data are expected to consist typically of a narrative discussion of the major results achieved by the program along the lines of basic research, applied research, and development—as defined in the standard. If outcome data are not available (for example, the agency has not agreed on outcome measures for the program, the agency is unable to collect reliable outcome data, or the outcomes will not occur for several years), the outputs that best provide indications of the intended program outcomes shall be used to justify continued treatment of expenses as investments until outcome data are available. The CFS does not include a narrative description of the major results achieved through the investments in basic research, applied research, and development. The required supplemental information for deferred maintenance departed from the disclosure requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment, paragraph 83, by not disclosing the identification of each major class of asset (i.e., buildings and structures, furniture and fixtures, equipment, vehicles, and land) for which maintenance has been deferred. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in connection with Treasury's current compilation process and the development of Treasury's new compilation system and process, to segregate the duties of individuals who have the capability to enter, change, and delete data within the Federal Agencies' Centralized Trial Balance System and the Hyperion database and post adjustments to the consolidated financial statements (CFS). Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in connection with Treasury's current compilation process and the development of Treasury's new compilation system and process, to develop and fully document policies and procedures for the CFS preparation process so that they are proper, complete, and consistently applied by staff members. Open.
The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in connection with Treasury's current compilation process and the development of Treasury's new compilation system and process, to require and document reviews by management of all procedures that result in data changes to the CFS. Closed. Management reviews were implemented in fiscal year 2003 under the current compilation environment. GAO will assess these management reviews in the new compilation environment. As Treasury is designing its new financial statement compilation process to begin with the fiscal year 2004 CFS, the Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of the Office of Management and Budget (OMB), to develop reconciliation procedures that will aid in understanding and controlling the net position balance as well as eliminate the plugs previously associated with compiling the CFS. Open. As Treasury is designing its new financial statement compilation process to begin with the fiscal year 2004 CFS, the Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to use balanced accounting entries to account for the change in net position rather than simple subtraction of liabilities from assets. Open. As OMB continues to make strides to address issues related to intragovernmental transactions, the Director of OMB should direct the Controller of OMB to develop policies and procedures that document how OMB will enforce the business rules provided in OMB Memorandum M-03-01, Business Rules for Intragovernmental Transactions. Open. As OMB continues to make strides to address issues related to intragovernmental transactions, the Director of OMB should direct the Controller of OMB to require that significant differences noted between business partners be resolved and the resolution be documented. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to implement the plan to require federal agencies to report in Treasury's new closing package, beginning with fiscal year 2004, intragovernmental activity and balances by trading partner and to indicate amounts that have not been reconciled with trading partners and amounts, if any, that are in dispute. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to design procedures that will account for the difference in intragovernmental assets and liabilities throughout the compilation process by means of formal consolidating and elimination accounting entries. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop solutions for intragovernmental activity and balance issues relating to federal agencies' accounting, reconciling, and reporting in areas other than those OMB now requires be reconciled, primarily areas relating to appropriations. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to reconcile the change in intragovernmental assets and liabilities for the fiscal year, including the amount and nature of all changes in intragovernmental assets or liabilities not attributable to cost and revenue activity recognized during the fiscal year.
Examples of these differences would include capitalized purchases, such as inventory or equipment, and deferred revenue. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile net operating cost and unified budget surplus (or deficit). Treasury should report "net unreconciled differences" included in the net operating results line item as a separate reconciling activity in the reconciliation statement. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile net operating cost and unified budget surplus (or deficit). Treasury should develop policies and procedures to ensure completeness of reporting and document how all the applicable components reported in the other consolidated financial statements (and related note disclosures included in the CFS) were properly reflected in the reconciliation statement. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile net operating cost and unified budget surplus (or deficit). Treasury should establish reporting materiality thresholds for determining which agency financial statement activities to collect and report at the governmentwide level to assist in ensuring that the reconciliation statement is useful and conveys meaningful information. Open. If Treasury chooses to continue using information from both federal agencies' financial statements and the Central Accounting and Reporting System (STAR), Treasury should demonstrate how the amounts from STAR reconcile to federal agencies' financial statements. Open. If Treasury chooses to continue using information from both federal agencies' financial statements and from STAR, Treasury should identify and document the cause of any significant differences noted. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies' audited financial statements. Treasury should document the consistency of the significant line items on this statement to agencies' audited financial statements. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies' audited financial statements. Treasury should request, through its closing package, that federal agencies provide the net outlays reported in their Combined Statement of Budgetary Resources and explanations for any significant differences between net outlay amounts reported in the Combined Statement of Budgetary Resources and the budget of the U.S. government. Open.
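To illustrate the reconciliation statement recommendations above, the sketch below walks from net operating cost to the unified budget deficit and shows any residual as an explicit "net unreconciled differences" line. All line items and amounts are invented for illustration; they are not drawn from the CFS:

```python
# Hypothetical reconciliation (amounts in billions of dollars).
net_operating_cost = 665.0
reconciling_items = {
    "Depreciation and other noncash costs": -40.0,
    "Increase in liabilities not requiring current outlays": -200.0,
    "Capitalized purchases and other outlay-only items": 35.0,
}
unified_budget_deficit = 375.0  # from budget execution records

explained = net_operating_cost + sum(reconciling_items.values())
net_unreconciled = unified_budget_deficit - explained

print("Net operating cost:", net_operating_cost)
for item, amount in reconciling_items.items():
    print("  " + item + ":", amount)
print("Net unreconciled differences (separate line):", net_unreconciled)
print("Unified budget deficit:", unified_budget_deficit)
```

Presenting the residual as its own line, rather than folding it into net operating results, is the transparency the recommendation calls for.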
The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies' audited financial statements. Treasury should investigate the differences between net outlays reported in federal agencies' Combined Statement of Budgetary Resources and Treasury's records in STAR to ensure that the proper amounts are reported in the Statement of Changes in Cash Balance from Unified Budget and Other Activities. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies' audited financial statements. Treasury should explain and document the differences between the operating revenue amount reported on the Statement of Operations and Changes in Net Position and unified budget receipts reported on the Statement of Changes in Cash Balance from Unified Budget and Other Activities. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies' audited financial statements. Treasury should provide support for how the line items in the "other activities" section of this statement relate to either the underlying Balance Sheet or related notes accompanying the CFS. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to perform an assessment to define the reporting entity, including its specific components, in conformity with the criteria issued by the Federal Accounting Standards Advisory Board. Key decisions made in this assessment should be documented, including the reason for including or excluding components and the basis for concluding on any issue. Particular emphasis should be placed on demonstrating that any financial information that should be included, but is not included, is immaterial. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to provide in the financial statements all the financial information relevant to the defined reporting entity, in all material respects. Such information would include, for example, the reporting entity's assets, liabilities, and revenues. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to disclose in the financial statements all information that is necessary to inform users adequately about the reporting entity. Such disclosures should clearly describe the reporting entity and explain the reason for excluding any components that are not included in the defined reporting entity. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to establish a formal process that will allow the financial statements, related notes, and stewardship and supplemental information in the CFS to be presented in conformity with U.S. generally accepted accounting principles (GAAP). 
The process should timely identify GAAP requirements. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to establish a formal process that will allow the financial statements, related notes, and stewardship and supplemental information in the CFS to be presented in conformity with GAAP. The process should make timely modifications to Treasury's closing package requirements to obtain the information needed. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to establish a formal process that will allow the financial statements, related notes, and stewardship and supplemental information in the CFS to be presented in conformity with GAAP. The process should assess, qualitatively and quantitatively, the impact of the omitted disclosures. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to establish a formal process that will allow the financial statements, related notes, and stewardship and supplemental information in the CFS to be presented in conformity with GAAP. The process should document decisions reached and the rationale for such decisions. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures for preparing the governmentwide management representation letter to help ensure that it is properly prepared and contains sufficient representations. Specifically, these policies and procedures should require an analysis of the agency management representations to determine if discrepancies exist between what the agency auditor reported and the representations made by the agency, including the resolution of such discrepancies. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures for preparing the governmentwide management representation letter to help ensure that it is properly prepared and contains sufficient representations. Specifically, these policies and procedures should require a determination that the agency management representation letters have been signed by the highest-level agency officials who are responsible for and knowledgeable about the matters included in the agency management representation letters. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures for preparing the governmentwide management representation letter to help ensure that it is properly prepared and contains sufficient representations. Specifically, these policies and procedures should require an assessment of the materiality thresholds used by federal agencies in their respective management representation letters. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures for preparing the governmentwide management representation letter to help ensure that it is properly prepared and contains sufficient representations. Specifically, these policies and procedures should require an assessment of the impact, if any, of federal agencies' materiality thresholds on the management representations made at the governmentwide level. Open.
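A minimal sketch of the threshold assessment described in the last two recommendations follows. The governmentwide threshold and agency figures are hypothetical:

```python
# Hypothetical comparison of agency materiality thresholds (in billions) with
# the threshold stated in the governmentwide management representation letter.
GOVERNMENTWIDE_THRESHOLD = 9.0  # illustrative only

agency_thresholds = {
    "Agency A": 0.5,
    "Agency B": 12.0,   # exceeds the governmentwide threshold
    "Agency C": None,   # threshold not disclosed in the agency's letter
}

for agency, threshold in agency_thresholds.items():
    if threshold is None:
        print(agency, "- no threshold disclosed; follow-up required")
    elif threshold > GOVERNMENTWIDE_THRESHOLD:
        print(agency, "- threshold exceeds governmentwide level; assess impact")
    else:
        print(agency, "- threshold within governmentwide level")
```

An agency threshold above the governmentwide level, or an undisclosed one, is exactly the condition whose impact the recommendations ask Treasury and OMB to assess.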
The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures for preparing the governmentwide management representation letter to help ensure that it is properly prepared and contains sufficient representations. Specifically, these policies and procedures should require an evaluation and assessment of the omission of representations ordinarily included in agency management representation letters. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures for preparing the governmentwide management representation letter to help ensure that it is properly prepared and contains sufficient representations. Specifically, these policies and procedures should require an analysis and aggregation of the agencies' summary of unadjusted misstatements to determine the completeness of the summaries and to ascertain the materiality, both individually and in the aggregate, of such unadjusted misstatements to the CFS taken as a whole. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to help ensure that agencies provide adequate information in their legal representation letters regarding the expected outcome of the cases. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to help ensure that agencies provide related management schedules. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that agencies develop a detailed schedule of all major treaties and other international agreements that obligate the U.S. government to provide cash, goods, or services, or that create other financial arrangements that are contingent on the occurrence or nonoccurrence of future events (a starting point for compiling these data could be the State Department's Treaties in Force). Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that agencies classify all such scheduled major treaties and other international agreements as commitments or contingencies. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that have a reasonably possible chance of resulting in a loss or claim as a contingency. Open. 
The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that are classified as commitments and that may require measurable future financial obligations. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that agencies take steps to prevent major treaties and other international agreements that are classified as remote from being recorded or disclosed as probable or reasonably possible in the CFS. Open. As Treasury is designing its new compilation process, which it expects to implement beginning with the fiscal year 2004 CFS, the Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to design the new compilation process to directly link information from federal agencies' audited financial statements to amounts reported in all the applicable CFS and related footnotes. Open. As Treasury is designing its new compilation process, which it expects to implement beginning with the fiscal year 2004 CFS, the Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to consider the other applicable recommendations in this report when designing and implementing the new compilation process. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of Statement of Federal Financial Accounting Standards (SFFAS) No. 3, Accounting for Inventory and Related Property, paragraph 91, which requires the reporting entity to disclose the valuation basis for foreclosed property. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 91, which requires the reporting entity to disclose the changes from the prior year's accounting methods, if any. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 91, which requires the reporting entity to disclose the restrictions on the use/disposal of property. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 91, which requires the reporting entity to disclose the balances by categories (i.e., pre-1992 and post-1991 foreclosed property). Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 91, which requires the reporting entity to disclose the number of properties held and average holding period by type or category. Open. 
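Looking back across the treaty and international agreement recommendations above, the classification logic they describe can be sketched as follows. The category names track the recommendations; the treatment shown for "probable" follows general contingency accounting and is an assumption here, not text from the recommendations:

```python
# Hypothetical mapping from an assessed likelihood of loss to CFS treatment.
def disclosure_treatment(likelihood):
    treatments = {
        "probable": "record as a liability when the amount is measurable",
        "reasonably possible": "disclose amounts in the notes as a contingency",
        "remote": "do not record or disclose in the CFS",
    }
    return treatments[likelihood]

for level in ("probable", "reasonably possible", "remote"):
    print(level + ":", disclosure_treatment(level))
```

The point of the final treaty recommendation is the "remote" branch: items assessed as remote must not migrate into the probable or reasonably possible treatments.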
The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 91, which requires the reporting entity to disclose the number of properties for which foreclosure proceedings are in process at the end of the period for foreclosed assets acquired in full or partial settlement of a direct or guaranteed loan. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 18, Amendments to Accounting Standards for Direct Loans and Loan Guarantees, paragraph 9, which requires credit programs to reestimate the subsidy cost allowance for outstanding direct loans and the liability for outstanding loan guarantees. There are two kinds of reestimates: (1) interest rate reestimates and (2) technical/default reestimates. Entities should measure and disclose each program's reestimates in these two components separately. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 18, Amendments to Accounting Standards for Direct Loans and Loan Guarantees, paragraph 10, which requires the reporting entity to display in the notes to the financial statements a reconciliation between the beginning and ending balances of the subsidy cost allowance for outstanding direct loans and the liability for outstanding loan guarantees reported on the entity's balance sheet. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 18, Amendments to Accounting Standards for Direct Loans and Loan Guarantees, paragraph 11, which requires disclosure of the total amount of direct or guaranteed loans disbursed for the current reporting year and the preceding reporting year. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 18, Amendments to Accounting Standards for Direct Loans and Loan Guarantees, paragraph 11, which requires disclosure of the subsidy expense by components, recognized for the direct or guaranteed loans disbursed in the current reporting year and the preceding reporting year. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 18, Amendments to Accounting Standards for Direct Loans and Loan Guarantees, paragraph 11, which requires disclosure of the subsidy reestimates by components for the current reporting year and the preceding reporting year. Open. The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 18, Amendments to Accounting Standards for Direct Loans and Loan Guarantees, paragraph 11, which requires disclosure, at the program level, of the subsidy rates for the total subsidy cost and its components for the interest subsidy costs, default costs (net of recoveries), fees and other collections, and other costs estimated for direct loans and loan guarantees in the current year's budget for the current year's cohorts. Open.
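The beginning-to-ending reconciliation required by SFFAS No. 18, paragraph 10, can be illustrated with a simple rollforward. The line items and amounts below are hypothetical; only the two reestimate components are taken from the standard's requirements as described above:

```python
# Hypothetical rollforward of a subsidy cost allowance (amounts in millions).
beginning_allowance = 120.0
activity = {
    "Subsidy expense for new disbursements": 15.0,
    "Interest rate reestimates": -3.0,
    "Technical/default reestimates": 6.0,
    "Write-offs and other adjustments": -8.0,
}
ending_allowance = beginning_allowance + sum(activity.values())

print("Beginning balance:", beginning_allowance)
for item, amount in activity.items():
    print("  " + item + ":", amount)
print("Ending balance:", ending_allowance)
```

Displaying each component of the change, rather than only the net movement, is what the paragraph 10 disclosure requires.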
The note disclosure for loans receivable and loan guarantee liabilities should meet the requirements of SFFAS No. 18, Amendments to Accounting Standards for Direct Loans and Loan Guarantees, paragraph 11, which requires the reporting entity to disclose, discuss, and explain events and changes in economic conditions, other risk factors, legislation, credit policies, and subsidy estimation methodologies and assumptions that have had a significant and measurable effect on subsidy rates, subsidy expense, and subsidy reestimates. Open. The note disclosure for inventories and operating materials and supplies should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 30, which requires the difference between the carrying amount and the expected net realizable value to be recognized as a loss or gain and either separately reported or disclosed when inventory or operating materials and supplies are declared excess, obsolete, or unserviceable. Open. The note disclosure for inventories and operating materials and supplies should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraphs 35 and 50, which require disclosure of the general composition of inventory and operating materials and supplies. Open. The note disclosure for inventories and operating materials and supplies should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraphs 35 and 50, which require disclosure of any changes from the prior year in accounting methods for inventory and operating materials and supplies. Open. The note disclosure for inventories and operating materials and supplies should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraphs 35 and 50, which require the disclosure of any restrictions on the sale of inventory and the use of operating materials and supplies. Open. The note disclosure for inventories and operating materials and supplies should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraphs 35 and 50, which require disclosure of any changes in the criteria for categorizing inventory and operating materials and supplies. Open. The note disclosure for stockpile material should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 56, which requires disclosure of the basis for valuing stockpile material, including valuation method and any cost flow assumptions. Open. The note disclosure for stockpile material should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 56, which requires disclosure of any changes from the prior year's accounting methods. Open. The note disclosure for stockpile material should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 56, which requires disclosure of restrictions on the use of stockpile material. Open. The note disclosure for stockpile material should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 56, which requires disclosure of the balances in each category of stockpile material (i.e., stockpile material held and held for sale). Open. The note disclosure for stockpile material should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 56, which requires disclosure of the criteria for grouping stockpile material held for sale. Open. The note disclosure for stockpile material should meet the requirements of SFFAS No.
3, Accounting for Inventory and Related Property, paragraph 56, which requires disclosure of changes in criteria for categorizing stockpile material held for sale. Open. The note disclosure for stockpile material should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 55, which requires disclosure of any difference between the carrying amount (i.e., purchase price or cost) of stockpile material held for sale and the estimated selling price of such assets. Open. The note disclosure for seized material should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 66, which requires disclosure of the valuation method. Open. The note disclosure for seized material should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 66, which requires disclosure of any changes from the prior year's accounting methods. Open. The note disclosure for seized material should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 66, which requires disclosure of the analysis of change in seized property (including dollar value and number of seized properties) that is on hand at the beginning of the year, seized during the year, disposed of during the year, and on hand at the end of the year, as well as known liens or other claims against the property. This information should be presented by type of seizure and method of disposition, when material. Open. The note disclosure for forfeited property should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 78, which requires disclosure of the valuation method. Open. The note disclosure for forfeited property should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 78, which requires disclosure of the analysis of the changes in forfeited property by type and dollar amount that includes (1) number of forfeitures on hand at the beginning of the year, (2) additions, (3) disposals and method of disposition, and (4) end-of-year balances. Open. The note disclosure for forfeited property should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 78, which requires disclosure of any restriction on the use or disposition of the property. Open. The note disclosure for forfeited property should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 78, which requires disclosure, if available, of an estimate of the value of property to be distributed to other federal, state, and local agencies in future reporting periods. Open. The note disclosure for goods held under price support and stabilization programs should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 98, which requires that if a contingent loss is not recognized because it is less than probable or it is not reasonably measurable, disclosure of the contingency shall be made if it is at least reasonably possible that a loss may occur. Open. The note disclosure for goods held under price support and stabilization programs should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 109, which requires disclosure of the basis for valuing commodities, including valuation method and cost flow assumptions. Open.
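The analysis-of-change disclosures for seized and forfeited property above are, in effect, rollforwards, so a compiler can verify that the disclosed pieces tie. A hypothetical check, with invented amounts:

```python
# Rollforward check: beginning balance + additions - disposals = ending balance,
# applied to both dollar value and number of properties. Amounts are invented.
def rollforward_ties(begin, additions, disposals, end, tolerance=0.005):
    return abs(begin + additions - disposals - end) < tolerance

# Dollar value (millions) and property counts for a hypothetical seizure type.
assert rollforward_ties(begin=410.0, additions=95.0, disposals=120.0, end=385.0)
assert rollforward_ties(begin=1240, additions=300, disposals=410, end=1130)
print("Seized property rollforward ties for value and count.")
```

The same arithmetic applies to the forfeited property analysis of changes, which is likewise broken out by beginning balance, additions, disposals, and end-of-year balances.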
The note disclosure for goods held under price support and stabilization programs should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 109, which requires disclosure of any changes from the prior year's accounting methods. Open. The note disclosure for goods held under price support and stabilization programs should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 109, which requires disclosure of any restrictions on the use, disposal, or sale of commodities. Open. The note disclosure for goods held under price support and stabilization programs should meet the requirements of SFFAS No. 3, Accounting for Inventory and Related Property, paragraph 109, which requires disclosure of the analysis of the change in dollar amount and volume of commodities, including those (1) on hand at the beginning of the year, (2) acquired during the year, (3) disposed of during the year listed by method of disposition, (4) on hand at the end of the year, (5) on hand at year-end and estimated to be donated or transferred during the coming period, and (6) received as a result of surrender of collateral related to nonrecourse loans outstanding. The analysis should also show the dollar value and volume of purchase agreement commitments. Open. The note disclosure for property, plant, and equipment (PP&E) should meet the disclosure requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment, paragraph 45, which requires disclosure of the estimated useful lives for each major class of PP&E. Open. The note disclosure for PP&E should meet the disclosure requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment, paragraph 45, which requires disclosure of capitalization thresholds, including any changes in thresholds during the period. Open. The note disclosure for PP&E should meet the disclosure requirements of SFFAS No. 10, Accounting for Internal Use Software, paragraph 35, which requires disclosure of the cost, associated amortization, and book value of internal use software. Closed. The fiscal year 2003 CFS footnote for PP&E disclosed the cost, associated amortization, and book value of internal use software. The note disclosure for PP&E should meet the disclosure requirements of SFFAS No. 10, Accounting for Internal Use Software, paragraph 35, which requires disclosure of the estimated useful life for each major class of software for internal use software. Open. The note disclosure for PP&E should meet the disclosure requirements of SFFAS No. 10, Accounting for Internal Use Software, paragraph 35, which requires disclosure of the method of amortization for internal use software. Open. The note disclosure for PP&E should meet the disclosure requirements of SFFAS No. 16, Amendments to Accounting for Property, Plant, and Equipment, paragraph 9, which requires an appropriate PP&E note disclosure to explain that "physical quantity" information for the multiuse heritage assets is included in supplemental stewardship reporting for heritage assets. Open.
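The internal use software items that SFFAS No. 10 requires to be disclosed (cost, associated amortization, book value, estimated useful life, and amortization method) are arithmetically linked: book value is cost less accumulated amortization. A hypothetical illustration, using straight-line amortization only as an example of a disclosed method:

```python
# Hypothetical internal use software disclosure (amounts in millions).
cost = 900.0
useful_life_years = 5   # disclosed estimated useful life
years_in_service = 2

annual_amortization = cost / useful_life_years          # straight-line example
accumulated_amortization = annual_amortization * years_in_service
book_value = cost - accumulated_amortization

print("Cost:", cost)
print("Accumulated amortization:", accumulated_amortization)
print("Book value:", book_value)
```

Disclosing all three amounts, plus the life and method, lets a reader verify the relationship rather than take the book value on faith.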
The note disclosure for federal employee and veteran benefits payable should be completely and properly reported; specifically, (1) it should include a line for the valuation of plan amendments that occurred during the year, and (2) the liability for military pensions and the related note disclosure on the "change in actuarial accrued pension liability and components of related expenses" should agree with the information presented in the Department of Defense's financial statements. Open. The note disclosure for environmental and disposal liabilities should meet the requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment, that require (1) estimation and recognition of cleanup costs associated with general PP&E at the time the PP&E is placed in service and (2) recognition of a liability for the portion of the estimated total cleanup cost attributable to that portion of the physical capacity used or that portion of the estimated useful life that has passed since the general PP&E was placed in service. Open. The note disclosure for environmental and disposal liabilities should meet the requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment, that require inclusion of material changes in total estimated cleanup costs due to changes in laws, technology, or plans. Open. The note disclosure for capital leases should meet the requirements of Financial Accounting Standards Board (FASB) Statement of Financial Accounting Standards (SFAS) No. 13, Accounting for Leases, paragraph 16, which requires future minimum lease payments as of the date of the latest balance sheet presented, in the aggregate and for each of the 5 succeeding fiscal years, with separate deductions from the total for the amount representing executory costs, including any profit thereon, included in the minimum lease payments, and for the amount of the imputed interest necessary to reduce the net minimum lease payments to present value. Open. The note disclosure for capital leases should meet the requirements of FASB SFAS No. 13, Accounting for Leases, paragraph 16, which requires a summary of assets under capital lease by major asset category and the related total accumulated amortization. Open. The note disclosure for capital leases should meet the requirements of FASB SFAS No. 13, Accounting for Leases, paragraph 16, which requires a general description of the lessee's leasing arrangements, including but not limited to (1) the basis on which contingent rental payments are determined, (2) the existence and terms of renewal or purchase options and escalation clauses, and (3) restrictions imposed by lease agreements, such as those concerning dividends, additional debt, and further leasing. Open. The note disclosure for life insurance liabilities should meet the requirements of SFFAS No. 5, Accounting for Liabilities of the Federal Government, paragraph 117, which requires all federal reporting entities with whole life insurance programs to follow the applicable private sector standards when reporting the liability for future policy benefits: FASB SFAS No. 60, Accounting and Reporting by Insurance Enterprises; SFAS No. 97, Accounting and Reporting by Insurance Enterprises for Certain Long-Duration Contracts and for Realized Gains and Losses from the Sale of Investments; and SFAS No.
120, Accounting and Reporting by Mutual Life Insurance Enterprises and by Insurance Enterprises for Certain Long-Duration Participating Contracts; and American Institute of Certified Public Accountants Statement of Position 95-1, Accounting for Certain Insurance Activities of Mutual Life Insurance Enterprises. Open. The note disclosure for life insurance liabilities should meet the requirements of SFFAS No. 5, Accounting for Liabilities of the Federal Government, paragraph 5, which requires all components of the liability for future policy benefits (i.e., the net-level premium reserve for death and endowment policies and the liability for terminal dividends) to be separately disclosed in a footnote with a description of each amount and an explanation of its projected use and any other potential uses (e.g., reducing premiums, determining and declaring dividends available, and reducing federal support in the form of appropriations related to administrative cost or subsidies). Open. The note disclosure on major commitments and contingencies should be consistent with disclosed information in individual agencies' financial statements. Open. The note disclosure on major commitments and contingencies should disclose sufficient information (detailed discussion) regarding certain major commitments and contingencies. Open. The note disclosure for collections and refunds of federal revenue should meet the requirements of SFFAS No. 7, Concepts for Reconciling Budgetary and Financial Accounting, paragraph 64, which requires, among other things, that collecting entities disclose the basis of accounting when the application of the general rule results in a modified cash basis of accounting. Closed. The fiscal year 2003 CFS footnote for collections and refunds of federal revenue reflects that such information is accounted for using a modified cash basis of accounting. The note disclosure for collections and refunds of federal revenue should meet the requirements of SFFAS No. 7, Concepts for Reconciling Budgetary and Financial Accounting, paragraph 69.2, which requires collecting entities to provide in the other accompanying information any relevant estimates of the annual tax gap that become available as a result of federal government surveys or studies. Open. The note disclosure for dedicated collections should meet the requirements of SFFAS No. 7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, which requires inclusion of condensed information about assets and liabilities showing investments in Treasury securities, other assets, liabilities due and payable to beneficiaries, other liabilities, and fund balance. Open. The note disclosure for dedicated collections should meet the requirements of SFFAS No. 7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, which requires inclusion of condensed information on net cost and changes to fund balance, showing revenues by type (exchange/nonexchange), program expenses, other expenses, other financing sources, and other changes in fund balance. Open. The note disclosure for dedicated collections should meet the requirements of SFFAS No. 7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, which requires inclusion of any revenues, other financing sources, or costs attributable to the fund under accounting standards but not legally allowable as credits or charges to the fund. Open. The note disclosure for Indian trust funds should meet the requirements of SFFAS No.
7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, which requires a description of each fund's purpose, how the administrative entity accounts for and reports the fund, and its authority to use those collections. Open. The note disclosure for Indian trust funds should meet the requirements of SFFAS No. 7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, which requires disclosure of the sources of revenue or other financing for the period and an explanation of the extent to which they are inflows of resources to the government or the result of intragovernmental flows. Open. The note disclosure for Indian trust funds should meet the requirements of SFFAS No. 7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, which requires condensed information about assets and liabilities showing investments in Treasury securities, other assets, liabilities due and payable to beneficiaries, and other liabilities. Open. The note disclosure for Indian trust funds should meet the requirements of SFFAS No. 7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, which requires condensed information on net cost and changes to fund balance, showing revenues by type (exchange/nonexchange), program expenses, other expenses, other financing sources, and other changes in fund balance. Open. The note disclosure for Indian trust funds should meet the requirements of SFFAS No. 7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, which requires disclosure of any revenues, other financing sources, or costs attributable to the fund under accounting standards, but not legally allowable as credits or charges to the fund. Open. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, paragraph 31, which requires the program descriptions for Hospital Insurance and Supplementary Medical Insurance and an explanation of trends revealed in Chart 11: Estimated Railroad Retirement Income (Excluding Interest) and Expenditures 2002-2076. Closed. The fiscal year 2003 social insurance disclosures in the CFS provided the disclosures required in this recommendation. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, paragraph 24, which requires a description of statutory or other material changes, and the implications thereof, affecting the Medicare and Unemployment Insurance programs after the current fiscal year. Closed. The fiscal year 2003 social insurance disclosures in the CFS provided the disclosures required in this recommendation. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, paragraph 25, which requires the significant assumptions used in making estimates and projections regarding the Black Lung and Unemployment Insurance programs. Closed. The fiscal year 2003 social insurance disclosures in the CFS provided the disclosures required in this recommendation. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, paragraph 32(1)(b), which requires the total cash inflow from all sources, less net interest on intragovernmental borrowing and lending, and the total cash outflow to be shown in nominal dollars for the Hospital Insurance program. Closed.
The fiscal year 2003 social insurance disclosures in the CFS provided the disclosures required in this recommendation. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, paragraph 32(1)(a), which requires the narrative to accompany the cash flow data for Unemployment Insurance. The narrative should include the identification of any year or years during the projection period when cash outflow exceeds cash inflow without interest on intragovernmental borrowing or lending, and the presentation should include an explanation of material crossover points, if any, where cash outflow exceeds cash inflow and the possible reasons for this. Closed. The fiscal year 2003 social insurance disclosures in the CFS provided the disclosures required in this recommendation. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, paragraphs 27(3)(h) and 27(3)(j), which require the estimates of the fund balances at the respective valuation dates of the social insurance programs (except Unemployment Insurance) to be included for each of the 4 preceding years. Only 1 year is shown. Closed. The fiscal year 2003 social insurance disclosures in the CFS provided the disclosures required in this recommendation. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, paragraph 32(4), which requires individual program sensitivity analyses for projection period cash flow in present value dollars and annual cash flow in nominal dollars. The CFS includes only present value sensitivity analyses for Social Security and Hospital Insurance. Paragraph 32(4) states that, at a minimum, the summary should present Social Security, Hospital Insurance, and Supplementary Medical Insurance separately. Open. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, paragraph 27(4)(a), which requires the individual program sensitivity analyses for Social Security and Hospital Insurance to include an analysis of assumptions regarding net immigration. Open. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, paragraph 27(4)(a), which requires the individual program sensitivity analysis for Hospital Insurance to include an analysis of death rates. Closed. The fiscal year 2003 social insurance disclosures in the CFS provided the disclosures required in this recommendation. The note disclosure for social insurance should meet the requirements of SFFAS No. 17, Accounting for Social Insurance, by not including financial interchange income (intragovernmental income from Social Security) in the actuarial present value information for the Railroad Retirement Board. Closed. The fiscal year 2003 social insurance disclosures in the CFS provided the disclosures required in this recommendation. The note disclosure for nonfederal physical property included in stewardship information should meet the requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 87, which requires disclosure of the annual investment, including a description of federally owned physical property transferred to state and local governments. This information should be provided for the year ended on the balance sheet date as well as for each of the 4 preceding years.
If data for additional years would provide a better indication of investment, reporting of the additional years' data is encouraged. Reporting should be at a meaningful category or level. Open. The note disclosure for nonfederal physical property included in stewardship information should meet the requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 87, which requires a description of major programs involving federal investments in nonfederal physical property, including a description of programs or policies under which noncash assets are transferred to state and local governments. Open. The note disclosure for human capital included in stewardship information should meet the requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 94, which requires a narrative description and the full cost of the investment in human capital for the year being reported on as well as the preceding 4 years (if full cost data are not available, outlay data can be reported). Open. The note disclosure for human capital included in stewardship information should meet the requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 94, which requires the full cost or outlay data for investments in human capital at a meaningful category or level (e.g., by major program, agency, or department). Open. The note disclosure for human capital included in stewardship information should meet the requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 94, which requires a narrative description of major education and training programs considered federal investments in human capital. Open. The note disclosure for research and development included in stewardship information should meet the requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 94, which requires reporting of the annual investment made in the year ended on the balance sheet date as well as in each of the 4 years preceding that year. (As defined in this standard, "annual investment" includes more than the annual expenditure reported by character class for budget execution. Full cost shall be measured and accounted for in accordance with SFFAS No. 4, Managerial Cost Accounting Standards for the Federal Government.) If data for additional years would provide a better indication of investment, reporting of the additional years' data is encouraged. In those unusual instances when entities have no historical data, only current reporting year data need be reported. Reporting must be at a meaningful category or level, for example, a major program or department. Open. The note disclosure for research and development included in stewardship information should meet the requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 94, which requires a narrative description of major research and development programs. Open. The note disclosure for deferred maintenance should meet the requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment, paragraphs 83 and 84, which require inclusion of the method of measuring deferred maintenance for each major class of PP&E. Open. The note disclosure for deferred maintenance should meet the requirements of SFFAS No. 
6, Accounting for Property, Plant, and Equipment, paragraphs 83 and 84, which require that if the condition assessment survey method of measuring deferred maintenance is used, the following should be presented for each major class of PP&E: (1) description of requirements or standards for acceptable operating condition, (2) any changes in the condition requirements or standards, and (3) asset condition and a range estimate of the dollar amount of maintenance needed to return the asset to its acceptable operating condition. Open. The note disclosure for deferred maintenance should meet the requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment, paragraphs 83 and 84, which require that if the total life-cycle cost method is used, the following should be presented for each major class of PP&E: (1) the original date of the maintenance forecast and an explanation for any changes to the forecast, (2) prior year balance of the cumulative deferred maintenance amount, (3) the dollar amount of maintenance that was defined by the professionals who designed, built, or managed the PP&E as required maintenance for the reporting period, (4) the dollar amount of maintenance actually performed during the period, (5) the difference between the forecast and actual maintenance, (6) any adjustments to the scheduled amounts deemed necessary by the managers of the PP&E, and (7) the ending cumulative balance for the reporting period for each major class of asset experiencing deferred maintenance. Open. The note disclosure for deferred maintenance should meet the requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment, paragraphs 83 and 84, which require that if management elects to disclose critical and noncritical amounts, the disclosure is to include management's definition of these categories. Open. The note disclosure for stewardship responsibilities related to the risk assumed for federal insurance and guarantee programs should meet the requirements of SFFAS No. 5, Accounting for Liabilities of the Federal Government, paragraph 106, which requires that when financial information pursuant to FASB standards on federal insurance and guarantee programs conducted by government corporations is incorporated in general purpose financial reports of a larger federal reporting entity, the entity should report as required supplementary information what amounts and periodic change in those amounts would be reported under the "risk assumed" approach. Open.

1. See “Agency Comments and Our Evaluation” section. 2. Treasury provided a detailed reconciliation that purports to show that prior period adjustments accounted for the majority of the differences we identified. The spreadsheet provided an expanded version of the information we had already taken into account in our review of the fiscal year 2003 reconciliation statement. Therefore, our view is unchanged. 3. As we stated last year as part of our fiscal year 2002 audit, we were not calling for Treasury to use federal agencies’ financial statements to prepare the Statement of Changes in Cash Balance. Instead, we recommended that Treasury collect certain information already reported in federal agencies’ audited financial statements and develop procedures that ensure consistency of the significant line items on the Statement of Changes in Cash Balance with the agency-reported information. As we stated in our fiscal year 2002 report, Treasury has expressed the belief that the information it maintains in its system is materially reliable.
However, federal agencies also believe their amounts are materially reliable and are supported by unqualified audit opinions on their financial statements. 4. Our example is appropriate. As stated in this report, we found that the total operating cash amount reported in the Statement of Changes in Cash Balance did not link to the underlying agencies’ operating cash reported in their financial statements. Our analysis showed that Treasury reported operating cash in its own financial statements of $51 billion but reported only $35 billion of operating cash in the Statement of Changes in Cash Balance in the CFS. Treasury attributes the difference to time deposits and other cash items, which are included in Treasury’s departmentwide financial statements as components of operating cash but are reported in the CFS separately from operating cash. Given that Treasury is the preparer of the CFS, we see this inconsistency as a relevant example. 5. As part of our audit of the fiscal year 2002 CFS, we found that 2 of the 30 federal agencies’ management representation letters we had reviewed had discrepancies between what the auditor found and what the agency represented in its management representation letter. Treasury needs to be aware of these types of discrepancies and their resolution in order to determine the effects, if any, on the representations made in the management representation letter for the CFS. 6. As part of our audit of the fiscal year 2002 CFS, we found that 8 of the 30 federal agencies’ management representation letters we had reviewed were not signed by the appropriate level of management. Treasury has a responsibility to determine that the agency management representation letters are signed by the highest-level agency officials who are responsible for and knowledgeable about the matters included in the agency management representation letters because Treasury is relying on federal agencies’ representations in the management representation letter for the CFS. 7. As part of our audit of the fiscal year 2002 CFS, we found that 25 of the 30 federal agencies’ management representation letters we had reviewed did not disclose the materiality thresholds used by management in determining items to be included in the letter. Treasury stated that the audit standards do not require these amounts to be included in the management representation letter. While we agree that the standards do not require the materiality amounts to be included, we require Treasury and OMB to include a materiality threshold in the management representation letter for the CFS. Therefore, without assessing the materiality thresholds used by federal agencies in their management representation letters, we are unsure how Treasury and OMB can ensure that the representations made to GAO at the governmentwide level are within the materiality thresholds they state in the management representation letter for the CFS. 8. Materiality is one of several tools the auditor uses to determine that the nature, timing, and extent of procedures are appropriate. Materiality is a matter of the auditors’ professional judgment, influenced by the needs of the reasonable person relying on the financial statements, and is not negotiated between the auditors and their clients. The management representation letter findings we reported as part of our fiscal year 2002 audit have also been communicated to agency auditors, and we will continue to work with them to resolve these issues.
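The operating cash linkage discussed in comment 4 can be shown with the report's own figures; the $16 billion difference is derived here from those figures and is attributed, per Treasury, to time deposits and other cash items presented outside operating cash in the CFS:

```python
# Figures from comment 4 (in billions of dollars).
treasury_operating_cash = 51.0  # Treasury's departmentwide financial statements
cfs_operating_cash = 35.0       # Statement of Changes in Cash Balance in the CFS

time_deposits_and_other = treasury_operating_cash - cfs_operating_cash
print("Amount attributed to time deposits and other cash items:",
      time_deposits_and_other, "billion")
```

A direct linkage between agency-reported amounts and the CFS would make such differences visible and explainable as a matter of course.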
| For the past 7 years, since the first audit of the consolidated financial statements of the U.S. government (CFS), certain material weaknesses in internal control and financial reporting have resulted in conditions that have prevented GAO from expressing an opinion on the CFS. Specifically, GAO has reported that the federal government did not have adequate systems, controls, and procedures to properly prepare the CFS. In October 2003, GAO reported on weaknesses identified during the fiscal year 2002 audit regarding financial reporting procedures and internal control over the process for preparing the CFS. The purpose of this report is to (1) discuss additional weaknesses identified during the fiscal year 2003 audit, (2) recommend improvements to address those weaknesses, and (3) provide the status of corrective actions to address the 129 open recommendations contained in the October 2003 report. Many of the weaknesses in internal control that have contributed to GAO's continuing disclaimers of opinion on the CFS were identified by agency financial statement auditors during their audits of federal agencies' financial statements and have been reported in detail with recommendations to agencies in separate reports. However, some of the weaknesses GAO reported were identified during GAO's tests of the Department of the Treasury's process for preparing the CFS. Such weaknesses impair the federal government's ability to ensure that the CFS is consistent with the underlying audited agency financial statements, properly balanced, and in conformity with U.S. generally accepted accounting principles. In addition to the compilation and reporting weaknesses that GAO reported in October 2003, GAO found additional weaknesses in the compilation and reporting process in the following seven areas during the fiscal year 2003 CFS audit: (1) allocation methodology for certain costs in the statement of net cost, (2) statement of changes in cash balance from unified budget and other activities, (3) reporting of criminal debt, (4) recording and disclosing contingencies, (5) directly linking audited federal agency financial statements to the CFS, (6) prior period adjustments, and (7) conformity with U.S. generally accepted accounting principles. GAO found that with respect to four required disclosure areas, information was either not included in the CFS or was not presented in conformity with U.S. generally accepted accounting principles. As a result of this and certain other weaknesses GAO identified, GAO was unable to determine if the missing information was material to the CFS.
The four disclosure areas were (1) federal employee and veteran benefits payable, (2) environmental and disposal liabilities, (3) research and development, and (4) deferred maintenance. GAO's October 2003 report contained 129 recommendations. Of those recommendations, 118 remained open as of February 20, 2004, the end of GAO's fieldwork for the fiscal year 2003 CFS audit. |
The TANF block grant was created by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) and was designed to give states the flexibility to provide both traditional welfare cash assistance benefits as well as a variety of other benefits and services to meet the needs of low-income families and children. TANF has four broad goals: (1) provide assistance to needy families so that children may be cared for in their own homes or homes of relatives; (2) end dependence of needy parents on government benefits by promoting job preparation, work, and marriage; (3) prevent and reduce out-of-wedlock pregnancies; and (4) encourage two-parent families. Within these goals, states have responsibility for designing, implementing, and administering their welfare programs to comply with federal guidelines, as defined by federal law and HHS. In creating TANF, the federal government significantly changed its role in financing welfare programs in states. PRWORA ended low-income families' entitlement to cash assistance by replacing the Aid to Families with Dependent Children (AFDC) program—for which the federal grant amount was based on the amount of state spending—with the TANF block grant, a $16.5 billion per year fixed federal funding stream to states. PRWORA coupled the block grant with an MOE provision, which requires states to maintain a significant portion of their own historic financial commitment to their welfare programs as a condition of receiving their full federal TANF allotments. Importantly, with the fixed federal funding stream, states assume greater fiscal risks in the event of a recession or increased program costs. However, in acknowledgment of these risks, PRWORA also created a TANF Contingency Fund that states could access in times of economic distress. Similarly, during the recent economic recession, the federal government created a $5 billion Emergency Contingency Fund for state TANF programs through the American Recovery and Reinvestment Act of 2009, available in fiscal years 2009 and 2010. The most recent data available, for fiscal year 2010, show that the federal government and states spent almost $36 billion on benefits and services meeting one or more of the TANF goals. In that year, states provided, on average, about 1.9 million families per month with ongoing cash assistance, including about 800,000 families in which the children alone received benefits. This represents a significant drop from the more than 3 million families receiving cash assistance when states implemented TANF in fiscal year 1997. In addition, states provide a broad range of services to other families in need not included in the welfare caseload data. The total number of families assisted is not known, as we have noted in our previous work. These allowable services under TANF can generally include any spending reasonably deemed to meet one or more of the four broad goals of TANF, and can include one-time cash payments, work and training activities, work supports such as child care and transportation, efforts to promote two-parent families or marriage, and child welfare services, among others. When TANF began, cash assistance represented the largest spending category (73 percent in fiscal year 1997). In contrast, cash assistance spending in fiscal year 2010 accounted for 30 percent of total TANF spending.
Reducing dependence on government benefits through job preparation and employment is a key goal of TANF, and PRWORA identified the work participation rate as one of the federal measures of state TANF programs' performance. This rate is generally calculated as the proportion of work-eligible TANF cash assistance recipients engaged in allowable work activities. States are held accountable for ensuring that generally at least 50 percent of all families receiving TANF cash assistance benefits participate in one or more of the allowable work activities for a specified number of hours each week. TANF provisions include other features to help emphasize the importance of work and the temporary nature of assistance, such as 60-month time limits on the receipt of aid for many families. 42 U.S.C. § 607. The 12 work activities are: unsubsidized employment, subsidized private sector employment, subsidized public sector employment, work experience (if sufficient private sector employment is not available), on-the-job training, job search and job readiness assistance, community service programs, vocational educational training, job skills training directly related to employment, education directly related to employment (for recipients who have not received a high school diploma or certificate of high school equivalency), satisfactory attendance at secondary school or in a course of study leading to a certificate of general equivalence (for recipients who have not completed secondary school or received such a certificate), and the provision of child care services to an individual who is participating in a community service program. 42 U.S.C. § 607(d). The preamble to the final rule issued by HHS in 1999 noted that the MOE cost-sharing arrangement reflected Congress' recognition that state financial participation is essential for the success of welfare reform. The preamble to this final rule also noted that Congress wanted states to be active partners in the welfare reform process. These requirements are an important element of TANF—if a state fails to meet its MOE requirement for any fiscal year, HHS is required by law to reduce dollar-for-dollar the amount of a state's basic TANF grant for the following fiscal year. Maintenance of effort requirements are sometimes found in federal grant programs to prevent states from substituting federal for state dollars. Such provisions can help ensure that federal block grant dollars are used for the broad program area intended by the Congress, in this case the four broad TANF purposes. Without such provisions, federal funds ostensibly provided for these broad areas could, in effect, be transformed into general fiscal relief for the states, as states could use some or all of their federal block grants to replace their own money invested in the program area. To the extent that this occurs, the ultimate impact of these federal dollars would be to increase state spending in other programs, reduce taxes, or some combination of both. A maintenance of effort requirement brings its own challenges—it can be complex to monitor and may lock states into meeting minimum spending levels that may no longer be warranted given changing conditions. Under TANF, while states have significant flexibility in how to spend their own money, several requirements guide the use of these state funds, including how much, for whom, and for what. Each state's amount of MOE is generally based on fiscal year 1994 state spending for a specific set of programs.
The 1996 welfare reform law consolidated and replaced programs under which the amount of federal spending was often based on state spending levels, and considerable state dollars contributed to these pre-TANF programs. Figure 1 shows the federal programs with related state spending that were included in establishing the fixed annual amount of the TANF block grant and state maintenance-of-effort level for each state. The required percentages of these previous state spending levels vary under different conditions: 80 percent—To receive its federal TANF funds, a state must generally spend state funds in an amount equal to at least 80 percent of the amount it spent on welfare and related programs in fiscal year 1994. 75 percent—If a state meets its minimum work participation rate requirements, then it generally need expend only 75 percent of the amount it spent in fiscal year 1994. 100 percent—To receive contingency funds, a state must expend 100 percent of that fiscal year 1994 amount. In addition to its own spending, a state may count toward its MOE certain in-kind or cash expenditures by third parties, such as nonprofit organizations, as long as the expenditures meet other MOE requirements, including those related to eligible families and allowable activities, discussed below. In addition, an agreement must exist between the state and the third party allowing the state to count the expenditures toward its MOE. Generally, to count toward a state's MOE, expenditures must be for "eligible families," that is, families who: include a child living with his or her custodial parent or other adult caretaker relative (or a pregnant woman); and meet the financial criteria, such as income and resources limits, established by a state for the particular service or assistance as described in its TANF plan. Each state is required to prepare and provide a biennial TANF plan describing its programs to HHS. Generally, expenditures for eligible families in these areas may count toward MOE: educational activities to increase self-sufficiency; job training and work (except for activities or services that a state makes generally available to its residents without cost and without regard to their income); cash assistance; child care assistance; certain administrative costs; and other activities considered in keeping with a TANF purpose. These expenditures may be made on behalf of families in a state's cash welfare program or for other eligible families through other state programs or initiatives. However, state-funded benefits, services, and activities that were not a part of the pre-reform programs generally may count as MOE only to the extent that they exceed the fiscal year 1995 level of expenditures in the programs. This is referred to as the "new spending" test. For example, if a state has currently spent its own funds on eligible families on an allowable activity, such as a refundable earned income tax credit, it may count toward its MOE only the current amount that exceeds that program's expenditures in fiscal year 1995. State MOE levels remained stable for many years and then increased more recently for several reasons. As shown in figure 2, until fiscal year 2006, MOE levels remained relatively stable, hovering around the 80 percent required minimum or the reduced rate of 75 percent for states that met their work participation rates. From fiscal years 2006 through 2009, they increased each year.
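To make the spending rules above concrete, the following is a minimal sketch, in Python, of how the required MOE level and the "new spending" test might be computed. All dollar figures, the constant FY1994_BASE, and the helper functions required_moe and countable_new_spending are hypothetical illustrations, not an official computation.

# Minimal sketch of the TANF MOE spending rules described above.
# Figures and names are hypothetical.

FY1994_BASE = 500_000_000  # state's fiscal year 1994 historic spending (hypothetical)

def required_moe(met_work_rates: bool, wants_contingency_funds: bool) -> float:
    """Return the minimum state spending required for the fiscal year."""
    if wants_contingency_funds:
        return 1.00 * FY1994_BASE  # contingency funds require 100 percent
    return (0.75 if met_work_rates else 0.80) * FY1994_BASE

def countable_new_spending(current: float, fy1995_level: float) -> float:
    """Apply the "new spending" test: for benefits and services that were
    not part of the pre-reform programs, only spending above the fiscal
    year 1995 level counts toward MOE."""
    return max(0.0, current - fy1995_level)

# A state that met its work participation rates needs 75 percent of its base.
print(required_moe(met_work_rates=True, wants_contingency_funds=False))  # 375000000.0

# A refundable earned income tax credit: only the amount above the
# program's fiscal year 1995 spending counts toward MOE.
print(countable_new_spending(current=60_000_000, fy1995_level=45_000_000))  # 15000000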
In a 2001 report, we examined issues related to the new federal-state fiscal partnership under TANF, noting several concerns related to TANF and MOE spending rules. We found at that time that the MOE requirement, in many cases, limited the extent to which states used their federal funds to replace state funds—an intended role for MOE. It also led to a situation in which many state officials said they were spending more than might be expected in the face of the large caseload drop in the earliest years of TANF. However, states have additional flexibility in making spending decisions. While states must meet MOE requirements, federal TANF funds may be "saved for a rainy day," providing states additional flexibility in their budget decisions; some states built up such reserves and drew them down to meet increasing needs in the recent economic downturn. Moreover, states have flexibility to provide a wide variety of services—as long as they are in keeping with the four broad purposes of TANF—to those on the cash welfare rolls and to other eligible families. 42 U.S.C. § 604(e). Each year, a state may in effect reserve some of its federal TANF funds to help it meet increased needs and costs in later years. A state's unspent funds can "accumulate" as a type of "rainy day fund" for its future use. Since TANF was created in 1996, states have been permitted to spend prior year TANF block grant funds on assistance—a category that includes cash benefits and supportive services for families receiving these benefits. However, the Recovery Act increased states' flexibility to spend prior year TANF block grant funds on all TANF-allowable benefits and services. State MOE spending increased by almost $2 billion, and much of the increase in expenditures was in areas that had temporarily been broadened. Many states claimed additional MOE to help them meet the work participation rates, as discussed in the next section. In recent years, some states have used their MOE spending to help them meet TANF work participation rates. Generally, states are held accountable for ensuring that at least 50 percent of all families receiving TANF cash assistance and considered work-eligible participate in one or more of the federally defined work activities for a specified number of hours each week. However, most states have not engaged that many recipients in work activities on an annual basis. For example, in fiscal year 2009, the most recent year for which data are available, less than 50 percent of TANF cash assistance families participated in work activities for the specified number of hours each week in 44 states, according to HHS. However, various policy and funding options in federal law and regulations allowed most of these states to meet their work participation rates. Factors that influenced states' abilities to meet the work participation rates included not only the number of families receiving TANF cash assistance who participated in work activities but also, for example, decreases in the number of families receiving TANF cash assistance and state MOE spending beyond what is required. Since TANF was created, the factor that states have commonly relied on to help them meet their required work participation rates is the caseload reduction credit. Specifically, decreases in the numbers of families receiving TANF cash assistance over a specified period are accounted for in each state's caseload reduction credit, which essentially lowers the state's required work participation rate from 50 percent, as illustrated in the sketch below.
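As a rough illustration of the adjustment just described, here is a minimal sketch in Python; the constant, function name, and figures are hypothetical. The prose example in the next paragraph works through the same 20-percentage-point case, and the excess MOE add-on, also discussed below, appears here as an optional input.

# Minimal sketch of the caseload reduction credit arithmetic.
# Names and figures are hypothetical.

STATUTORY_RATE = 50.0  # required work participation rate, in percentage points

def adjusted_required_rate(caseload_decline_pct: float,
                           excess_moe_credit_pct: float = 0.0) -> float:
    """Each percentage point of caseload decline (and, where claimed, of
    credit for excess MOE spending) lowers the required rate by one
    percentage point, but never below zero."""
    credit = caseload_decline_pct + excess_moe_credit_pct
    return max(0.0, STATUTORY_RATE - credit)

print(adjusted_required_rate(20.0))        # 30.0 -- caseload decline alone
print(adjusted_required_rate(20.0, 10.0))  # 20.0 -- with excess MOE credit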
For example, if a state’s caseload decreases by 20 percent during the relevant time period, the state receives a caseload reduction credit equal to 20 percentage points, which results in the state work participation rate requirement being adjusted from 50 to 30 percent. In each year since TANF was created, many states have used caseload declines to help them lower the required work participation rates. For example, in fiscal year 2009, 38 of the 45 states that met their required work participation rates for all TANF families did so in part because of their caseload decreases (see fig. 3). However, in recent years, the Congress updated the base year for assessing the caseload reduction credit, and as a result, some states also began to rely on state MOE expenditures to increase their caseload reduction credit, which lowers their required work participation rates. Under federal regulations, if states spend in excess of their required MOE amount, they are allowed to correspondingly increase their caseload reduction credits. By doing so, a state reduces its required work participation rate. In fiscal year 2009, 32 of the 45 states that met their required work participation rates for all TANF families claimed excess state MOE spending toward their caseload reduction credits. Sixteen of these states would not have met their rates without claiming these expenditures (see fig. 3). Among the states that needed to rely on excess state MOE spending to meet their work participation rates, most relied on these expenditures to add between 1 and 20 percentage points to their caseload reduction credits (see fig. 4). MOE is now playing an expanded role in TANF programs, as many states’ excess MOE spending has helped them meet work participation rates. While one state had used MOE expenditures toward its caseload reduction credit before fiscal year 2007, over half of the states (27) relied on these expenditures to increase their credits and help them meet their required work participation rates in one or more years between fiscal years 2007 and 2009. States may be making programmatic and budgetary decisions to use excess MOE to help them avoid penalties for failure to meet participation rates and possibly losing funds. In our previous work, states have cited concerns about difficulties in engaging a sufficient number of cash recipients in required activities for the required number of hours for several reasons, including limits on the types of activities that count, limited resources for developing and providing appropriate work activities, a lack of jobs particularly during tough economic times, and the characteristics of some cash assistance recipients that make it difficult for them to engage in countable work activities. However, this greater emphasis on the use of MOE increases the importance of understanding whether effective accountability measures are in place to ensure MOE funds are in keeping with requirements. In our 2001 report, some states expressed concerns that this MOE provision could become difficult to enforce. In doing that work, we spoke to many auditors who were in the midst of developing audit plans to address compliance with the new spending test. Several told us that developing these plans was relatively straightforward: the auditor should simply be able to establish a baseline for all the MOE expenditures the state was using and then trace those programs back to 1995 and certify that spending used for MOE was indeed new spending. 
However, we also noted that these plans could become more complex if states frequently changed the expenditures they were counting from one year to the next (i.e., changed the programs for which they needed baselines). In one state at that time, we were told that all expenditure data were archived after 5 years, and that auditing the annual certification would be especially difficult and time consuming if the state changes the programs it uses to meet its MOE requirement from year to year. We expect that several factors, such as changes in what MOE expenditures states may count, growth in some particular spending areas, and the growth in MOE spending overall, may have greatly increased the complexities involved in tracking MOE. In its final rule published in 1999, HHS provided information related to its plans for monitoring state MOE and noted that states recognize that they are ultimately accountable for their expenditure claims. HHS stated that states are audited annually or biennially and compliance with the basic MOE provisions is part of the audit. HHS added that it would use the results of the audits, together with its own analysis of state-provided data—required state quarterly expenditure reports and annual descriptive reports on MOE activities—to assess states' compliance. It also said it might undertake additional state reviews based on complaints that arise or requests from the Congress. We have not reviewed existing efforts to monitor MOE and cannot comment on their effectiveness. However, the extent to which states have relied on these expenditures to help them meet work participation rates, as well as to meet MOE requirements generally, highlights the importance of having reasonable assurances that current oversight is working. If MOE claims do not actually reflect maintaining or increasing service levels, low-income families and children may not be getting the assistance they need in the current environment and federal funds may not be used in the most efficient manner. MOE provisions are important but not without implementation and oversight challenges. Based on our previous work on federal grant design as well as more recent work on some MOE provisions under the Recovery Act, it is clear that such provisions are important mechanisms for helping ensure that federal spending achieves its intended effect. With TANF, at stake are billions of federal and state dollars that together represent a federal-state partnership to help needy families provide for their children and take steps toward economic independence. The work also points to administrative, fiscal, and accountability challenges in implementing MOE provisions, both from federal and state perspectives. While MOE provisions may be imperfect tools, with appropriate attention to design, implementation, and monitoring issues, such provisions are one way to help strike a balance between the potentially conflicting objectives of increasing state and local flexibility while attaining certain national objectives, including efficient use of federal resources in today's fiscal environment. We provided drafts of the reports we drew on for this testimony to HHS for its review, and copies of the agency's written responses can be found in the appendixes of the relevant reports. We also provided HHS a draft of this testimony, and officials provided technical comments that we incorporated as appropriate. Chairman Davis, Ranking Member Doggett, and Members of the Subcommittee, this concludes my statement.
I would be pleased to respond to any questions you may have. For questions about this statement, please contact Kay E. Brown at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include James Bennett, Robert Campbell, Rachel Frisk, Alex Galuten, Gale Harris, Tom James, Jean McSween, Ronni Schwartz, and Michelle Loutoo Wilson. | The $16.5 billion TANF block grant, created in 1996, is one of the key federal funding streams targeted to assist low-income families. While the block grant provides states with a fixed amount of federal dollars annually, it also includes state MOE requirements, which require states to maintain a significant portion of their own historic financial commitment to welfare-related programs. Over the last 15 years, this federal-state partnership has seen multiple program and fiscal changes, including a dramatic drop in the number of families receiving monthly cash assistance, as well as two economic recessions. To provide information for its potential extension or reauthorization, this testimony draws primarily on previous GAO work to focus on (1) the key features of the state MOE requirements and (2) how the role of state MOE spending has changed over time. To address these issues, GAO relied on its prior work on TANF block grant and state MOE spending issued between 2001 and 2010, including the May 2010 report examining how state MOE spending affects state TANF programs' work participation rates. To develop the spending-related findings in this body of work, GAO reviewed relevant federal laws, regulations, and guidance, state TANF data reported to the U.S. Department of Health and Human Services (HHS), and related financial data from selected states. GAO also interviewed relevant officials from HHS and selected states. The Temporary Assistance for Needy Families (TANF) block grant's maintenance of effort (MOE) provisions include specified state spending levels and general requirements on the use of funds. For example, these provisions generally require that each state spend at least 80 percent (75 percent if the state meets certain performance standards) of the amount it spent on welfare and related programs in fiscal year 1994, before TANF was created. If a state does not meet its MOE requirements in any fiscal year, the federal government will reduce dollar-for-dollar the state's federal TANF grant in the following year. In order to count state spending as MOE, funds must be spent on benefits and services to families with children that have incomes and resources below certain state-defined limits. Such benefits and services must generally further one of TANF's purposes, which broadly focus on providing financial assistance to needy families; promoting job preparation, work, and marriage; reducing out-of-wedlock births; and encouraging the formation of two-parent families. Within these broad goals, states have significant flexibility to design programs and spend their funds to meet families' needs.
Total MOE spending reported by states remained relatively stable around the required minimum spending level of $11 billion through fiscal year 2005, and then increased to about $4 billion higher than this minimum in fiscal years 2009 and 2010. Several reasons likely accounted for these increases, including states' reliance on MOE spending to help them meet TANF work participation rates. Work participation rates identify the proportion of families receiving monthly cash assistance that participate in allowable work activities for a specified number of hours each week. Federal law generally requires that at least 50 percent of families meet the work requirements; however, most states have engaged less than 50 percent of families in required activities in each year since TANF was created, according to HHS data. Various policy and funding options in federal law and regulations, including credit for state MOE expenditures that exceed required spending levels, have allowed most states to meet the rate requirements even with smaller percentages of families participating. States generally began relying on MOE spending to get credit toward meeting TANF work participation rates in fiscal year 2007 because of statutory changes to the rate requirements enacted in 2006. For example, for fiscal year 2009, the most recent data available, 16 of the 45 states that met the TANF work participation rate would not have done so without the credit they received for excess state MOE spending. The expanded role of MOE in state TANF programs highlights the importance of having reasonable assurance that MOE spending reflects the intended commitment to low-income families and efficient use of federal funds. GAO's previous work makes clear that MOE provisions are often difficult to administer and oversee, but can be important tools for helping ensure that federal spending achieves its intended effect. This work also points out that with appropriate attention to design, implementation, and monitoring issues, such provisions are one way to help strike a balance between the potentially conflicting objectives of increasing state and local flexibility while attaining certain national objectives. |
The Improper Payments Information Act of 2002 (IPIA)—as amended by the Improper Payments Elimination and Recovery Act of 2010 (IPERA) and the Improper Payments Elimination and Recovery Improvement Act of 2012 (IPERIA)—requires federal executive branch agencies to (1) review all programs and activities, (2) identify those that may be susceptible to significant improper payments, (3) estimate the annual amount of improper payments for those programs and activities, (4) implement actions to reduce improper payments and set reduction targets, and (5) report on the results of addressing the foregoing requirements. IPERA also established a requirement for agency inspectors general (IG) to report annually on agencies' compliance with criteria listed in IPERA. Under Office of Management and Budget (OMB) implementing guidance, these reports should be completed within 120 days of the publication of the federal agencies' annual PARs or AFRs. IPERIA also enacted into law a Do Not Pay (DNP) initiative, elements of which were already being developed under executive branch authority. DNP is a web-based, centralized data-matching service that allows agencies to review multiple databases to determine a recipient's award or payment eligibility prior to making payments. In addition to the laws and guidance noted above, the Disaster Relief Appropriations Act of 2013 requires that all funding received under the act be deemed susceptible to significant improper payments and consequently requires agencies to estimate improper payments, implement corrective actions, and report on their results for these funds. OMB continues to play a key role in the oversight of government-wide improper payments. OMB has set a goal of reaching a government-wide improper payment error rate of 3 percent or less by the end of fiscal year 2016. Further, OMB has established guidance for federal agencies on reporting, reducing, and recovering improper payments as required by IPIA and IPERA and on protecting privacy while reducing improper payments with the DNP initiative. IPERIA requires that OMB issue guidance to agencies for improving estimates of improper payments. OMB has reported that it plans to revise its guidance related to improper payments. Office of Management and Budget, Financial Reporting Requirements, Revised, OMB Circular No. A-136 (Oct. 21, 2013); Protecting Privacy while Reducing Improper Payments with the Do Not Pay Initiative, OMB Memorandum M-13-20 (Washington, D.C.: Aug. 16, 2013); Issuance of Revised Parts I and II to Appendix C of OMB Circular A-123, OMB Memorandum M-11-16 (Washington, D.C.: Apr. 14, 2011); Increasing Efforts to Recapture Improper Payments by Intensifying and Expanding Payment Recapture Audits, OMB Memorandum M-11-04 (Washington, D.C.: Nov. 16, 2010); and Issuance of Part III to OMB Circular A-123, Appendix C, OMB Memorandum M-10-13 (Washington, D.C.: Mar. 22, 2010). Federal agency improper payment estimates totaled $105.8 billion in fiscal year 2013, a decrease of $1.3 billion from the prior year's revised estimate of $107.1 billion. The decrease in the fiscal year 2013 estimate is attributed primarily to a decrease in program outlays for the Department of Labor's (DOL) Unemployment Insurance program and decreases in reported error rates for fiscal year 2013 for the Department of Health and Human Services' (HHS) Medicaid and Medicare Advantage (Part C) programs.
The $105.8 billion in estimated federal improper payments reported for fiscal year 2013 was attributable to 84 programs spread among 18 agencies. Five of these 84 programs account for most of the $105.8 billion of reported improper payments. Specifically, these five programs accounted for about $82.9 billion, or 78 percent, of the total estimated improper payments agencies reported for fiscal year 2013. Table 1 lists the five programs with the highest reported improper payment estimates for fiscal year 2013. OMB reported a government-wide improper payment error rate of 3.5 percent of program outlays in fiscal year 2013 when including the Department of Defense's (DOD) Defense Finance and Accounting Service (DFAS) Commercial Pay program, a decrease from 3.7 percent in fiscal year 2012. When excluding the DFAS Commercial Pay program, the reported government-wide error rate was 4.0 percent of program outlays in fiscal year 2013 compared to the revised 4.3 percent reported in fiscal year 2012 (a sketch later in this statement illustrates how such rates are estimated). In May 2013, we reported on major deficiencies in DOD's process for estimating fiscal year 2012 improper payments in the DFAS Commercial Pay program and recommended that DOD (1) develop key quality assurance procedures to ensure the completeness and accuracy of sampled populations and (2) revise its sampling procedures to meet OMB guidance and generally accepted statistical standards and produce a statistically valid error rate and dollar estimate with appropriate confidence intervals. According to its fiscal year 2013 AFR, DOD is reevaluating its sampling methodology for fiscal year 2014 for the DFAS Commercial Pay program based on our recommendations. Consequently, the fiscal year 2013 improper payment estimate for the DFAS Commercial Pay program may not be reliable. Additionally, in fiscal year 2013, federal agencies reported improper payment error rates for seven risk-susceptible programs—accounting for more than 50 percent of the government-wide improper payment estimate—that exceeded 10 percent. As shown in table 2, the error rates of these seven programs ranged from 10.1 percent to 25.3 percent. Under IPERA, an agency reporting an improper payment rate of 10 percent or greater for any risk-susceptible program or activity must submit a plan to Congress describing the actions that the agency will take to reduce improper payment rates below 10 percent. Since the implementation of IPIA in 2004, federal agencies have continued to identify new programs or activities as risk susceptible and to report estimated improper payment amounts. Federal agencies have also identified programs or activities that they have determined to no longer be risk susceptible and therefore did not report improper payment estimates for these programs. For example, with OMB approval, an agency can obtain relief from estimating improper payments if the agency has reported improper payments under the threshold for significant improper payments for at least 2 consecutive years. Consequently, the specific programs included in the government-wide improper payment estimate may change from year to year. For example, a net of 10 additional programs were added to the government-wide estimate by OMB in fiscal year 2013 when compared to fiscal year 2012. Most notably, the Department of Education's improper payment estimate for the Direct Loan program, approximately $1.1 billion, was included in the government-wide improper payment estimate for the first time in fiscal year 2013.
We view these agencies' efforts as a positive step toward increasing the transparency of the magnitude of improper payments. In addition, agencies have continued efforts to recover improper payments, for example through recovery audits. OMB reported that government-wide, agencies recovered over $22 billion in overpayments through recovery audits and other methods in fiscal year 2013. In our fiscal year 2013 audit of the Financial Report of the United States Government, we reported the issue of improper payments as a material weakness in internal control because the federal government is unable to determine the full extent to which improper payments occur and reasonably assure that appropriate actions are taken to reduce them. At the agency level, we also found that existing internal control weaknesses—such as financial system limitations and information system control weaknesses—heighten the risk of improper payments occurring. We found that not all agencies have developed improper payment estimates for all of the programs and activities they identified as susceptible to significant improper payments. Specifically, four federal agencies did not report fiscal year 2013 estimated improper payment amounts for four risk-susceptible programs. For example, HHS's fiscal year 2013 reporting cited statutory limitations for its state-administered Temporary Assistance for Needy Families (TANF) program, which prohibited it from requiring states to participate in developing an improper payment estimate for the TANF program. Despite these limitations, HHS reported that the agency has taken actions to assist states in reducing improper payments, such as providing guidance related to appropriate uses of TANF program funds. For fiscal year 2013, the TANF program reported outlays of about $16.5 billion. In addition, two programs that reported estimates in fiscal year 2013 were not included in the government-wide totals because their estimation methodologies were not approved by OMB. The two excluded programs were the Department of Transportation's High-Speed Intercity Passenger Rail program, with fiscal year 2013 outlays of $2.3 billion, and the Railroad Retirement Board's Railroad Unemployment Insurance program, with fiscal year 2013 outlays of $119.2 million. Compliance with statutory requirements is another challenge for some federal agencies. For fiscal year 2013, two agency auditors reported on compliance issues with IPIA and IPERA as part of their 2013 financial statement audits. Specifically, auditors of the Department of Agriculture (USDA) reported noncompliance with the requirements of IPERA regarding the design of program internal controls related to improper payments. HHS auditors reported that, as previously noted, HHS did not report an improper payment estimate for its TANF program for fiscal year 2013. In addition to noncompliance reported in financial statement audits, various IGs reported deficiencies related to compliance with the criteria listed in IPERA for fiscal year 2013 at their respective federal agencies, including risk-susceptible programs that did not have reported improper payment estimates, estimation methodologies that were not statistically valid, and risk assessments that may not accurately assess the risk of improper payments.
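As noted above, a sketch of the rate arithmetic follows: a minimal illustration, in Python, of how an improper payment error rate and dollar estimate with a confidence interval can be derived from a simple random sample of payments. All figures are hypothetical, and the design is deliberately simplified; actual programs use more elaborate, often stratified, sampling designs to meet OMB guidance and generally accepted statistical standards.

# Minimal sketch: estimating an improper payment rate and dollar amount
# from a simple random sample of payments. Figures are hypothetical.
import math

sample_payments = 1_000          # payments examined (hypothetical)
sample_improper = 42             # payments found improper (hypothetical)
program_outlays = 5_000_000_000  # annual program outlays in dollars (hypothetical)

p = sample_improper / sample_payments          # estimated error rate
se = math.sqrt(p * (1 - p) / sample_payments)  # standard error of the rate
z = 1.96                                       # 95 percent confidence multiplier

low, high = p - z * se, p + z * se
print(f"Estimated error rate: {p:.1%} (95% CI {low:.1%} to {high:.1%})")
print(f"Estimated improper payments: ${p * program_outlays:,.0f}")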
As reported in our March 2014 update to items identified in our annual reports on fragmentation, overlap, and duplication, to determine the full extent of improper payments government-wide and to more effectively recover and reduce them, continued agency attention is needed to (1) identify programs susceptible to improper payments, (2) develop reliable improper payment estimation methodologies, (3) report on improper payments as required, and (4) implement effective corrective actions based on root cause analysis. As previously reported, there are a number of strategies that can help agencies in reducing improper payments, including analyzing the root causes of improper payments and implementing effective preventive and detective controls. Designed and implemented effectively, these strategies could help advance the federal government's efforts to reduce improper payments. Agencies cited a number of causes for the estimated $105.8 billion in reported improper payment estimates for fiscal year 2013, including insufficient documentation, incorrect calculations, and duplicate payments. According to OMB guidance, agencies are required to classify the root causes of estimated improper payments into three general categories for reporting purposes: (1) documentation and administrative errors, (2) authentication and medical necessity errors, and (3) verification errors. While some agencies reported the causes of improper payments for their respective programs in their fiscal year 2013 financial reports using these categories, a more detailed analysis beyond these general categories regarding the root causes can help agencies to identify and implement more effective preventive, detective, and corrective actions in the various programs. For example, in its fiscal year 2013 AFR, HHS reported diagnosis coding errors as a root cause of improper payments in its Medicaid program and cited corrective actions related to provider communication and education. OMB has reported plans to develop more granular categories of improper payments in an upcoming revision to its guidance. Implementing strong preventive controls can serve as the frontline defense against improper payments. Proactively preventing improper payments increases public confidence in the administration of benefit programs and avoids the difficulties associated with the "pay and chase" aspects of recovering overpayments. Many agencies and programs are in the process of implementing preventive controls to avoid improper payments, including overpayments and underpayments. Preventive controls may involve a variety of activities such as up-front validation of eligibility, predictive analytic tests, and training programs. Further, addressing program design issues that are a factor in causing improper payments is an effective preventive strategy to be considered. The following are examples of preventive strategies, some of which are currently under way. Up-front eligibility validation through data sharing. Data sharing allows entities that make payments—to contractors, vendors, participants in benefit programs, and others—to compare information from different sources to help ensure that payments are appropriate. When effectively implemented, data sharing can be particularly useful in confirming initial or continuing eligibility of participants in benefit programs and in identifying improper payments that have already been made.
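To illustrate the basic mechanics of this kind of up-front matching, here is a minimal sketch in Python. The list names, identifiers, and the helper function eligible_for_payment are hypothetical; they are not the actual Do Not Pay databases or interfaces.

# Minimal sketch of prepayment data matching: before releasing a payment,
# check the payee against exclusion-style lists from multiple sources.
# All records and list names are hypothetical.

deceased_ids = {"A-102", "A-887"}  # e.g., drawn from death records (hypothetical)
debarred_ids = {"B-441"}           # e.g., drawn from a debarment list (hypothetical)

def eligible_for_payment(payee_id: str) -> bool:
    """Return False if the payee appears on any exclusion-style list."""
    return payee_id not in deceased_ids and payee_id not in debarred_ids

pending = ["A-102", "C-310", "B-441"]
for payee in pending:
    status = "release" if eligible_for_payment(payee) else "hold for review"
    print(payee, "->", status)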
Analyses and reporting on the extent to which agencies are participating in data-sharing activities, and on additional data-sharing efforts that agencies are currently pursuing or would like to pursue, can help to advance the federal government's efforts to reduce improper payments. One example of data sharing is agencies' use of the Do Not Pay (DNP) initiative. DNP is a web-based, centralized data-matching service that allows agencies to review multiple databases to determine a recipient's award or payment eligibility prior to making payments. IPERIA requires entities to review prepayment and preaward procedures and ensure a thorough review of available databases to determine program or award eligibility before the release of any federal funds. IPERIA lists five databases that should be included in the DNP initiative and allows for the inclusion of other databases designated by OMB in consultation with the appropriate agencies. In August 2013, the Director of OMB issued Memorandum No. M-13-20 (M-13-20), Protecting Privacy while Reducing Improper Payments with the Do Not Pay Initiative. As required by IPERIA, M-13-20 sets forth implementation guidance for the DNP initiative to help ensure that the federal government's efforts to reduce improper payments comply with privacy laws and policies. Predictive analytic technologies. The analytic technologies used by HHS's Centers for Medicare & Medicaid Services (CMS) are examples of preventive techniques that may be useful for other programs to consider. The Small Business Jobs Act of 2010 requires CMS to use predictive modeling and other analytic techniques—known as predictive analytic technologies—both to identify and to prevent improper payments under the Medicare Fee-for-Service program. Pub. L. No. 111-240, § 4241 (Sept. 27, 2010). These technologies are to be used to analyze and identify Medicare provider networks, billing patterns, and beneficiary utilization patterns and detect those that represent a high risk of fraudulent activity. Through such analysis, unusual or suspicious patterns or abnormalities can be identified and used to prioritize additional review of suspicious transactions before payment is made. Training programs. Other preventive techniques include training agency staff to detect improper payments and training providers or beneficiaries on program requirements. For example, in its fiscal year 2013 AFR, HHS reported that it has offered training through its Medicaid Integrity Institute to over 4,000 state employees and officials from fiscal years 2008 through 2013. Program design review and refinement. To the extent that provider enrollment and eligibility verification problems are identified as a significant root cause in a specific program, agencies may look to establish enhanced controls in this area. For example, CMS has taken steps to strengthen standards and procedures for Medicare provider enrollment to help reduce the risk of providers intent on defrauding or abusing the program. Further, exploring whether certain complex or inconsistent program requirements—such as eligibility criteria and requirements for provider enrollment—contribute to improper payments may lend insight to developing effective strategies for enhancing compliance and may identify opportunities for streamlining or changing eligibility or other program requirements. Although strong preventive controls remain the frontline defense against improper payments, effective detection techniques can help to quickly identify and recover those overpayments that do occur.
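Before turning to the specific techniques, here is a minimal sketch, in Python, of the kind of duplicate-payment query described in the data mining discussion that follows; the payment records are hypothetical.

# Minimal sketch: flag potential duplicate payments by grouping on
# invoice, recipient, and date. Records are hypothetical.
from collections import Counter

payments = [
    ("INV-001", "vendor-17", "2013-06-01", 4200.00),
    ("INV-001", "vendor-17", "2013-06-01", 4200.00),  # potential duplicate
    ("INV-002", "vendor-09", "2013-06-02", 1150.00),
]

counts = Counter((inv, vendor, date) for inv, vendor, date, _amount in payments)
for key, n in counts.items():
    if n > 1:
        print("Potential duplicate payment:", key, f"({n} occurrences)")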
Detection activities play a significant role not only in identifying improper payments but also in providing data on why these payments were made and, in turn, highlighting areas that need strengthened prevention controls. The following are examples of key detection techniques. Data mining. Data mining is a computer-based control activity that analyzes diverse data for relationships that have not previously been discovered. The central repository of data commonly used to perform data mining is called a data warehouse. Data warehouses store tables of historical and current information that are logically grouped. As a tool in managing improper payments, applying data mining to a data warehouse allows an organization to efficiently query the system to identify potential improper payments, such as multiple payments for an individual invoice to an individual recipient on a certain date, or to the same address. For example, CMS has established One Program Integrity, a web-based portal intended to provide CMS staff and contractors with a single source of access to Medicare and other data needed to help detect improper payments as well as tools for analyzing those data. Recovery auditing. While internal control should be maintained to help prevent improper payments, recovery auditing is used to identify and recover overpayments. IPERA requires agencies to conduct recovery audits, if cost effective, for each program or activity that expends $1 million or more annually. In its fiscal year 2013 AFR, HHS reported that the Medicare Fee-for-Service recovery audit program identified approximately $4.2 billion and recovered $3.7 billion in overpayments by the end of the fiscal year. Medicare recovery audit contractors are paid a contingency fee based on both the percentage of overpayments collected and underpayments identified. It is important to note that some agencies have reported statutory or regulatory barriers that affect their ability to pursue recovery auditing. For example, in its fiscal year 2013 AFR, USDA reported that Section 281 of the Department of Agriculture Reorganization Act of 1994 precluded the use of recovery auditing techniques because Section 281 provides that 90 days after the decision of a state, a county, or an area committee is final, no action may be taken to recover the amounts found to have been erroneously disbursed as a result of the decision unless the participant had reason to believe that the decision was erroneous. This statute is commonly referred to as the Finality Rule, and according to USDA, it affects the Commodity Credit Corporation's ability to recover improper payments. IPERA allows agencies to use up to 25 percent of funds recovered, net of recovery costs, under a payment recapture audit program, including providing a portion of funding to state and local governments. Another area for further exploration is the broader use of incentives to encourage and support states in efforts to implement effective preventive and detective controls in state-administered programs, which could help reduce improper payments. Incentives and penalties can be helpful to create management reform and to ensure adherence to performance standards. Chairman Mica, Ranking Member Connolly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. For more information regarding this testimony, please contact Beryl H. Davis, Director, Financial Management and Assurance, at (202) 512-2623 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
GAO staff who made key contributions to this testimony included Phillip McIntyre (Assistant Director), James Healy, and Ricky A. Perry, Jr. Medicare Fraud: Further Actions Needed to Address Fraud, Waste, and Abuse. GAO-14-712T. Washington, D.C.: June 25, 2014. Medicare: Further Action Could Improve Improper Payment Prevention and Recoupment Efforts. GAO-14-619T. Washington, D.C.: May 20, 2014. Medicaid Program Integrity: Increased Oversight Needed to Ensure Integrity of Growing Managed Care Expenditures. GAO-14-341. Washington, D.C.: May 19, 2014. School-Meals Programs: USDA Has Enhanced Controls, but Additional Verification Could Help Ensure Legitimate Program Access. GAO-14-262. Washington, D.C.: May 15, 2014. Financial Audit: U.S. Government's Fiscal Years 2013 and 2012 Consolidated Financial Statements. GAO-14-319R. Washington, D.C.: February 27, 2014. Social Security Death Data: Additional Action Needed to Address Data Errors and Federal Agency Access. GAO-14-46. Washington, D.C.: November 27, 2013. Disability Insurance: Work Activity Indicates Certain Social Security Disability Insurance Payments Were Potentially Improper. GAO-13-635. Washington, D.C.: August 15, 2013. Farm Programs: USDA Needs to Do More to Prevent Improper Payments to Deceased Individuals. GAO-13-503. Washington, D.C.: June 28, 2013. DOD Financial Management: Significant Improvements Needed in Efforts to Address Improper Payment Requirements. GAO-13-227. Washington, D.C.: May 13, 2013. Medicaid: Enhancements Needed for Improper Payments Reporting and Related Corrective Action Monitoring. GAO-13-229. Washington, D.C.: March 29, 2013. Financial Audit: U.S. Government's Fiscal Years 2012 and 2011 Consolidated Financial Statements. GAO-13-271R. Washington, D.C.: January 17, 2013. Foster Care Program: Improved Processes Needed to Estimate Improper Payments and Evaluate Related Corrective Actions. GAO-12-312. Washington, D.C.: March 7, 2012. Improper Payments: Moving Forward with Governmentwide Reduction Strategies. GAO-12-405T. Washington, D.C.: February 7, 2012. Government Operations: Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. | Over the past decade, GAO has issued numerous reports and testimonies highlighting improper payment issues across the federal government as well as at specific agencies.
The Improper Payments Information Act of 2002, as amended by the Improper Payments Elimination and Recovery Act of 2010 and the Improper Payments Elimination and Recovery Improvement Act of 2012, requires federal executive branch agencies to (1) review all programs and activities, (2) identify those that may be susceptible to significant improper payments, (3) estimate the annual amount of improper payments for those programs and activities, (4) implement actions to reduce improper payments and set reduction targets, and (5) report on the results of addressing the foregoing requirements. In general, reported improper payment estimates include payments that should not have been made, were made in the incorrect amount, or were not supported by sufficient documentation. This testimony addresses (1) federal agencies' reported estimates of improper payments, (2) remaining challenges in meeting current requirements to estimate and report improper payments, and (3) strategies for reducing improper payments. This testimony is primarily based on GAO's fiscal year 2013 audit of the Financial Report of the United States Government and prior GAO reports related to improper payments issued over the past 3 years. The testimony also includes improper payment information recently presented in federal agencies' fiscal year 2013 financial reports. Federal agencies reported an estimated $105.8 billion in improper payments in fiscal year 2013, a decrease from the prior year revised estimate of $107.1 billion. The fiscal year 2013 estimate was attributable to 84 programs spread among 18 agencies. The specific programs included in the government-wide estimate may change from year to year. For example, with Office of Management and Budget (OMB) approval, an agency can obtain relief from estimating improper payments if the agency has reported improper payments under a certain threshold for at least 2 consecutive years. A net of 10 additional programs were added to the government-wide estimate by OMB in fiscal year 2013 when compared to fiscal year 2012. For fiscal year 2013, GAO identified the federal government's inability to determine the full extent to which improper payments occur and reasonably assure that appropriate actions are taken to reduce them as a material weakness in internal control. In addition, existing internal control weaknesses at the agency level continued to increase the risk of improper payments occurring. In fiscal year 2013, four agencies did not report estimates for four risk-susceptible programs, including the Department of Health and Human Services' (HHS) Temporary Assistance for Needy Families (TANF) program. HHS cited a statutory barrier that prevents it from requiring states to estimate improper payments for TANF. Estimates reported for two programs were also not included in the government-wide total because their estimation methodologies were not approved by OMB. Further, inspectors general reported deficiencies related to compliance with criteria listed in the Improper Payments Elimination and Recovery Act of 2010 for fiscal year 2013, such as the use of estimation methodologies that were not statistically valid. As GAO has previously found, a number of strategies across government, some of which are currently under way, could help to reduce improper payments. For example, analysis of the root causes of improper payments can help agencies target effective corrective actions.
Some agencies reported root causes of improper payments using three error categories required by OMB (documentation and administrative, authentication and medical necessity, and verification). However, because the three categories are general, more detailed analysis to understand the root causes could help agencies identify and implement more effective corrective actions. Designing and implementing strong preventive controls can help defend against improper payments, increasing public confidence and avoiding the difficult “pay and chase” aspects of recovering improper payments. Preventive controls involve activities such as up-front validation of eligibility through data sharing, predictive analytic tests, and training programs. Implementing effective detection techniques to quickly identify and recover improper payments after they have been made is also important to a successful reduction strategy. Detection activities include data mining and recovery audits. Another area for further exploration is the broader use of incentives to encourage and support states in efforts to implement effective preventive and detective controls in state-administered programs.
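The preventive techniques above are described only at a policy level; as a minimal sketch of one of them, up-front validation of eligibility through data sharing, the following Python fragment screens pending payments against a list of ineligible identifiers before any money is disbursed. The file names, field names, and the choice of a death-record match are illustrative assumptions, not any agency's actual system.

```python
# Minimal sketch of up-front eligibility validation via data sharing.
# File names, field names, and the death-record match are illustrative
# assumptions, not an actual agency system.
import csv

def load_ineligible_ids(path):
    """Load identifiers (e.g., SSNs of deceased individuals) to screen against."""
    with open(path, newline="") as f:
        return {row["ssn"] for row in csv.DictReader(f)}

def screen_payments(payments_path, ineligible_ids):
    """Flag payment requests whose payee appears on the ineligible list
    before disbursement (prevention, rather than "pay and chase")."""
    flagged = []
    with open(payments_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["payee_ssn"] in ineligible_ids:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    ineligible = load_ineligible_ids("death_master_extract.csv")
    for payment in screen_payments("pending_payments.csv", ineligible):
        print("Hold for review:", payment["payment_id"], payment["payee_ssn"])
```

The design point is the one made above: catching an improper payment before it goes out avoids the difficult and costly work of recovering funds afterward.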
Paid preparers aid taxpayers in the completion of their tax returns for a fee. They range from licensed professionals, such as attorneys, certified public accountants, and enrolled agents, to those lacking formal training who complete tax returns part-time. Paid preparers authorized to represent taxpayers in matters before IRS are called practitioners and include attorneys, certified public accountants, and enrolled agents. Preparers work for a variety of enterprises including accounting firms, large tax preparation services, and law firms. Some are self-employed. IRS estimates that in 1999 there were 1.2 million paid preparers, although the actual number is unknown because some paid preparers do not sign the returns they prepare. The percentage of returns with a paid preparer’s signature has been steadily increasing over the past 20 years, as shown in figure 1. Paid preparers provide a variety of tax-related services besides tax preparation, including tax and estate planning and services that help clients receive funds quickly, such as electronic filing and RALs. Based on projections from our national survey, most taxpayers who used a paid preparer believe they benefited from doing so and would use a paid preparer in the future. Taxpayer surveys and studies of returns suggest that some taxpayers are poorly served by their paid preparers, but they do not allow a very precise estimate of the extent of the problem. Based on projections from our national survey, most taxpayers who used a paid preparer believe they benefit from doing so. We estimate that 77 percent of the taxpayers who used a paid preparer in 2002 were very or generally confident that they did not pay more in taxes than was legally required, as shown in figure 2, and that 87 percent would use one again in the future. These data suggest that paid tax preparers are providing needed services to taxpayers. The results of our taxpayer survey must be interpreted carefully—it is based on taxpayer perceptions, and taxpayers may not understand the tax laws well enough to evaluate the performance of their paid preparers. For example, most of the taxpayers we talked to in-depth said they used a paid preparer because they found IRS tax forms and documents too complicated or they were confronting an unusually complicated tax situation. If taxpayers lack the technical expertise needed to identify preparer errors, their survey responses may underestimate the extent of problems caused by paid preparers. With that caveat in mind, taxpayers in our nationwide survey said that their preparers did sufficient probing or took other steps to ensure an accurate return. We estimate that about 91 percent of taxpayers believe their preparers had enough information about their personal circumstances to accurately prepare their tax returns. We also estimate that 88 percent of taxpayers using paid preparers were asked for supporting documentation. Most of the preparers we talked to said they ask their clients to provide documentation to support claimed income, deductions, and credits, such as W-2 forms from employers or 1099 forms from financial institutions, to ensure the accuracy and completeness of the information reported on returns. In addition, paid preparers are required by law to take certain steps when filling out returns for their clients, including signing the return and giving their clients copies of the completed returns. 
We estimate that the vast majority of taxpayers who used a paid preparer in 2002 were provided a signed copy of their return, as shown in figure 3. Taxpayers choose to use paid preparers for a variety of reasons. As already noted, many of the taxpayers we interviewed in-depth told us they used a paid preparer because they did not understand the tax laws. According to the National Taxpayer Advocate, many taxpayers rely upon the expertise of a paid preparer to complete their returns since they are faced with a complex set of tax laws and a multitude of requirements for deductions, exemptions, and credits. One taxpayer, for example, said she began using a paid preparer 9 years ago, following the death of her father, because she needed help from a tax professional in dealing with complicated estate tax issues. Other taxpayers said they lacked the time or patience to complete their returns on their own. For example, a mother of four who operates her own business part-time and is finishing her degree at night said she simply does not have the time to do her own taxes. Other taxpayers stated that they paid someone to prepare their taxes in hopes of obtaining a larger and/or quicker refund. Some of the paid preparers we spoke to agreed that educating taxpayers about the tax laws is an important component of their practice. For example, one preparer who works primarily with immigrants said he and his staff spend considerable time explaining to their clients that paying taxes is part of the civic responsibilities they assumed in immigrating to this country. Other preparers told us they often have to educate taxpayers on more complex concepts, such as computing the basis (the investment made in a property) to determine how much of a real estate sale would be taxable. Another preparer told us he found that a taxpayer had overpaid his taxes by more than $6,200 over a 3-year period because the taxpayer had overlooked earned income and child tax credits. Still another preparer told us how he helped a taxpayer receive a refund in excess of $19,000 when he found out that the taxpayer, who had moved twice in less than 2 years, had missed out on deductions for moving expenses due to job relocations. When paid preparers make mistakes or exhibit other problematic behavior, the consequences for taxpayers may be significant. Examples provided by low-income tax clinic representatives and paid preparers include: A taxpayer who overpaid his taxes over a period of years by roughly $3,500 to $5,000. The taxpayer had received notices for several years from IRS stating that he may be eligible for the Earned Income Credit (EIC). Each year, he took the notices to his preparer, but the preparer took no action. One preparer told his elderly client to provide him with the checks to make her quarterly estimated payments. Although he claimed these payments on the client’s tax return, he never gave the checks to IRS—he kept them for himself. After receiving notices from IRS, the taxpayer visited the paid preparer who told her that IRS must have made a mistake. The preparer was sent to jail. Another preparer incorrectly advised a married couple with two children to each file separately as head of household so that they could claim two EICs. The couple ended up owing taxes, interest, and penalties. A paid preparer let a taxpayer file for the EIC for 2 years although the taxpayer lacked the appropriate documentation and was ineligible for the credit.
The taxpayer received a tax refund he was not entitled to receive, resulting in a tax liability of $3,300. As with all anecdotal evidence, these examples are not necessarily representative of the kinds of problems taxpayers encounter when dealing with problematic paid preparers. Also, taxpayers may have contributed to these problems by either providing incomplete information to their preparers or being actively complicit in avoiding taxes that are legitimately owed. In addition to over- or underpaying their taxes, IRS officials and others told us that sometimes taxpayers are poorly served by paying for services that accelerate the receipt of refunds, including RALs. The primary benefit of RALs is that they allow clients to receive funds quickly, sometimes in just a few minutes, rather than the 10 days it typically takes taxpayers who file electronically to receive their tax refunds. The ability to quickly receive funds makes RALs appealing to low-income taxpayers who often want or need their refund quickly. In addition, as the National Taxpayer Advocate pointed out in the fiscal year 2002 Annual Report to Congress, many low-income clients who lack bank accounts find that RALs are the only way to electronically file a return and receive their refunds quickly. For these and other reasons, RALs are becoming more popular. Based on IRS data, the National Consumer Law Center estimates that 12.1 million RALs were taken out in 2001, up from 10.8 million in 2000. Although this suggests that many taxpayers find value in using RALs, IRS officials and others have raised concerns about whether taxpayers are fully aware of the costs involved and their tax filing alternatives. For example, a recent New York City investigation found that some paid preparers fail to disclose the costs of RALs and the availability of alternatives to the loans. The investigation found that only 27 of the 43 preparers visited mentioned the annual percentage rate and other fees associated with RALs. New York City’s investigation also found that electronic filing was not strongly publicized as an alternative way for clients to receive their tax refunds quickly. According to a low-income tax clinic director, many paid preparers fail to fully explain to taxpayers that accepting a RAL carries a certain risk—if refunds are delayed or denied, taxpayers may be liable for additional charges and fees. Without clear information about the costs and risks, taxpayers cannot always weigh the costs against the benefits that they might receive. Also, based on information we gathered, fees for RALs and other services that accelerate the receipt of refunds vary widely. For example, while some preparers charge nothing for electronic filing services, one preparer we spoke to (while we were posing as a potential client) said he would charge us between $210 and $250 to file electronically. Another preparer said he would charge $174 for a RAL on a $700 refund, which equates to an annual interest rate of over 900 percent, assuming a loan period of 10 days, while another preparer quoted us a RAL fee of $130 on a $1,200 refund, which equates to an annual interest rate of about 400 percent, assuming the same loan period. These examples are not necessarily representative of all preparer fees; the exact amounts of preparer fees for accelerated refunds depend on various individual circumstances, such as the financial institution the preparer uses to finance the loan and the amount of refund due.
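The annualized rates quoted above follow from simple-interest arithmetic over the report's stated 10-day loan period. A minimal sketch (the function name is ours) reproduces the quoted figures:

```python
# Back-of-the-envelope check of the RAL figures quoted above, using the
# report's stated assumption of a 10-day loan period and simple annualization.
def ral_annual_rate(fee, refund, loan_days=10):
    """Annualized simple interest rate implied by a flat RAL fee."""
    return (fee / refund) * (365 / loan_days)

print(f"{ral_annual_rate(174, 700):.0%}")   # ~907%, i.e., "over 900 percent"
print(f"{ral_annual_rate(130, 1200):.0%}")  # ~395%, i.e., "about 400 percent"
```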
The RAL fees, when combined with tax preparation fees, may considerably reduce a taxpayer’s refund. For example, the preparer mentioned above who quoted a RAL fee of $130 on a $1,200 refund also quoted a tax preparation fee of $190 in addition to the RAL fee. As shown in figure 4 below, the fees would have reduced the refund by more than 25 percent. In another example, a low-income tax clinic director informed us of a disabled taxpayer who was due a refund of $1,230 on a simple return. After paying various fees, such as return preparation and a RAL, she received a check from her preparer for $414—about 34 percent of her expected refund. A variety of evidence, including the above examples and our nationwide survey, shows that some taxpayers are poorly served by their paid preparers. While this evidence does not allow a precise estimate due to methodological limitations, none of it suggests that the percentage of poorly served taxpayers is large. However, even a small percentage of the more than 72 million taxpayers who used paid preparers in 2001 can translate into millions of affected taxpayers. Taxpayer surveys show that some taxpayers had problems with the quality of the service provided by their paid preparer. Based on the results of our nationwide survey, we estimate that 5 percent of paid preparer users had no confidence that they had not overpaid their taxes, and another 7 percent had little confidence, as shown in figure 2. We also estimate that 3 percent of paid preparer users did not believe that their preparer had enough information to accurately complete their return, as shown in figure 2. Our survey results are similar to a 1997 Consumer Reports nonrandom survey of 26,000 of its readers, in which 6 percent said they discovered an error made by their preparers. As discussed earlier, taxpayer survey results need to be interpreted carefully because they reflect taxpayer perceptions and may misstate the extent of the problem. Studies of filed returns also suggest that some paid preparers do not exercise due diligence in filing returns. For example, we have already mentioned that last year we estimated that as many as 2 million taxpayers overpaid their 1998 taxes by $945 million because they claimed the standard deduction when it would have been more beneficial to itemize, and half of these taxpayers used a paid preparer. Similarly, a recent report by the Treasury Inspector General for Tax Administration estimated that there were approximately 230,000 returns filed by paid preparers where taxpayers appeared eligible for but did not claim the Additional Child Tax Credit. In addition, a 2002 IRS study of the EIC for tax year 1999 returns estimated that some taxpayers claimed about $11 billion more than they were entitled to while others claimed $710 million less than they were entitled to. The IRS reported that paid preparers filed more than 65 percent of all EIC returns. None of these studies tried to determine how many errors were the fault of the preparer and how many were the fault of the taxpayer. However, based on our earlier examples of paid preparer performance, it seems likely that preparers bear responsibility for at least some of the over- or underpayments. Taxpayers could be at fault if they provide the preparer with incorrect information. Several IRS offices have responsibility for problem paid preparers, but balancing resources devoted to taxpayer protection with resources devoted to other priorities is a challenge.
Proposals have been made for expanding IRS’s oversight of the paid preparer industry. Consideration of such proposals is complicated by a lack of data on the extent of the problem and the effectiveness of IRS’s actions and by the involvement of other agencies and state and local governments, as well as professional organizations. The newly formed OPR enforces professional standards for those paid preparers authorized to represent taxpayers in matters before IRS. These authorized preparers, called practitioners, include attorneys, certified public accountants, and enrolled agents. Treasury Department Circular No. 230 imposes standards of professionalism and conduct for practitioners and authorizes IRS to institute proceedings against practitioners who violate the regulations. Depending on the seriousness of the violation, OPR can sanction practitioners through private reprimand, censure (a public reprimand), suspension, or disbarment. For example: As a result of an OPR investigation, OPR accepted a practitioner’s offer of consent to suspension for almost 3 years for violation of the requirement of due diligence as to accuracy in preparing corporate tax returns for 3 years. The practitioner underreported income by over $50,000 in 1 year and claimed unsubstantiated expenses of over $25,000 in the other 2 years. The practitioner also overstated a real estate tax deduction by over $30,000 in 1 year. In another case, a practitioner was disbarred from practice for giving false or misleading information to IRS. The practitioner signed a power of attorney as being licensed when the license had not been renewed, thereby making the practitioner ineligible to practice before IRS. As part of IRS’s modernization effort, IRS hired an outside management consulting firm to make high-level recommendations concerning the staffing, organization, technology, and operating procedures of the Office of Director of Practice (ODP), the office OPR replaced. Table 1 summarizes the consultant’s findings. According to the OPR Director, IRS took the high-level findings of the consultant’s report and drew on its management and staff’s expertise to develop a plan to make needed improvements. For example, IRS reorganized the office, renaming it OPR, and has started to implement several other changes. As an initial step, OPR contacted various tax professional organizations in January 2003 and laid out the following priorities for the balance of 2003: enhance the visibility of OPR internally as well as externally, increase the capacity and capability of OPR, process the workload in a shorter time frame, ensure that Circular 230 sanctions are applied fairly and consistently, identify and implement organizational performance measures, and establish and maintain an effective working alliance with the tax professional organization community. While IRS has already made some improvements, according to the OPR Director, the following efforts are ongoing: hiring and training a significantly expanded staff of attorneys and improving and documenting operational practices and procedures; communicating the OPR mission and progress internally and externally through speaking engagements, newsletters, and Web sites; working with IRS Chief Counsel and Treasury Department Tax Policy personnel to make beneficial amendments to Circular 230; and maintaining an open door policy with respect to the practitioner community in order to learn of their concerns and their suggestions.
Also, the OPR Director said it is going to take some time to make all the needed changes. We did not try to assess OPR’s ongoing improvements because some are not yet complete and others are too new to have produced the desired improvements. IRS’s SB/SE division has responsibility for assessing and collecting monetary penalties against any paid preparers who do not comply with tax laws when filing returns. SB/SE assessed about $2.4 million in penalties in calendar years 2001 and 2002 and collected about $291,000, or 12 percent, of that amount; the collections included all or some portion of the penalties assessed against 44 percent of the preparers penalized. According to IRS officials, collecting paid preparer penalties has not been a priority in the division’s overall collection efforts due to other higher priority work, such as abusive tax schemes. According to an SB/SE representative, there are currently no plans for SB/SE to make collecting paid preparer penalties a priority. The representative stated that their priorities include abusive tax schemes, and they cannot afford to make these low dollar paid preparer cases a priority given their responsibility for addressing billions of dollars in uncollected taxes. Also, IRS does not currently have a system in place to identify paid preparer penalties separately from other assessments once a case is assigned for collection, and to do so would require a labor-intensive computer programming effort. However, the monetary amounts of these penalties, which are small relative to IRS’s other compliance efforts, may not reflect how important the penalties are as a deterrent to problematic paid preparers. According to the Internal Revenue Manual, penalty assertion is the key enforcement vehicle for noncompliant preparers. As mentioned earlier, IRS has no data on the extent of the problems with paid preparers or how effective its enforcement efforts are in deterring problematic preparer behavior. In assessing but not collecting these penalties, IRS may be sending preparers a mixed message about whether poor performance by preparers will be tolerated. For example, several paid preparers and low-income tax clinic officials we interviewed said that IRS was not providing enough paid preparer oversight and that it should be increased. IRS has made changes to its fiscal year 2003 compliance program guidance to place a higher priority on assessing penalties against problem preparers. However, collecting paid preparer penalties will continue to be part of the regular collection process because they are not to be given any special treatment as a priority. IRS has broad authority to monitor and sanction Electronic Return Originators (EROs), whom IRS authorizes to file tax returns electronically. IRS’s monitoring is to ensure ERO compliance with provisions of any revenue procedures, publications, or notices that govern IRS’s e-file program, including RALs. Through random and mandatory visits, the ERO monitoring program offices monitor the activities of EROs to ensure compliance with IRS’s e-file program and to investigate allegations and complaints against EROs. In 2001, IRS established a goal of visiting 1 percent of all EROs each year. IRS met its goal in 2002, visiting more than 1,400 EROs and sanctioning 215 of them for violating IRS guidelines. Figure 5 shows the number of EROs visited and sanctions issued, by degree of seriousness, for fiscal year 2002 and for two-thirds of fiscal year 2003, based on the most recent data available through May 2003.
However, while IRS does impose some requirements on paid preparers offering RALs, its role is limited and the requirements serve in part to ensure that RALs are presented to taxpayers as loans and not as an accelerated tax refund. For example, IRS’s Publication 1345 prohibits EROs from basing their fees on a percentage of the refund amount or computing their fees using any figure from tax returns. IRS’s CI division investigates paid preparers suspected of criminal or fraudulent behavior and other related financial crimes. However, according to CI officials, resource limitations mean that CI uses indicators developed from prior cases to identify and work only the most egregious cases, leaving some cases unworked. Nevertheless, according to IRS, CI is increasing its investigations of criminal and fraudulent paid preparers. For example, according to IRS, it more than doubled the number of paid preparer criminal investigations in 2002 compared to 2001 and experienced a significant increase in the number of investigations referred for prosecution in the first quarter of fiscal year 2003. CI officials told us that to prioritize its work, CI identifies and investigates the most egregious criminal behavior using a fraud ranking system that determines which preparers should be investigated. Officials said the ranking is based on information developed from individual returns provided by fraud detection centers. Fraud detection centers are CI offices collocated at IRS campuses that attempt to detect fraud by scanning paper and electronic returns. The system ranks preparers by the number of suspected fraudulent returns filed, applying criteria that have proven successful in past prosecutions of fraud cases. However, as mentioned earlier, IRS has no data on the extent of the problem with paid preparers, including those who are fraudulent, or the effectiveness of CI’s deterrent actions against them. Two programs provide much of the organizational framework for CI’s actions against criminal paid preparer behavior. The division’s Return Preparer Program identifies and investigates criminal paid preparers while the Questionable Refund Program identifies fraudulent tax returns. Once identified, the program stops payment on fraudulent tax refunds and refers fraudulent tax schemes to CI field offices for further investigation. Figure 6 shows that in 2001 and 2002, CI evaluated 574 referrals of possible criminal paid preparer behavior and initiated 395 criminal investigations against paid preparers. According to CI, criminal paid preparer behavior varies. Some criminal preparers create false forms such as W-2s and file returns on behalf of deceased taxpayers. Others buy social security numbers and the names of dependents from taxpayers with multiple children in order to allow others to claim dependent-related tax credits, such as the EIC. According to CI officials, most criminal preparers are investigated for aiding and abetting a false tax return. For example, during 2001 and 2002, more than 91 percent of CI’s initiated investigations against paid preparers involved preparers who helped prepare false or fraudulent tax returns. One investigation resulted in a preparer pleading guilty to assisting in the preparation of false tax returns; the preparer was sentenced to 38 months in prison and assessed a $10,000 fine.
The preparer owned and operated a tax preparation business and, among her criminal activities, regularly advised clients to claim fraudulent tax credits for dependents and child care, even though the clients had no dependents. The preparer’s actions from 1997 to mid-2000 resulted in a loss to the Treasury of between $1.5 and $2.5 million. From 2001 to 2002, CI investigations resulted in the indictment and sentencing of 134 paid preparers, of whom 119 were incarcerated. Anecdotally, several preparers we spoke to stated that publishing examples of convictions against preparers may help deter future criminal preparer behavior. However, IRS does not have quantitative information about the size of the problem with paid preparers or the extent to which convictions against paid preparers are a deterrent to other preparers. Information on deterrence would be difficult, perhaps impossible, to develop. While IRS provides some oversight of paid preparers, others believe that it should provide additional oversight. The Low Income Taxpayer Protection Act of 2003, S. 685, proposed in the 108th Congress, would require the licensing and registration of paid preparers and RAL providers. The proposal would also require all preparers to abide by the rules of conduct that currently govern practitioners and contains provisions aimed at discouraging the use of RALs, including regulating the fees charged for RALs. The National Taxpayer Advocate recommended a similar proposal requiring the registration of paid preparers in her 2002 Annual Report to Congress. The proposal would require paid preparers to be registered with IRS, pass a certification examination, and maintain annual educational requirements. In a previous report to the Congress, the National Taxpayer Advocate stated that while paid preparers are subject to monetary penalties if they prepare returns negligently, many preparers are not subject to standards of conduct, licensed by any state regulatory agency, or required to participate in continuing education programs. Thus, according to the Advocate, the only course of action that can be taken to enjoin a paid preparer is the initiation of a civil action by the Secretary of the Treasury against the preparer in a district court of the United States. According to the Advocate, such action is costly, time consuming, and leaves questionable income tax preparers free to remain in business and potentially harm taxpayers if they continue to prepare income tax returns during the legal process of the civil action. Some of the paid preparers and officials from low-income tax clinics and professional organizations we interviewed said that IRS could provide additional oversight of paid preparers, although several said that it would be difficult for IRS to undertake such efforts. Several of the preparers we interviewed said that IRS’s current oversight of paid preparers needed improvement, and most of the paid preparers, low-income tax clinics, and professional organizations we interviewed told us they supported the licensing or registration of paid preparers as a way to provide additional oversight of paid preparers. For example, one preparer said he felt paid preparer oversight was not in IRS’s order of priorities and that paid preparers should be licensed so that IRS could enforce education and conduct standards. Others told us that IRS should impose a registration or licensing requirement on paid preparers although some expressed reservations.
For example, a representative from the National Society of Accountants said that it would be an arduous task for IRS to create a system to license hundreds of thousands of people and then set up the mechanisms to discipline them. Officials from a low-income tax clinic also expressed concerns, saying that such a proposal may increase the cost of tax preparation by reducing the supply of available preparers. Any consideration of whether to change IRS’s responsibilities for overseeing paid preparers would likely take into account several factors. One, obviously, involves the benefits and costs to taxpayers who use paid preparers. However, as highlighted in this report, data are lacking about the extent of problematic paid preparer behavior and the effectiveness of existing IRS actions, which makes it difficult to assess the tradeoff between benefits and costs. Another factor is that regulating the paid preparer industry, a private sector industry, is a form of consumer protection. IRS’s major functions, which include processing tax returns, responding to taxpayer questions, and enforcing compliance with the tax laws, give it little experience in providing consumer protection. Still another factor is the implications for IRS resources. Recently we have reported on declines in IRS’s enforcement programs, including declines in resources allocated to those programs. We have also reported that needs in other IRS programs have often been met at the expense of resources devoted to enforcement. Any consideration of whether to increase IRS paid preparer oversight or consumer protection must also recognize that IRS is not alone in providing such oversight. Other federal agencies, such as the Federal Trade Commission (FTC), state and local governments, and professional organizations engage in efforts to prevent, detect, and take action against problem paid preparers. For example, FTC has taken action against paid preparers pursuant to its authority to enforce the provisions of the Federal Trade Commission Act. FTC’s primary mission is to protect consumers by enforcing federal consumer protection laws that prevent fraud, deception, and unfair business practices. This protection extends to taxpayers using paid preparers for tax preparation and other related services. In addition, at least six states and one city have laws that provide paid preparer oversight or consumer protection regarding RALs. These laws range from requiring the registration or licensing of paid preparers to requiring disclosure statements for RALs. For example, the City of New York requires a separate disclosure statement for RAL agreements that must be provided in English or Spanish. New York City’s law also requires paid preparers to provide an oral explanation of the law’s required written disclosure in language understood by the taxpayer. In addition to government entities, professional organizations, such as the American Institute of Certified Public Accountants and the American Bar Association, also impose general standards of conduct on the actions of their members, including those representing taxpayers before the IRS and preparing tax returns. We did not attempt to identify all federal, state, and local governments or professional organizations that have a paid preparer or RAL oversight role in addition to IRS. Table 3 shows examples of some tax preparation and RAL oversight in addition to that provided by IRS. Three of the seven oversight efforts shown in the table were passed or enacted within the past year.
To date, none of the state or local governments responsible for the efforts has evaluated the effectiveness of these efforts. The absence of such data further complicates any consideration about changing IRS’s role. Without data, IRS management cannot determine the extent to which these other government entities will provide paid preparer oversight or consumer protection. Paid tax preparers are critical to the functioning of our tax system. Many taxpayers do not understand their filing requirements and would have great difficulty filling out their tax forms without the assistance of paid preparers. While most taxpayers may receive quality services from their preparers, problematic behavior by some preparers raises the question of whether IRS should be more active in overseeing paid preparers. Since paid tax preparation is a private sector industry, this can be viewed as a question about the extent to which the nation’s tax administrator ought to be involved in consumer protection. On the one hand, the complexity of the tax code is at least partly responsible for the existence of the paid tax preparation industry. As a consequence, IRS might be viewed as properly having some responsibility for oversight of the industry. On the other hand, IRS’s mission is tax administration, and the agency may not have the expertise or the regulatory culture for successfully carrying out consumer protection responsibilities. In addition, unless given a budget increase, IRS would have to divert resources from other priorities in order to carry out expanded industry oversight responsibilities. In recent years IRS has often met such resource needs by decreasing staffing of its enforcement activities. At least two proposals exist for legislative action, one from the Taxpayer Advocate and the other the Low Income Taxpayer Protection Act of 2003, S. 685, proposed in the 108th Congress. Unfortunately, there is not much reliable information about the tradeoffs associated with changing IRS’s role. Examples of problematic preparer behavior are easy to find, but reliable estimates of the number of taxpayers affected by the problems do not exist and would be difficult, perhaps impossible, to develop. Such data would be needed to properly evaluate proposals for changing IRS’s role. While the federal government and some state and local governments have taken actions intended to address problematic preparer behavior, the effectiveness of the actions is not known. Because making decisions about IRS’s role is a policy matter and because data are not available to determine the efficacy of IRS’s current oversight efforts, whether to expand IRS’s role in ensuring taxpayers receive quality service from paid preparers is a judgment that Congress and IRS management must make. We are not making recommendations in this report. The Commissioner of Internal Revenue provided written comments on a draft of this report in an October 28, 2003, letter, which is reprinted in appendix III. The Commissioner agreed with the information presented in our report and noted that IRS will continue its efforts to provide oversight of paid tax preparers and is developing new initiatives to ensure the ethical responsibility of preparers. These efforts include continuing to develop the Office of Professional Responsibility, considering changes to Circular 230, coordinating with professional tax associations, increasing compliance efforts, forming a multifunctional work group to improve communications within IRS, and developing a national paid preparer strategy.
The Commissioner said that, based on the information in our report, IRS will undertake an analysis of whether IRS can take additional steps to increase the impact of its efforts to assess penalties against paid tax preparers. In response to our observation that penalties assessed against paid preparers are not a collection priority, the Commissioner noted, and we agree, that preparer penalty cases are included in IRS’s collection priority system. Our point is that they are not a collection priority because of their relatively low dollar value, and we noted that IRS collected only 12 percent of the penalties assessed in calendar years 2001 and 2002. The Commissioner commented that it might be a better reflection of IRS’s collection efforts to point out that during this period, the agency collected all or some portion of penalties from 44 percent of the SB/SE preparers who were assessed a penalty, and we changed our draft to show the percentage collected. We were aware that some paid preparers voluntarily pay the penalties assessed against them but, as indicated by the Commissioner’s response, more than half of paid preparers paid nothing. Since uncollected preparer penalties represent about 88 percent of the value of penalty assessments, we said that IRS may be sending the paid preparer community a mixed message about whether poor performance by preparers will be tolerated. At the same time, we recognize that collecting paid preparer penalties has not been a priority due to other higher priority work, such as abusive tax schemes. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Jonda Van Pelt, Assistant Director. Other major contributors are acknowledged in appendix IV. If you have any questions about this report, contact me at (202) 512-9110. The objectives of this report were to (1) obtain the views of taxpayers who used paid preparers and provide examples of paid preparer performance, including what is known about the extent of problems caused by paid preparers and (2) describe IRS’s efforts to prevent, detect, and take action against problem paid preparers; challenges facing IRS offices that interact with paid preparers, especially the Office of Professional Responsibility; and efforts to address those challenges. To obtain the views of taxpayers who used paid preparers about the quality of service the preparers provided, we conducted (1) a representative nationwide survey and (2) in-depth interviews with a small judgmental sample of the individuals who participated in our nationwide survey. We also searched for studies that discussed the extent of problems caused by paid preparers. To determine taxpayer views of their paid preparers, we contracted with the Marist College Institute for Public Opinion of Poughkeepsie, New York, to include our questions at the beginning of their multisubject telephone survey of the United States conducted between February 5 and 24, 2003. Interviews were completed with 917 of the estimated 1,996 eligible sampled individuals, for a response rate of 46 percent.
The results presented in our report are based on the 429 interviews with respondents who reported they paid someone to prepare their federal personal tax returns for their 2001 income. We sought to obtain information about the views of the adult population of the United States. The study procedures yield a sample of members of the noninstitutional population of the United States (50 states and the District of Columbia) who are 18 years or older, speak English, and reside in a household with a land-based telephone (cellular telephone numbers were not included in the sample). Random Digit Dial Equal Probability Selection Methods were followed to identify households. Survey Sampling International (SSI) of Fairfield, Connecticut, provided the probability sample of telephone numbers. These were drawn from active telephone blocks of telephone exchanges with listed numbers and excluded numbers that SSI identified as being business numbers or not in service (e.g., disconnected). At least eight calls were made to each telephone number to attempt to identify a respondent. A member within each household was initially randomly chosen by selecting the individual whose birthday most recently preceded the date of the telephone contact. Once the selection of a household member was made, two attempts were made to complete the interview with that individual. If, after two contacts, including scheduled appointments, the selected member could not be reached or refused to complete the survey, a second adult member of the household was asked to participate. If a household refused twice, it was not contacted until the final week of data collection, at which time a monetary incentive was offered for completion of the interview. Survey respondents are weighted in our analyses so that age, gender, and regional estimates from our survey will match U.S. data on these demographic characteristics. The U.S. data come from county-level estimates from Census 2000 that were projected forward by SCAN/U.S., Inc. to July 1, 2002. As with all sample surveys, this survey is subject to both sampling and nonsampling errors. The effects of sampling errors, due to the selection of a sample from a larger population, can be expressed as confidence intervals based on statistical theory. The effects of nonsampling errors, such as nonresponse and errors in measurement, may be of greater or lesser significance but cannot be quantified on the basis of the available data. Sampling errors arise because we used a sample of individuals to draw conclusions about the much larger population. The study’s sample of telephone numbers is based on a probability selection procedure. As a result, the sample was only one of a large number of samples that might have been drawn from the telephone exchanges throughout the country. If a different sample had been taken, the results might have been different. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. For all the percentages presented in this report, we are 95 percent confident that, when only sampling errors are considered, the results we obtained are within +/- 5 percentage points or less of what we would have obtained if we had surveyed the entire study population. In addition to the reported sampling errors, the practical difficulties of conducting any survey introduce other types of errors, commonly referred to as nonsampling errors.
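As a rough check on the stated sampling error, the sketch below computes the worst-case (p = 0.5) 95 percent margin of error for the 429 completed interviews with paid-preparer users, using the standard normal-approximation formula; it ignores any design effect from the weighting described above, and the function name is ours.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95 percent confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst-case margin for the 429 interviews: just under 5 percentage points,
# consistent with the report's "+/- 5 percentage points or less."
print(f"{margin_of_error(429):.1%}")  # ~4.7%
```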
For example, questions may be misinterpreted, some types of people may be more likely to be excluded from the study, errors could be made in recording the questionnaire responses into the computer-assisted telephone interview software, and the respondents’ answers may differ from those who did not respond. To test the understanding of the questions, we pretested the survey by conducting 57 interviews. To ensure that responses were correctly recorded in the computer-assisted telephone interview software, trained interviewers were used who had been specifically briefed on the study, and interviewer supervisors regularly monitored, evaluated, and provided feedback to the interviewing staff, who worked from a centralized telephone facility. For this survey, the 46 percent response rate is a potential source of nonsampling error; we do not know if the respondents’ answers are different from the 54 percent who did not respond. Both GAO and Marist took steps to maximize the response rate—the questionnaire was carefully designed, at least eight telephone calls were made at different times of day on different days of the week to try to contact each telephone number, the interview period extended over 20 days, respondents were informed that their responses were anonymous, suspended interviews and refusals were recontacted at least once, and respondents were provided with a toll-free number to either call back at a more convenient time or to obtain further information about the survey. Because we did not have information on those taxpayers who chose not to participate in our survey, we could not estimate the impact of the nonresponse on our results. Our findings will be biased to the extent that the people at the 54 percent of the telephone numbers that did not yield an interview have different experiences with paid tax preparers than did the 46 percent of our sample who responded. Knowing that the survey would concern tax issues could not have created large biases because only about 1.6 percent of the eligible households in the sample (31 individuals) refused after the interview began (i.e., after they could have known the interview would address tax issues). The remaining nonresponding units (about 52 percent of the sample) did not know that the interview would address tax issues. The 52 percent comprises about 18 percent (356) who refused before the interview could be started, about 14 percent (274) where an eligible respondent was identified in the household but no interview was completed, and about 21 percent (estimated 418) where no one was contacted at the telephone number but the household was assumed to be eligible. This estimate of 418 uncontacted, but eligible, households is derived by assuming that the percentage of eligible households among all our 704 uncontacted households would be the same (59.14 percent) as the percentage of eligible households among households for which the eligibility status was determined. To obtain examples of paid preparer performance, we conducted in-depth interviews with 18 taxpayers from our nationwide survey of taxpayers. In addition, we discussed paid preparer performance and received examples of paid preparer performance from various IRS offices, some paid preparers, some low-income tax clinics, and IRS’s Taxpayer Advocate Service. To obtain information on the fees charged by paid preparers for electronic filing and refund anticipation loans, we contacted seven preparers posing as potential clients and also gathered loan cost schedules from the Web sites of two lenders.
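The nonresponse accounting above can be reconciled directly from the reported counts, as the sketch below does. Note that applying the rounded 59.14 percent eligibility rate to the 704 uncontacted numbers gives about 416, so the published estimate of 418 evidently reflects an unrounded rate.

```python
# Reconciling the survey's nonresponse accounting from the reported counts.
eligible = 1996   # estimated eligible sampled individuals
completed = 917   # completed interviews (46 percent response rate)
nonresponse = {
    "refused after interview began": 31,
    "refused before interview began": 356,
    "eligible respondent identified, not interviewed": 274,
    "uncontacted, assumed eligible": 418,
}

assert completed + sum(nonresponse.values()) == eligible
print(f"response rate: {completed / eligible:.0%}")  # 46%
for reason, count in nonresponse.items():
    print(f"{reason}: {count / eligible:.1%}")  # 1.6%, 17.8%, 13.7%, 20.9%
```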
We also reviewed closed case files in IRS offices, including the Office of Professional Responsibility (OPR), Small Business/Self-Employed (SB/SE) division, and Criminal Investigation (CI) division. A copy of the survey is in appendix II. As part of our nationwide survey of taxpayers, we asked the individuals we contacted if they would be willing to participate in an in-depth interview regarding their experiences with paid tax preparers. For those taxpayers who agreed, we used a structured questionnaire that covered, for example, how taxpayers found their paid preparers and investigated the credentials of the preparer, the type of preparer used, why the taxpayer used a paid preparer, and how extensively the preparer probed the taxpayers’ personal tax circumstances and asked for documentation. We interviewed 18 taxpayers in-depth. To obtain studies discussing the extent of problems caused by paid preparers, we relied upon studies mentioned in interviews with IRS officials and identified through periodical searches. We also used a 1997 Consumer Reports survey of their readership concerning paid preparers, a report by the Treasury Inspector General for Tax Administration regarding potentially unclaimed child tax credits, a Department of Treasury study regarding earned income tax credits, and a previous GAO report that estimated the number of taxpayers eligible to itemize deductions who used the standard deduction instead. To describe IRS’s efforts to prevent, detect, and take action against problem paid preparers, we interviewed officials from IRS offices including OPR, SB/SE, CI, and the Taxpayer Advocate Service (TAS). IRS officials said these offices interact the most with preparers. We also reviewed various documents used by these offices to provide paid preparer oversight. To describe challenges facing IRS offices that interact with paid preparers, especially OPR, and efforts to address those challenges, we interviewed officials from OPR, including its new Director, as well as officials from other IRS offices discussed above, such as SB/SE and CI. We also used documents from OPR, including a consulting firm report on the Office of Director of Practice, and documents from other IRS offices. To examine IRS’s efforts to assess and collect penalties against paid preparers, we interviewed officials from IRS’s SB/SE division, reviewed collection data, and examined division documents. To determine the percentage of assessed fines collected and uncollected by SB/SE, we relied upon an SB/SE analysis of collections data extracted from IRS’s Enforcement Revenue Information System. To assess the reliability of these data, we reviewed existing documentation related to the data sources and interviewed officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To obtain information about IRS’s efforts to register and monitor Electronic Return Originators (EROs), we interviewed officials from SB/SE’s ERO Monitoring Program and reviewed IRS Publication 1345 covering requirements for EROs. To determine the number of EROs, monitoring visits, and sanctions issued, we relied upon IRS’s e-file Provider Monitoring Report. In addition, we reviewed various other documents, including a recent report by the Treasury Inspector General for Tax Administration. To describe IRS’s efforts to investigate criminal and fraudulent paid preparer behavior, we interviewed officials from CI and reviewed case file information.
We used data from the CI Management Information System and interviewed CI officials to determine statistics on the cases worked. To assess the reliability of these data, we reviewed existing documentation related to the data sources and interviewed officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To examine efforts suggested by IRS’s Taxpayer Advocate Service to provide additional IRS oversight of paid preparers or provide more consumer protection, we interviewed officials from the Advocate’s office about a proposal to license paid preparers. We also reviewed the 2001 and 2002 National Taxpayer Advocate’s reports to Congress, where the Advocate’s proposals are explained and discussed. To provide examples of actions taken against problem paid preparers by other federal, state, and local governments, we relied upon interviews and reports from a variety of sources, including paid preparers, professional and consumer organizations, officials from several states, and some federal agency representatives. Based on these interviews and reports, we examined state and local laws that create oversight of certain aspects of paid preparer behavior. We did not attempt to identify all federal, state, and local governments or professional organizations that have a paid preparer or RAL oversight role. Those discussed are only examples of what we found during our research, and there may be others. The data cited from IRS for the estimated number of individual filers in 2001 who paid someone to prepare their tax returns, the amount paid in 2000 for tax preparation, the number of paid preparers in 1999, and the number of RALs taken out in 2001 and 2000 are considered background information. As such, we did not verify these numbers. We conducted our work from April to October 2003 in accordance with generally accepted government auditing standards.

“We have a few questions about your experiences last year in completing your federal income tax return. We are interested in whether or not you paid someone last year to fill out your 2001 income tax return.”

“Did you pay someone to prepare your tax return last year?”
__47__ YES (Continue with Question 2)
__51__ NO (Stop)

“For the rest of the questions, we’ll refer to this person as the paid preparer. Was the paid preparer who filled out your tax return: A) A tax preparation service such as H & R Block or Jackson-Hewitt, B) An accountant, CPA or lawyer, C) Someone else, or D) do you not know?”

We are 95 percent confident that the percentage estimates of our survey are within +/- 5 percentage points or less of what we would have obtained if we had surveyed the entire study population.
* Percents do not add to 100 due to rounding.

“Next, we ask about some of the practices that paid preparers sometimes follow. For each one, please tell me whether you know if it is something your paid preparer did do or did not do or whether you do not know.”

“First, did your paid preparer give you a copy of your completed tax return, not give you a copy or do you not know?”
__95__ YES, GAVE COPY
___1__ NO, DIDN’T GIVE COPY (skip to 4c)
___4_ DON’T KNOW (skip to 4c)

“Did your paid preparer sign your copy of your completed tax return as the preparer, not sign your copy, or do you not know?”

“Did your paid preparer see any documents that showed the income you received or any deductions or tax credits that you might have claimed?
That is, did the paid preparer see the documents, not see them, or do you not know?”

“For the next question, I want you to think about everything about you that affects the amount of taxes you pay, such as whether or not you have children at home, earn interest from a bank account, or pay a mortgage. Do you believe that your paid preparer had enough information about your situation to accurately prepare your income tax return, didn’t have enough information, or don’t you know?”
____5_ NOT AT ALL CONFIDENT
* Percents do not add to 100 due to rounding.

“Has the IRS sent you any type of notice saying that any part of your tax return from last year had to be changed, or has the IRS not contacted you, or do you not know whether you have been contacted?”

“Now think about your new 2002 tax return that is due soon. Do you think you will use a paid preparer again or not use a paid preparer for this new tax return?”
__87_ YES, USE A PREPARER AGAIN
___7_ NO, NOT USE A PREPARER (stop)
___6_ DON’T KNOW (stop)

“The U.S. General Accounting Office is doing research on people’s opinions and experiences with their paid preparers. Would it be okay with you if someone from the General Accounting Office telephoned you in the next month for a research interview?”
__45_ YES (NOT REWEIGHTED TO U.S. POPULATION)
__54_ NO (stop)

In addition to those named above, Vince Balloon, Larry Dandridge, Katherine Davis, Michele Fejfar, Evan Gilman, Tre Forlano, Brittni Milam, Libby Mixon, Cheryl Peterson, and Peter Rumble made key contributions to this report.

Over 55 percent of the nearly 130 million taxpayers in tax year 2001 used a paid tax preparer. However, using a preparer may not assure that taxpayers pay the least amount due. Last year, GAO estimated that as many as 2 million taxpayers overpaid their 1998 taxes by $945 million because they failed to itemize deductions, and half of these used preparers.
GAO was asked to (1) obtain the views of taxpayers about paid preparers and examples of preparer performance, including any problems, and (2) describe the Internal Revenue Service's (IRS's) oversight of problem preparers; the challenges facing IRS in dealing with problem preparers, especially the Office of Professional Responsibility; and the efforts to address those challenges. To obtain the views of taxpayers who used preparers, GAO surveyed a nationally representative sample of taxpayers. GAO estimates that most of the taxpayers who used a paid preparer believe they benefited from doing so. Many taxpayers told us they believed they would have great difficulty filling out their own tax forms because they do not understand their filing requirements. At the same time, some taxpayers are poorly served when paid preparers make mistakes, causing taxpayers to over- or underpay their taxes or pay for services, such as short-term loans called Refund Anticipation Loans (RALs), without understanding their costs and benefits. The evidence available does not allow a precise estimate of the extent of problems caused by paid preparers, but nothing suggests that the percentage of taxpayers affected is large. Nevertheless, even a small percentage of the over 72 million taxpayers who used paid preparers in 2001 translates into millions of taxpayers who are potentially adversely affected. IRS has several offices responsible for taking action against problem paid preparers, including the newly formed Office of Professional Responsibility. These offices sanction preparers for violating standards of conduct; assess monetary penalties for violating tax laws when preparing returns; monitor and, if justified, sanction problem preparers offering electronic filing and RALs; and investigate fraudulent preparer behavior. However, balancing resources devoted to such efforts against those devoted to other IRS priorities is a challenge. In addition to IRS, other federal agencies, state and local governments, and professional organizations have a role in regulating paid preparers. At least two proposals exist to expand IRS's oversight of paid preparers. Consideration of such proposals is complicated by the difficulty of developing reliable estimates of the number of taxpayers affected by problem preparers or the effectiveness of the actions taken against them.
More than 14,000 different types of commodities are imported into the United States, involving more than 15 million separate shipments or transactions each year. In 1992 and 1993, U.S. imports were valued at $532.7 billion and $580.5 billion, respectively. Customs has the primary responsibility for processing imports to ensure that they do not violate U.S. laws and regulations. Also, Customs is responsible for ensuring that duties and fees are paid and, with more than $21.6 billion collected in fiscal year 1993, is second only to the Internal Revenue Service in its revenue-producing function. Customs also accumulates basic information on imports in its Automated Commercial System (ACS) database for oversight and statistical purposes. For about 94 percent of the ACS entries, importers or licensed brokers—referred to as "filers"—electronically enter data directly into ACS and generally follow this with a manually prepared entry summary. For the remaining 6 percent of the ACS entries, the filers elect not to file the entry electronically, and Customs must enter the information into ACS from the manually prepared entry summaries. Periodically, Census extracts data from ACS for use in developing and publishing trade statistics. The Census data are available in two forms. The first and most comprehensive is the Import Detailed Data Base, which contains information on individual transactions and is restricted to official use. The second consists of various reports and publications that summarize trade statistics and are made available to the public. In 1990, two professors at Florida International University (FIU), using the summary Census data, found wide variations in the unit values for seemingly identical commodities. For example, the professors found that the unit value for razors varied from $0.03 to $34.81 each. They also found that emeralds from Panama had an average unit value of $974.58 a carat, compared with $5.29 a carat for those from Brazil. Other commodities showed similar disparities. As the results of the FIU study became known, concerns were raised that the differences in unit value could be the result of criminal activities, such as money laundering. For example, a person in the United States could transfer money to another country simply by paying far too much for an imported product in an exchange that would otherwise appear legitimate. As discussed with the Subcommittee, we determined that statistical sampling of a database as large as ACS' was impractical, given the time constraints of our work. Instead, we agreed to judgmentally select eight commodities for detailed examination. We selected these commodities from the Harmonized Tariff Schedule (HTS) of the United States, which classifies and describes all commodities subject to importation and lists the applicable duties, fees, and quotas for each commodity. We selected a broad variety of commodities that generally had narrow definitions and provided some overlap with previous studies by Customs and FIU. Three of the eight commodities were subject to quotas. To meet our first objective of determining how widely unit values for identical types of commodities varied, we used the Import Detailed Data Base for fiscal year 1992 to compute and analyze unit values and to develop statistical profiles for each of the eight commodities. To meet our second objective of determining why these variations occurred, we selected 10 transactions across a wide range of values under each of the eight commodities.
For each of these 80 transactions, we then examined supporting documentation, such as entry summaries, invoices, and shipping manifests, to verify that the commodity was appropriately classified and to recalculate the unit values that should have been reported. Our objectives, scope, and methodology are discussed in more detail in appendix I. Appendix II provides a summary comparison of the commodities we selected for analysis. Appendixes III through X show the results of these analyses by commodity, including (1) comparisons of high, low, average, and median unit values by U.S. port of entry, country of export, importer, and method of transport; (2) quantities shipped and unit values at each decile across the range of values; and (3) a comparison of the unit value we computed with those in the ACS database for the selected transactions. We obtained written comments on a draft of this report from Customs and Census. Their comments are evaluated at the end of this letter and are reprinted in appendixes XI and XII. We did our work between November 1993 and August 1994 in accordance with generally accepted government auditing standards.

As in the FIU study, we found wide variations in unit values for transactions within the same commodity classification. Table 1 shows the highest, lowest, and average unit values for each of the eight commodities. Appendixes III through X show the unit values for each commodity across percentile ranges and provide further comparisons by U.S. port of entry, country of origin, importer, and method of shipment. As seen from table 1, variations in unit value were the norm for the eight commodities we examined. Raw cane sugar had the narrowest unit value range and, even then, the highest value of $1.75 a kilogram was four times the lowest value of $0.43 a kilogram. At the other extreme, the high unit value of $3,809 per meter for wood dowel rods was 952,250 times the lowest unit value of $0.004 per meter. Some unit values appeared implausible. Such was the case with facsimile machines valued at $5.62 each, pantyhose for $1,267.50 a dozen pair, and hypodermic syringes as low as $0.01 and as high as $3,485 each. Also, 185 shipments of scrap gold, which accounted for 783,380 grams (or 4.3 percent of the total quantity), each had a unit value of more than $11.60 a gram—the price of pure gold at the time. Overall unit values for scrap gold ranged from $0.02 to $4,368 a gram, with an average unit value of $3.75 a gram.

In examining the supporting documentation for individual transactions, we found two causes for variations in unit values. First, the commodity classifications used by Customs were so broad that a particular code could cover a wide assortment of products with natural variations in value. In practice, Customs can do little about the wide commodity definitions, since they are determined through a combination of law, international agreement, and agreements among various U.S. agencies, including Customs. Second, filers frequently made errors in entering the commodity code, quantity, or total value into ACS. While Customs could correct these errors if it knew of them, the current parameters used to detect unit value anomalies are so broad that they identify only those errors involving extremely high or low unit values. In coding commodities for entry, Customs requires filers to choose from the more than 14,000 codes specified by the HTS. The HTS is subdivided into sections, chapters, and specific commodity types.
The codes range from 4 to 10 digits in specificity, depending on the degree to which a particular commodity is subdivided. For example, facsimile machines fall within "electrical machinery and equipment" (chapter 85); at the 4-digit level (8517), under "electrical apparatus for line telephony or telegraphy"; at the 6-digit level (8517.82), under "telegraphic"; and, at their most specific, at the 10-digit level (8517.82.00.40). Even with the large number of specialized codes, commodities within a particular HTS classification can vary by type, quality, and intended use. As shown in the transaction analyses in table 8 of appendixes III through X, these variations in products lead to variations in unit values. For example, the facsimile machine classification described in appendix V covers everything from inexpensive and mass-produced, home-use models to machines that are highly specialized and designed to be used in complex and sophisticated communications systems. We analyzed the supporting documentation for the 10 facsimile machine transactions and found machines that were properly valued as low as $264.14 per unit and as high as $26,425 per unit. Similarly, the pantyhose classification discussed in appendix IV is broad enough to include such diverse products as pantyhose of differing grades and sizes, tights, and support hose. For the 10 pantyhose transactions, we analyzed the supporting documentation and found products that were properly valued from as low as $3.50 a dozen pair to as high as $156.59 a dozen pair. Two of the transactions, with unit values of $156.59 and $66.64 a dozen pair, were special orders intended for promotional uses. The scrap gold classification is broad because it covers gold waste and scrap, regardless of the weight, purity, or metals to which it is clad. For example, we examined the supporting documentation for one transaction where the commodity was described on the invoice as "scrap gold for refining" and was properly valued at $9.26 a gram. We examined the supporting documentation for another transaction and found the scrap gold was properly valued at $0.22 a gram and, according to the invoice, consisted of gold and brass "floor sweeps."

The U.S. International Trade Commission publishes the HTS, following guidelines set by law, international agreement, and agreements among U.S. agencies. As one of these agencies, Customs can only recommend changes in the level of specificity within individual HTS classifications. Customs officials said they would not necessarily make changes in the definitions even if they could do so. According to these officials, while narrower product definitions would reduce the range of unit values within a particular commodity code, the higher level of specificity also would increase the number of codes with which Customs and the filers would have to contend. Another reason unit values for imports varied so widely is that the Import Detailed Data Base contains errors. Such errors occur when the filer enters the wrong HTS code, quantity, or total value into ACS and the data are not corrected prior to being extracted by Census. We examined the supporting documentation for 80 transactions, and we found that 45 transactions contained one or more types of errors. For 14 of the 45 transactions with errors, the filer entered the wrong HTS code. Thus, while the unit value may have been computed properly, it was entered under the wrong commodity classification.
The following are examples of valuation errors created by the filer having entered the wrong HTS code:

Four of the 10 facsimile machine transactions were wrongly coded because the products shipped were not facsimile machines. Two of these transactions, with unit values of $492.84 and $5.62 each, actually were for spare parts. A third transaction, with a unit value of $29.23 each, was for a shipment of modems. The fourth transaction—and by far the largest single unit value we analyzed—was for a telegraph machine with a unit value of $147,292.

Three of the 10 raw cane sugar transactions—accounting for 64.7 percent of the total volume shipped during 1992—were wrongly coded. Since the product did not meet the commodity definition of raw sugar, it should have been listed under another cane sugar category.

Three shipments of unsweetened cocoa, with unit values of $234.43, $2.62, and $0.24 a kilogram, were wrongly coded. Even though the products contained cocoa, one shipment was a specialty concentrate and the other two shipments were cocoa cake. Each type of product has its own HTS classification.

For 36 of the 45 transactions with errors, the filer entered the wrong quantity, the wrong total value, or both into ACS. Five of these 36 transactions also involved a wrong HTS code and thus are included among the 14 transactions discussed above. The following are examples of the types of quantity and value errors we found:

On a shipment of hypodermic syringes, the filer showed the quantity as 600,000 when it should have been 60,000. Since the total value was properly shown as $135,000, the unit value was computed as $0.23 each when the correct unit value was $2.25 each.

On a shipment of wood dowel rods, the quantity was incorrectly shown as 2 meters when it should have been 4,618 meters. This resulted in the computation of the unit value as $3,809 per meter when the correct value was $1.65 per meter. The opposite occurred on another shipment, when the quantity was shown as 2,709,190 meters instead of 225,765 meters. Thus, the unit value should have been $0.05 per meter instead of $0.004 per meter.

A shipment of gold had a unit value of $4,368 a gram, or 379 times the going rate for pure gold at the time, because the filer had entered the wrong quantity. The supporting invoice showed the quantity as 2.05 kilograms and, apparently, the filer showed this as 2 grams in making the entry. The correct unit value of the scrap gold was $4.26 a gram, or less than half of the then market price of $11.54 a gram for pure gold.

Eighteen shipments of unsweetened cocoa showed a unit value of $0.00 a kilogram because, in each case, no quantity was shown on the Import Detailed Data Base. We analyzed the supporting documentation on one of these shipments and found that the quantity should have been 8,164 kilograms. Since the total value was properly entered at $14,940, the unit value should have been $1.83 a kilogram.

For the transactions we examined, the effect of the filer errors on revenues was minimal. However, the errors raise questions about the accuracy of trade statistics and Customs' ability to use unit values as a screening mechanism in ACS to detect data errors or to identify problems, such as quota violations or improper payment of duties and fees. Of the 45 transactions we found with errors, we identified only 5 transactions where we could determine the duties or fees were wrong, with a net overcollection of $114.57.
Each of these incorrect duties or fees was caused by a quantity or value error. We could not determine the effect on duties for two other transactions because the supporting documentation did not contain sufficient information to identify the HTS code that should have been entered. None of the classification errors resulted in a dollar loss because the duties and fees actually paid were equal to or greater than what should have been paid. Similarly, most of the remaining errors involved quantity, whereas duties and fees typically are tied to total value. Quantity errors could be a problem where quotas are concerned, and three of the commodities we selected—raw cane sugar, tire cord fabric, and pantyhose—were subject to quotas. Again, however, the errors we found did not raise concerns that quotas may have been exceeded significantly. In two cases, the quantities were overstated because of errors, so the quota was not exceeded. In the third case, the understatement of quantity was minimal, amounting to only 0.026 percent of the total quantity shipped for the year.

Errors in the Import Detailed Data Base can affect trade statistics. When the filer enters the wrong quantity or value into ACS, the effect is limited to the HTS classification being examined. In those cases where the wrong HTS code is entered, the quantity and value data will be in error for both the classification that was entered by mistake and the classification that should have been entered. Since we did not randomly sample commodities or transactions, we cannot project the overall effect of filer errors on trade statistics. However, raw cane sugar, one of the commodities we selected, had only 32 transactions for 1992. We analyzed 10 of the 32 transactions and found that 3 transactions were improperly coded. The three transactions accounted for 64.6 percent of the total quantity and 55.7 percent of the total value reported. The effect of these three classification errors was an overstatement of both quantity and total value in the raw cane sugar category. If not for these 3 errors, the total quantity would have been 931,237 kilograms instead of the reported 2,632,911, and the total value would have been $630,491 instead of $1,422,070. Presumably, the categories that should have been entered were understated by like amounts.

As a means to detect potential errors in the trade data drawn from the Import Detailed Data Base, Census developed a series of screening parameters that provide a warning that the information entered is outside of the norm. Two types of warnings involve unit value—one warning if it is too high and one warning if it is too low. In effect, the warnings provide a range within which the unit value should fall for a particular commodity code. Table 2 shows the Census unit value ranges for each of the eight commodities we selected for analysis. The Census ranges are integrated into Customs' ACS, which is to use them to screen each automated entry for unit value anomalies. When the unit value of a particular entry falls above or below the Census range, ACS is to first warn the filer, who then can review the data entered and make corrections if necessary. If the numbers are accurate, but outside the range, Customs is to require the filer to provide supporting documentation with the paper entry summary that follows the electronic submission. ACS is also to alert Customs officials that the entry is outside of the range, and they can review the supporting documentation and ask the filer for more details, if desired.
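The screening just described is, at bottom, a range check on a computed ratio. The following minimal sketch (in Python) illustrates that general logic only; it is not Customs' actual ACS code, and the simplified value and quantity inputs are stand-ins for the full entry record. The $0.01 to $500 syringe range used in the example is the Census range discussed later in this letter, and the misstated syringe quantity is the error described above.

    # Illustrative sketch of unit value range screening; NOT Customs' actual
    # ACS implementation. Inputs are simplified stand-ins for an entry record.

    def unit_value(total_value, quantity):
        """Unit value is the reported total value divided by the reported quantity."""
        if quantity <= 0:
            return None  # a missing or zero quantity yields no unit value
        return total_value / quantity

    def screen_entry(total_value, quantity, low, high):
        """Warn when the computed unit value falls outside the [low, high] range,
        mimicking the too-high and too-low warnings described above."""
        uv = unit_value(total_value, quantity)
        if uv is None:
            return "WARNING: quantity missing or zero"
        if uv < low:
            return f"WARNING: unit value ${uv:.2f} is below the range"
        if uv > high:
            return f"WARNING: unit value ${uv:.2f} is above the range"
        return "OK"

    # The hypodermic syringe error described earlier: $135,000 entered with a
    # quantity of 600,000 instead of 60,000 yields $0.23 instead of $2.25.
    # Both values fall inside the broad $0.01-$500 range, so no warning results.
    print(screen_entry(135_000, 600_000, 0.01, 500))  # OK -- error undetected
    print(screen_entry(135_000, 60_000, 0.01, 500))   # OK -- correct entry
    print(screen_entry(3_485, 1, 0.01, 500))          # WARNING -- above the range

As the example shows, a quantity error that shifts a unit value by a factor of 10 can pass unnoticed through a range spanning more than four orders of magnitude, which is why only extreme anomalies trigger review.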
A unit value outside the Census range does not necessarily mean that Customs will review the transaction or make changes to its database. For example, Customs' procedures provide that no changes to the Import Detailed Data Base generally are required for nontextile commodities if the total value of the transaction is less than $10,000 and no quota or voluntary restraint agreement is involved. Also, Customs officials may choose to take no action or correct only portions of the data, such as those necessary to ensure the proper collection of duties and fees. We examined the supporting documentation for 80 transactions and found that 15 had unit values that were either higher or lower than the Census ranges. In all but 1 of these 15 cases, the filers had made errors in entering the HTS code, the quantity, or the total value into ACS. The only transaction that fell outside of the Census ranges, but was properly entered, was a shipment of tire cord fabric in which the high unit value of $44.64 a kilogram was due to its being a prototype item with a small quantity. Customs officials had not corrected the Import Detailed Data Base for any of the 14 transactions on which we found errors. In some cases, however, the officials had made corrections to the entry summary documents, duties and fees charged, or other modules of ACS.

One limitation in the Census ranges is that they are so broad they are of little use in identifying any but the most extreme variations from the norm. This limitation occurs because the Census ranges were designed to detect only those unit values it considered most likely to be erroneous. According to Census, a group of transactions falling outside of a range may indicate the need to adjust the range for a number of reasons, including natural value fluctuations, a change in the diversity of the products included in a particular category, incorrect reporting, or new products entering the trade flow. For the eight commodities we selected, only 196 (or 1.8 percent) of the 11,100 transactions in 1992 fell outside of the Census ranges.

The Trade Agreements Act of 1979 (P.L. 96-39) established one primary valuation method—transaction value—and four secondary methods for determining customs value. Under the transaction value method, Customs generally accepts the price agreed to between the buyer and the seller as the basis for Customs' valuation, as compared with the more complex procedures of the prior valuation system. In practice, Customs officials said that Customs relies on the value declared by the filer unless it has some reason to question the value's accuracy. In 1990, Customs officials became concerned that valuation had become a low priority within Customs and performed an internal valuation review. The study confirmed the need to re-emphasize valuation in the entry process so that Customs would be better equipped to detect importer attempts to manipulate valuation laws and regulations. Since its 1990 study, Customs has taken several courses of action to address concerns on the valuation of imports. These actions include establishing valuation as one of six priorities in Customs' Trade Enforcement Strategy Plan, creating a National Valuation Center to help implement the Strategy Plan, increasing training of import specialists on valuation issues, increasing analysis of valuation in enforcement and compliance activities, and implementing an Entry Summary Review Program to increase uniformity in the classification and appraisement of imports.
Customs' analyses of unit values identified the same types of anomalies we found in our review. For example, an enforcement initiative in 1992, which studied shipments into the Miami District, found asparagus valued at $7 a kilogram compared with a world average of $1.38 a kilogram and dryers with a unit value range of $4.24 to $746,723 each. Similarly, in 1993, national import specialists in New York analyzed 1,199 shipments of automatic typewriters and word processing machines and found unit values that ranged from $1.83 to $17,937 each, with an average of $124.67 each. Customs also identified some of the same causes for unit value variations that we identified. An April 1994 Quality Assurance Review draft report, which dealt with the statistical reporting of trade data, pointed out that the wrong HTS codes were entered in ACS because (1) the codes were difficult to interpret and use, (2) the filers did not have sufficient expertise in determining the proper code, and (3) there were few disincentives for using the wrong code. The report also agreed that the Census ranges on valuation were too broad. The report made a number of recommendations for improving the entry, use, and screening of valuation data. These recommendations were preliminary and had been disseminated for field comment; thus, we did not evaluate them.

Customs currently is redesigning its entry summary selectivity process, which defines the procedures followed in selecting import documentation for further review by import specialists. This redesign is part of a larger redesign effort, which also is considering changes in the way cargo is selected for physical inspection. Customs officials have not yet determined the degree to which valuation will be a part of the entry summary selectivity process redesign, although they said it may play a prominent role. Customs officials said that changing the way the Census ranges are used presents a dilemma. They realize the current ranges are too broad to detect many errors, and they had considered narrowing them. However, while narrowing the ranges would identify more problem entries, this action also would (1) create the need for reviewing more entries that do not have a problem and (2) divert Customs' resources from other endeavors. Nevertheless, Customs officials said they will continue to look for ways to improve the use of unit value screening mechanisms. We asked the Customs officials whether they had considered using two sets of ranges—one fairly narrow set for the filer and a broader set for Customs and Census. Such a system would place more of the burden on the filers who are making the errors and would encourage these filers to use greater care when entering data. Since Customs and Census could continue to use the broader ranges for their own purposes, any increased workload for the agencies would be minimized. One of the commodities we selected for analysis, hypodermic syringes, can be used as a hypothetical example of how narrower ranges may be beneficial. At the time of our review, the acceptable Census range for this commodity was from $0.01 to $500 each, with only 2 of the 417 transactions for the year falling outside of this range. However, had the Census range been $0.05 to $6.68 each—the unit values at the 20th and 80th percentiles for all transactions during fiscal year 1992 ranked by descending unit values—125 of the 417 transactions would have fallen outside of the range.
Included in the transactions that would have been questioned under the new range, but not the old range, was a shipment of 600 syringes with a unit value of $95.35 each. We determined that this shipment should have been recorded at a quantity of 319,800 and a unit value of $0.18. While we could not determine how many other transactions were in error, we did note that a total of 24 transactions had a unit value of more than $40 each, which Customs officials said is improbable for a single syringe. Customs officials said that, while a two-tiered set of unit value ranges merited consideration, they had not considered such a process and were not sure whether it could be done within the current system. The officials planned to study the feasibility of a two-tiered process, but they had not done so at the completion of our work.

On the basis of our analysis of eight commodities imported during 1992, unit values did vary widely, with the highest values ranging from 4 times to almost 1 million times the lowest values. Certain unit values—such as pantyhose priced as low as $0.00 a dozen pair and as high as $1,267.50 a dozen pair—appeared implausible. We found two primary causes for these wide-ranging values. First, the commodity definitions themselves may be so broad that they cover a diverse group of products with correspondingly diverse values. Second, the importers and brokers may enter the wrong classification code, quantity, or total value into Customs' ACS. Thus, many of the unit values being calculated from the Import Detailed Data Base may be incorrect. Our analysis does not allow us to make any generalizations about error rates across all commodities or even within the commodities we examined. However, the high overall error rate (errors in 45 of 80 transactions); the frequency of errors in HTS codes, which affects both the incorrect commodity and the correct commodity; and the fact that Customs' own research has also shown a high number of errors lead to concerns about the accuracy of these data. The errors we found did not cause a loss of revenues or problems with quotas in relation to the limited number of commodities and transactions we examined. However, our analysis has demonstrated the potential for errors to affect revenues, quotas, and trade statistics. The errors also could lead to difficulties for Customs in using unit value ranges to identify data errors and import compliance problems. To improve the quality of filer data, Customs could consider adding narrower unit value ranges to ACS at the point of data entry; in considering such a change, Customs would need to weigh its benefits against the costs to importers.

We recommend that the Secretary of the Treasury direct the Commissioner of Customs to determine the feasibility of adding narrower unit value ranges to Customs' ACS that will allow the filer to identify and correct more errors at the point of data entry. If the Commissioner finds that such ranges are feasible and cost effective, he should take the appropriate steps to implement them.

The Customs Service and the Bureau of the Census provided written comments on a draft of this report. Customs agreed with our conclusions and recommendation and discussed recent actions that it had taken to increase the accuracy of data that are reported for trade statistics. Customs stated that, by placing emphasis on improving overall compliance levels through its Compliance Measurement program, it will make major improvements in the level of compliance, with a resultant increase in the quality of trade data.
Customs also discussed a pilot program that will use reasonable maximum and minimum unit values to screen entries for potential errors and discrepancies. Also, Customs said it is working in partnership with Census to ensure that the ACS redesign program will provide a long-term basis for overall statistical improvement. Census stated that it believed the report should have specified that ACS provides Customs with the capability to override numerous Census edits, including price range and quantity requirements. We agree with this point. On pages 10 to 11, we discuss ACS procedures for screening each automated entry for unit value anomalies and Customs' review of particular entries that fall above or below the Census range. Our primary concern is Customs' use of the data to ensure compliance and to generate accurate trade statistics. In this regard, we recommend that Customs determine the feasibility and cost effectiveness of developing narrower unit value ranges for its own use. Census also believed clarification was needed in our statement that Census may broaden the unit value range when too many transactions fall outside the range. Census stated that it does not automatically adjust a range and that the more likely scenario is that adjustments are a reaction to new products entering the trade flow. One of the ways of identifying new products is through groups of transactions falling outside an established range. We have modified the language on page 11 accordingly. Our main point is that the ranges are too broad for any practical use of the unit values as a screening device by Customs in ensuring compliance and accuracy of transaction data.

We are providing copies of this report to the Secretary of the Treasury, the Secretary of Commerce, the Commissioner of Customs, and other interested parties. Copies also will be made available to others upon request. Major contributors to this report are listed in appendix XIII. If you need additional information or have any questions, please contact me at (202) 512-8777.

On October 23, 1992, the Chairman of the Subcommittee on Oversight, House Committee on Ways and Means, requested that we conduct a study of unit values of imports and exports. His concerns were based on work in 1990 by two professors from Florida International University (FIU), which found significant variations in the unit values of seemingly identical commodities. Specifically, the Chairman asked us to assess the risk of false pricing of imports and exports as a cover for money laundering, how such schemes were being used, the pervasiveness of the problem, and the federal response needed. On September 14, 1993, we briefed the Subcommittee on our work to date. We said that laundering money through manipulative import and export pricing is possible; however, it would be difficult because (1) illicit currency would already have to be laundered once by getting it into the banking system and (2) easier methods of laundering money exist, such as simply smuggling it out of the country. Neither we nor the Customs Service had found evidence of any widespread import and export pricing schemes. On the basis of our analyses of selected transactions, we believe the more likely explanation was that the variations were the product of erroneous data being provided to Customs by the industry. The Subcommittee noted that the original request letter was broad and that it was concerned with the overall issue of import valuation, not just money laundering.
They asked that we continue our work, but refocus our analysis. In this regard, we agreed to limit our scope to imports and to revise our objectives to determine (1) how widely unit values for identical types of commodities varied and (2) why such variations occurred. They further agreed to our providing detailed analyses of judgmentally selected commodities and transactions, recognizing that the results would be illustrative, but not projectable. As the focus of our study, we obtained from Customs the Import Detailed Data Base, commonly referred to as the IM115 database, for fiscal year 1992, which was the most recent year available. These data, extracted from Customs' Automated Commercial System (ACS) for use by Census in developing trade statistics, include all import transactions for the year. In total, the files included 15,022,423 records. We used the Harmonized Tariff Schedule (HTS) of the United States as the source for selecting commodities. The HTS provides the official classification codes and descriptions for more than 14,000 types of commodities subject to importation into the United States. The HTS also provides information on the duties, fees, and quotas. We selected eight commodities for detailed analysis. These were pantyhose, raw cane sugar, scrap gold, tire cord fabric, unsweetened cocoa, wood dowel rods, hypodermic syringes, and facsimile machines. While the selections were judgmental, we followed some general criteria. In general, we chose commodities that would appear to have a relatively narrow product description. The one exception was facsimile machines, which were known to have a broad definition and were chosen for comparison. We chose three commodities (raw cane sugar, tire cord fabric, and pantyhose) that were subject to quotas. We chose two commodities (scrap gold and pantyhose) that had been studied earlier by Customs and were known to have unit value anomalies. We also chose one (scrap gold) that had been included in the FIU study. At Customs' recommendation, we restricted our analysis of the Import Detailed Data Base to entries listed as "consumption entry" or "warehouse withdrawals." This restriction was to ensure we were looking at original entries only and to prevent double counting. We then extracted data from the following fields on each of the commodities selected: entry date, importer, consignee, quantity of items in shipment, Customs' valuation of shipment, port of entry, method of transportation, and country of origin. At Customs' recommendation, we did not use the unit price variable in the Import Detailed Data Base, but rather calculated unit value on our own by dividing the Customs valuation by quantity shipped. For each commodity, we ranked the individual shipments or transactions in descending order by unit value. We then divided the overall distribution of transactions for each commodity into deciles. Since many transactions had the same unit value, the number of transactions in each decile varied in some instances. We also developed analyses for each commodity showing the number of transactions, total quantity, total value, highest unit value, lowest unit value, median (by quantity and number of shipments) unit value, and average unit value by country of origin, importers, U.S. port of entry, and method of transport.
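The ranking and decile computation just described is straightforward; the following is a minimal sketch, assuming each transaction reduces to a (total value, quantity) pair. It is illustrative only, is not the software used for this review, and it sidesteps the handling of tied unit values noted above.

    # Illustrative sketch of the decile profile described above; not the
    # software actually used. Each transaction is a (total_value, quantity) pair.

    def decile_profile(transactions):
        """Rank transactions in descending order by unit value and return the
        unit value at each decile boundary (10th through 90th)."""
        unit_values = sorted(
            (value / qty for value, qty in transactions if qty > 0),
            reverse=True,
        )
        n = len(unit_values)
        return [unit_values[min(n - 1, d * n // 10)] for d in range(1, 10)]

    # Hypothetical transactions for a single commodity code
    sample = [(1000.0, 100), (500.0, 10), (90.0, 30), (75.0, 300), (20.0, 2), (5.0, 500)]
    print(decile_profile(sample))

The same sorted list also yields the percentile-based screening ranges discussed earlier in this letter (for example, the unit values at the 20th and 80th percentiles).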
For our transaction analysis, we selected 10 transactions for each of the 8 commodities. Again, we selected these judgmentally but used some broad criteria in making the selections. We selected transactions that would give us a range of values across (although not necessarily in each of) the deciles, a representation of the extremely high and extremely low unit values, a range across importers, a comparison of transactions by the same importer, comparisons between the number of shipments and quantity shipped by the same importer, and a range of quantities shipped. We also used individual criteria for selected commodities. For example, we were interested in transactions of scrap gold where the unit value was more than the value of pure gold, a transaction of raw cane sugar that accounted for more than half of the quantity imported during the year, and transactions on quota commodities where the quantities appeared too small for the values cited. Because we did not randomly sample the commodities or transactions, we cannot generalize about the overall level of errors in the Import Detailed Data Base. To verify the correct unit value for each of the transactions, we obtained the supporting documentation maintained by Customs. These documents included such items as the entry summary, invoices, shipping documents, packing lists, certifications of quota eligibility, laboratory reports, and miscellaneous memoranda. We compared the quantities, values, and HTS codes shown in the Import Detailed Data Base with these documents. Where we noted discrepancies or could not determine the correct amount, we contacted the cognizant officials at Customs' ports and districts to determine what the correct entries should have been. We also discussed each commodity and transaction with Customs' cognizant National Import Specialist in New York as well as with Customs' port representatives when more information was needed. We obtained and analyzed other data on the transactions from Customs' ACS to determine the amounts of duties and fees paid, questions, if any, raised and resolved during the entry process, etc. In some cases, Customs officials obtained information directly from the importers or brokers for our use; however, we did not contact the importers and brokers ourselves. Because the only unit value screens in Customs' ACS were the ranges devised by the Census Bureau, we discussed each of the commodities selected with Census officials and attempted to determine how transactions with unit values outside the Census ranges were resolved. The data available were limited, because neither Census nor Customs maintains a complete record of what was questioned or how the matter was resolved. We met with Customs officials in Washington, D.C.; Atlanta; Miami, FL; and New York to discuss enforcement activities, activities related to the entry selectivity redesign project, quality assurance reviews, and other special projects. We also held telephone discussions with Customs' import specialists at various Customs' ports and districts nationwide.

SCRAP GOLD
UNIT OF MEASUREMENT: Gram
DESCRIPTION: This category includes gold waste and scrap, including metals clad with gold. It does not include sweepings containing other precious metals or gold-plated items. No distinction is made within the code for the weight or purity (e.g., 10 carat, 14 carat, 24 carat, etc.).
[The detailed tables of quantities and unit values by country of export, importer, U.S. port of entry, and method of transport are not reproduced here. Table footnotes: country figures include France, Andorra, and Monaco; importer names deleted to avoid identification with trade-sensitive data; transport figures include pipeline and powerhouse.]
Transaction analysis:
Quantity understated by 2,048 grams. No effect on duties and fees. Unit value changed.
Quantity understated by 4,601 grams. No effect on duties and fees. Unit value changed.
Quantity understated by 175,660 grams. No effect on duties and fees. Unit value changed.
Quantity understated by 181,878 grams. No effect on duties and fees. Unit value changed.
Quantity and total value understated by 6,000 grams and $5,000, respectively. No effect on duties and fees. Unit value changed.
Quantity overstated by 2,340 grams. No effect on duties and fees. Unit value changed.
Quantity overstated by 41,800 grams. No effect on duties and fees. Unit value changed.
Legend: N/A = Not applicable.

PANTYHOSE
UNIT OF MEASUREMENT: Dozen pair
DUTY: The duty ranges from free to 72 percent of value, depending on the country.
DESCRIPTION: Products in this category include hosiery from fabric that is made of synthetic fibers measuring less than 67 decitex per single yarn. The level of decitex in the hosiery determines the sheerness or the heaviness of the material; a low level means that the stocking is sheer, and a higher level means that it will be heavier. The range of products includes various styles and ranges of pantyhose, tights, and stockings for varicose veins.
[The detailed tables of quantities and unit values are not reproduced here; importer names deleted to avoid identification with trade-sensitive data.]
Transaction analysis:
Quantity understated by 145 dozen pair. No effect on duties and fees. Unit value changed.
Quantity overstated by 24 dozen pair and total value understated by $270.00. Duties underpaid by $45.90. Fees underpaid by $0.34. Unit value changed.
Quantity overstated by 35,750 dozen pair. No effect on duties and fees. Unit value changed.
Legend: N/A = Not applicable.

FACSIMILE MACHINES
UNIT OF MEASUREMENT: Each unit
DUTY: The duty ranges from free to 35 percent of the value, depending on the country.
DESCRIPTION: This commodity is an electrical apparatus which electronically transmits and reproduces printed material. The category is extremely broad, covering items from simple units for home use to elaborate units integrated into complex commercial applications.
[The detailed tables of quantities and unit values are not reproduced here; importer names deleted to avoid identification with trade-sensitive data.]
Transaction analysis:
Entry should have been made under another category covering other telegraphic apparatus (HTS 8517.82.00.80). No effect on duties and fees.
Total value overstated by $27,654. No effect on duties and fees. Unit value changed.
Quantity understated by 135 units. No effect on duties and fees. Unit value changed.
Entry should have been made under another category covering other parts of telegraphic apparatus (HTS 8517.90.80.00). Quantity and value overstated by 200 units and $42,100. No effect on duties and fees. Unit value changed.
Entry should have been made under another category covering modems for automatic data processing machines (HTS 8517.40.10.00). No effect on duties and fees.
Quantity overstated by 1,056 units. No effect on duties and fees. Unit value changed.
Entry should have been made under another category covering parts for telegraphic terminal apparatus (HTS 8517.90.70.00). No effect on duties and fees.
Legend: N/A = Not applicable.

HYPODERMIC SYRINGES
UNIT OF MEASUREMENT: Each unit
DUTY: The duty ranges from free to 60 percent of value, depending on the country.
DESCRIPTION: A hypodermic syringe is an instrument used in medical, surgical, dental, or veterinary procedures to inject fluids.
This particular HTS is for hypodermic syringes (with or without needle), which are used for medical purposes.
[The detailed tables of quantities and unit values are not reproduced here; importer names deleted to avoid identification with trade-sensitive data.]
Transaction analysis:
Entry should have been made under other instruments and appliances (HTS 9018.19.80.60).
Quantity understated by 19 units. Duty overpaid by $146.37. No effect on fees. Unit value changed.
Quantity understated by 319,200 units. No effect on duties and fees. Unit value changed.
Quantity understated by 3,009,096 units. No effect on duties and fees. Unit value changed.
Quantity overstated by 540,000 units. No effect on duties and fees. Unit value changed.
Legend: N/A = Not applicable.

RAW CANE SUGAR
UNIT OF MEASUREMENT: Kilogram
QUOTA: Sugar is under a tariff rate quota and only those countries with a quota can export sugar to the United States. The United States imposes a quantitative sugar quota on over 50 countries, and imports in excess of the quota are subject to a higher duty.
DUTY: The regular duty for this type of sugar ranges from free to $0.043817 per kilogram, depending on the country. Imports in excess of the quota are subject to a duty of $0.37386 per kilogram. In addition, sugar imports are subject to a sugar fee of $0.022 per kilogram.
DESCRIPTION: This category includes raw cane sugar, which is in solid form and (1) contains no added flavoring or coloring matter; (2) has a dry-state sucrose content that, by weight, corresponds to a polarity reading of less than 99.5 degrees; and (3) is not to be further refined or improved in quality. This is a relatively small and narrow category of sugar, falling between the still-to-be processed raw sugar traded on the world market and the highly refined sugars commonly available for general use as a sweetener.
[The detailed tables of quantities and unit values are not reproduced here; importer names deleted to avoid identification with trade-sensitive data.]
Transaction analysis:
Quantity overstated by 16 kilograms. No effect on duties and fees.
Total value overstated by $13,077. No effect on duties and fees. Unit value changed.
Quantity overstated by 668 kilograms. No effect on duties and fees. Unit value changed.
Entry should have been made under another category of cane sugar (HTS 1701.99.01.35). No effect on duties and fees.
Entry should have been made under another category of cane sugar (HTS 1701.99.01.35). No effect on duties and fees.
Entry should have been made under another category of cane sugar (HTS 1701.99.01.35). No effect on duties and fees.
Legend: N/A = Not applicable.
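The tariff rate quota arithmetic described above can be made concrete with a short sketch. The rates below are the ones quoted in this appendix; the shipment size, remaining quota balance, and use of the maximum regular rate are hypothetical inputs invented for illustration, since the regular rate actually depends on the exporting country.

    # Illustrative tariff rate quota arithmetic for raw cane sugar, using the
    # rates quoted above. Shipment size and quota balance are hypothetical.

    REGULAR_RATE_MAX = 0.043817  # dollars per kilogram, upper end of regular duty
    OVER_QUOTA_RATE = 0.37386    # dollars per kilogram for imports above the quota
    SUGAR_FEE = 0.022            # dollars per kilogram on all sugar imports

    def sugar_duty_and_fee(kilograms, quota_remaining, regular_rate=REGULAR_RATE_MAX):
        """Split a shipment into within-quota and over-quota portions, then
        compute the total duty and the sugar fee."""
        within = min(kilograms, quota_remaining)
        over = kilograms - within
        duty = within * regular_rate + over * OVER_QUOTA_RATE
        fee = kilograms * SUGAR_FEE
        return duty, fee

    # Hypothetical 100,000 kilogram shipment with 60,000 kilograms of quota left:
    # duty = 60,000 x 0.043817 + 40,000 x 0.37386 = $2,629.02 + $14,954.40 = $17,583.42
    # fee  = 100,000 x 0.022 = $2,200.00
    duty, fee = sugar_duty_and_fee(100_000, 60_000)
    print(f"duty = ${duty:,.2f}, fee = ${fee:,.2f}")

The sketch also illustrates why quantity errors matter more for quota commodities: the over-quota rate is more than eight times the maximum regular rate, so a misstated quantity can shift part of a shipment across the quota line.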
WOOD DOWEL RODS
UNIT OF MEASUREMENT: Meter
DUTY: The duty ranges from free to 5 percent of the value, depending on the country.
DESCRIPTION: Wood dowel rods are round pieces of wood of various lengths and diameters. They have many uses, such as in the manufacturing of furniture, mop and broom handles, and coat racks.
[The detailed tables of quantities and unit values are not reproduced here; importer names deleted to avoid identification with trade-sensitive data.]
Transaction analysis:
Quantity understated by 4,616 meters. No effect on duties and fees. Unit value changed.
Quantity and value overstated by 200 meters and $1,270, respectively. No effect on duties; fees overpaid by $2.15.
Quantity overstated by 1,027,783 meters. No effect on duties and fees. Unit value changed.
Quantity overstated by 2,483,425 meters. No effect on duties and fees. Unit value changed.
Legend: N/A = Not applicable.

TIRE CORD FABRIC
UNIT OF MEASUREMENT: Kilogram
DUTY: The duty ranges from 0.7 to 25 percent of value, depending on the country.
DESCRIPTION: Tire cord fabric is a strong, heat-resistant material that is used to manufacture tires. The fabric has a high level of tenacity.
[The detailed tables of quantities and unit values are not reproduced here; importer names deleted to avoid identification with trade-sensitive data.]
Transaction analysis:
Entry should have been made under another category of tire cord. Effect on duties unknown because correct HTS is unknown. No effect on fees.
Quantity overstated by 1,832 kilograms. Entry should have been made under polyurethane impregnated textile fabric (HTS 5903.20.25.00). No effect on duties and fees. Unit value changed.
Quantity overstated by 296 kilograms. No effect on duties and fees. Unit value changed.
Quantity overstated by 317 kilograms. No effect on duties and fees. Unit value changed.
Quantity overstated by 297 kilograms. No effect on duties and fees. Unit value changed.
Quantity and value overstated by 1,269 kilograms and $264, respectively. Duties overpaid by $10.30 and fees overpaid by $0.18. Unit value changed.
Quantity overstated by 144 kilograms. Entry should have been made under another category of tire cord. Effect on duties unknown because correct HTS is unknown. No effect on fees. Unit value changed.
Legend: N/A = Not applicable.

UNSWEETENED COCOA
UNIT OF MEASUREMENT: Kilogram
DUTY: The duty ranges from free to $0.066 per kilogram, depending on the country.
DESCRIPTION: This category covers cocoa powder that contains no added sugar or other sweetening matter. It does not include similar commodities, such as cocoa butter, paste, or chocolate preparations.
[The detailed tables of quantities and unit values are not reproduced here; importer names deleted to avoid identification with trade-sensitive data.]
Transaction analysis:
Entry should have been made under another category covering chocolate and other food preparations containing cocoa (HTS 1806.20.80.60). No effect on duties and fees.
Quantity understated by 10,100 kilograms. No effect on duties and fees. Unit value changed.
Entry should have been made under another category of cocoa (HTS 1803.20.00.00); also, quantity understated by 97,200 kilograms. No effect on duties and fees. Unit value changed.
Quantity and total value overstated by 20,123 kilograms and $22,210, respectively. No effect on duties and fees. Unit value changed.
Quantity and total value overstated by 250 kilograms and $152, respectively. Duty overpaid by $1.55 and fees overpaid by $0.26.
Entry should have been made under another category of cocoa (HTS 1803.20.00.00). No effect on duties and fees.
Quantity understated by 8,164 kilograms. No effect on duties and fees. Unit value changed.
Legend: N/A = Not applicable.

Frankie L. Fulton, Evaluator-in-Charge
Paul W. Rhodes, Senior Evaluator
Cheri Y. White, Evaluator
Paul R. Clift, Computer Analyst
Pursuant to a congressional request, GAO reviewed the importation of eight classifications of commodities in fiscal year 1992, focusing on why unit values for identical types of imported commodities varied. GAO found that: (1) unit values for identical imports varied widely because of overly broad commodity classifications and product misclassifications; (2) although Customs used Census Bureau-developed parameters to screen filers' entries and detect possible unit value errors that could adversely affect the quality of trade data, most of the parameters were so broad that Customs only detected errors involving extremely high or low unit values; (3) the errors that were noted had little effect on quotas, duties, and fees for the 80 transactions reviewed; (4) although generalizations about the overall level of errors in the Import Detailed Data Base could not be made, the high number of errors found indicated the need to improve Automated Commercial System (ACS) data accuracy; (5) a high error rate could threaten the accuracy of U.S. trade statistics and Customs' ability to screen transactions for errors or illegal activities; and (6) adding narrower unit value ranges to ACS would allow filers to identify and correct more errors during data entry.
Military ranges and training areas are used primarily to test weapon systems and train military forces. Required facilities include air ranges for air-to-air, air-to-ground, drop zone, and electronic combat training; live-fire ranges for artillery, armor, small arms, and munitions training; ground maneuver ranges to conduct realistic force-on-force and live-fire training at various unit levels; and sea ranges to conduct ship maneuvers for training. According to DOD officials, there has been a slow but steady increase in encroachment issues that have limited the use of training facilities, and the gradual accumulation of these issues increasingly threatens training readiness. DOD has identified eight such encroachment issues:

Designation of critical habitat under the Endangered Species Act of 1973. Under the Act, agencies are required to ensure that their actions do not destroy or adversely modify habitat that has been designated for endangered or threatened species. Currently, over 300 such species are found on military installations. In 1994, under the previous administration, 14 agencies signed a federal memorandum of understanding for implementing the Endangered Species Act. The agencies agreed to establish or use existing regional interagency working groups to identify geographic areas within which the groups would coordinate agency actions and overcome barriers to conserve endangered species and their ecosystems. Such cooperative management could help DOD share the burden of land use restrictions on military installations that are caused by encroachment issues, but implementation of this approach has been limited. We are currently reviewing this issue.

Application of environmental statutes to military munitions. DOD believes that the Environmental Protection Agency could apply environmental statutes to the use of military munitions, shutting down or disrupting military training. According to DOD officials, uncertainties about future application and enforcement of these statutes limit their ability to plan, program, and budget for compliance requirements.

Competition for radio frequency spectrum. The telecommunications industry is pressing for the reallocation of some of the radio frequency spectrum from DOD to commercial control. DOD reports that, over the past decade, it has lost about 27 percent of the frequency spectrum allocated for aircraft telemetry. We previously reported that additional allocation of spectrum could affect space systems, tactical communications, and combat training.

Marine regulatory laws that require consultation with regulators when a proposed action may affect a protected resource. Defense officials say that the process empowers regulators to impose potentially stringent measures to protect the environment from the effects of proposed training in marine environments.

Competition for airspace. Increased airspace congestion limits the ability of pilots to train as they would fly in combat.

Clean Air Act requirements for air quality. DOD officials believe the Act requires controls over emissions generated on Defense installations. New or significant changes in range operations also require emissions analyses, and if emissions exceed specified thresholds, they must be offset with reductions elsewhere.

Laws and regulations mandating noise abatement. DOD officials stated that weapon systems are exempt from the Noise Control Act of 1972, but DOD must assess noise impact under the National Environmental Policy Act.
As community developments have expanded closer to military installations, concerns over noise from military operations have increased.

Urban growth. DOD says that unplanned or "incompatible" commercial or residential development near training ranges compromises the effectiveness of training activities. Local residents have filed lawsuits charging that military operations lowered the value or limited the use of their property.

To the extent that encroachment adversely affects training readiness, opportunities exist for the problems to be reported in departmental and military service readiness reports. The Global Status of Resources and Training System is the primary means units use to compare readiness against designed operational goals. The system's database indicates, at selected points in time, the extent to which units possess the required resources and training to undertake their wartime missions. In addition, DOD is required under 10 U.S.C. 117 to prepare quarterly readiness reports to Congress. The reports are based on briefings to the Senior Readiness Oversight Council, a forum assisted by the Defense Test and Training Steering Group. In June 2000, the council directed the steering group to investigate encroachment issues and develop a comprehensive plan of action.

The secretaries of the military services are responsible for training personnel and for maintaining their respective training ranges and facilities. Within the Office of the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness develops policies, plans, and programs to ensure the readiness of the force and provides oversight on training; the Deputy Under Secretary of Defense for Installations and Environment develops policies, plans, and programs for DOD's environmental, safety, and occupational health programs, including compliance with environmental laws, conservation of natural and cultural resources, pollution prevention, and explosive safety; and the Director, Operational Test and Evaluation, provides advice on tests and evaluations.

On the basis of what we have seen, the impact of encroachment on training ranges has gradually increased over time, reducing some training capabilities. Because most encroachment problems are caused by urban development and population growth, these problems are expected to increase in the future. Although the effects vary by service and by individual installation, encroachment has generally limited the extent to which training ranges are available or the types of training that can be conducted. This limits units' ability to train as they expect to fight and causes workarounds that may limit the amount or quality of training. Installations overseas all reported facing similar training constraints. Some of the problems reported by installations we visited last year were related to urban growth, radio frequency spectrum interference, air quality, noise, airspace, and endangered species habitat. For example, in response to local complaints, Fort Lewis, Washington, voluntarily ceased some demolitions training. Eglin Air Force Base, Florida, officials reported that the base's major target control system received radio frequency spectrum interference from nearby commercial operators. Nellis Air Force Base, Nevada, officials reported that urban growth near the base and related safety concerns had restricted flight patterns of armed aircraft, causing mission delays and cancellations.
They also reported that they receive approximately 250 noise complaints each year. About 10 percent of Marine Corps Base Camp Pendleton, California, had been designated as critical habitat for endangered species. Atlantic Fleet officials reported encroachment problems stemming from endangered marine mammals and noise. They said that the fleet’s live-fire exercises at sea were restricted and night live-fire training was not allowed. More recently, in January 2003, DOD’s Special Operations Command reported that its units encounter a number of obstacles when scheduling or using training ranges. According to the report, the presence of endangered species and marine mammals on or near ranges results in restrictions on training for at least part of the year—closing the area to training, prohibiting live fire, or requiring modified operations. For example, a variety of endangered species live on the training areas of the Navy Special Warfare Command in California, particularly on Coronado and San Clemente islands. Because of environmental restrictions, Navy Special Warfare units report that they can no longer practice immediate action drills on Coronado beaches, they cannot use training areas in Coronado for combat swimmer training, and they cannot conduct live-fire and maneuver exercises on much of San Clemente Island during some seasons. In addition, the Special Operations Command owns no training ranges of its own and largely depends on others for the use of their training ranges. As a result, command officials advised us that they must train under operational and scheduling restrictions imposed by their host commands. For example, the command normally trains at night, but because range management personnel are often unavailable at night, such training is frequently prevented. Also, on many ranges, the command reported that priority is given to larger units over special operations units, causing the command to postpone or cancel training. According to the report, ranges are also inadequately funded for construction, maintenance, repairs, and upgrades. As a result, some commanders use their own funds to prevent the ranges from becoming dangerous or unusable. The Special Operations Command, while expressing concern for the future, reported that none of the eight encroachment issues identified by DOD had yet stopped military training, due mostly to the creativity and flexibility of its commanders and noncommissioned officers. In general, when obstacles threaten training, the unit will find a workaround to accomplish the training. In some instances, the unit may travel to another training facility, costing additional money for transportation and potentially requiring an extended stay at the training site. By sending units away to train, the command limits its ability to send people on future travel for training or missions because of efforts to control the number of days per year that servicemembers are deployed away from home. Other workarounds consist of commands using different equipment, such as plastic-tipped bullets; changing maneuvering, firing, and training methods to overcome training obstacles; and using facilities that need repair. According to the Special Operations Command, all of these workarounds expend more funds and manpower to accomplish its training mission. DOD and military service officials said that many encroachment issues are related to urban growth around military installations.
They noted that most, if not all, encroachment issues result from urban and population growth and that growth around DOD installations is increasing at a rate higher than the national average. Figure 1 illustrates the increase in urban growth encroachment near Fort Benning, Georgia, while the fort has remained relatively unchanged. According to DOD officials, new residents near installations often view military activities as an infringement on their rights, and some groups have organized in efforts to reduce operations such as aircraft and munitions training. At the same time, according to Defense officials, the increased speed and range of weapon systems are expected to increase training range requirements. Despite the loss of some training range capabilities, service readiness data did not show the impact of encroachment on training readiness. However, DOD’s January 2003 quarterly report to Congress did tie an Air Force training issue directly to encroachment. Even though DOD officials, in testimony and on many other occasions, have repeatedly cited encroachment as preventing the services from training to standards, DOD’s primary readiness reporting system did not reflect the extent to which encroachment was a problem. In fact, it rarely cited training range limitations at all. Similarly, DOD’s quarterly reports to Congress, which should identify specific readiness problems, hardly ever mentioned encroachment as a problem. This is not surprising to us because we have long reported on limitations in DOD’s readiness reporting system and the need for improvements; our most recent report was issued just last week. Furthermore, on the basis of our prior reports on readiness issues and our examination of encroachment, we do not believe the absence of data in these reports concerning encroachment should be viewed simply as “no data, no problem.” Rather, as with other readiness issues we have examined over time, it suggests a lack of attention on the part of DOD in fully assessing and reporting on the magnitude of the encroachment problem. However, DOD’s most recent quarterly report did indicate a training issue tied directly to encroachment. The January 2003 Institutional Training Readiness Report showed that the Air Force rated itself C-2 for institutional flight training, indicating that it is experiencing some deficiencies with limited impact on its capability to perform required institutional training. The Air Force attributed this rating to training range availability and encroachment, combined with environmental concerns, that are placing increasing pressure on its ability to provide effective and realistic training. The Air Force also reported that sortie cancellations are becoming more common and may soon adversely affect the quality of training. For example, the spotting of a Sonoran Pronghorn on the Barry M. Goldwater Range forces immediate cancellation or relocation of scheduled missions. Readiness reporting can and should be improved to address the extent of training degradation due to encroachment and other factors. However, it will be difficult for DOD to fully assess the impact of encroachment on its training capabilities and readiness without (1) obtaining more complete information on both training range requirements and the assets available to support those requirements and (2) considering to what extent other complementary forms of training may help mitigate some of the adverse impacts of encroachment.
The information is needed to establish a baseline for measuring losses or shortfalls. We previously reported that the services did not have complete inventories of their training ranges and that they do not routinely share available inventory data with each other (or with other organizations such as the Special Operations Command). DOD officials acknowledge the potential usefulness of such data and have some efforts underway to develop them. However, because there is no complete directory of DOD-wide training areas, commanders sometimes learn about capabilities available on other military bases by chance. All this makes it extremely difficult for the services to leverage assets that may be available in nearby locations, increasing the risk of inefficiencies, lost time and opportunities, delays, added costs, and reduced training opportunities. Although the services have shared training ranges, these arrangements are generally made through individual initiative, not through a formal or organized process that easily and quickly identifies all available infrastructure. Last year, for example, our report on encroachment noted that Navy special operations forces recently learned that some ranges at the Army’s Aberdeen Proving Ground in Maryland are accessible from the water—a capability that is a key requirement for Navy team training. Given DOD’s increasing emphasis on joint capabilities and operations, an inventory of defense-wide training assets would seem to be a logical step toward a more complete assessment of training range capabilities and shortfalls that may need to be addressed. This issue was recently reinforced by the Special Operations Command’s January 2003 range report, which found that none of the services had joint databases or management tools combining all training ranges into a single tool accessible to all commands. The command concluded that such a centralized database would contribute to improving unit readiness and mission success for all components. At the same time, we cannot be sure of the extent to which recent military operations in the Middle East could affect future training requirements; DOD will need to assess lessons learned from these operations. Each service has, to varying degrees, assessed its training range requirements and limitations due to encroachment. For example, the Marine Corps has completed one of the more detailed assessments of the degree to which encroachment has affected the training capability of Camp Pendleton, California. The assessment determined to what extent Camp Pendleton could support the training requirements of two unit types and two specialties by identifying the tasks that could be conducted to standards in a “continuous” operating scenario (e.g., an amphibious assault and movement to an objective) or in a fragmented manner (tasks completed anywhere on the camp). The analysis found that from 60 to 69 percent of continuous tasks and from 75 to 92 percent of the other training tasks could be conducted to standards. Among the tasks that could not be conducted to standards were the construction of mortar- and artillery-firing positions outside of designated areas, the cutting of foliage to camouflage positions, and terrain marches. Marine Corps officials said they might expand the effort to other installations.
At the same time, the Air Force has funded a study at Shaw Air Force Base, South Carolina, that focuses on airspace requirements, and the Center for Naval Analyses is reviewing encroachment issues at Naval Air Station Fallon, Nevada. We have not had an opportunity to review the progress or the results of these efforts. In its 2003 range study report, the Special Operations Command compiled a database identifying the training ranges it uses, the types of training conducted, and the restrictions on training. In its study, the command recommended that a joint training range database be produced and made available throughout DOD so that all training ranges, regardless of service ownership, may be efficiently scheduled and used. While recent efforts show increased activity on the part of the services to assess their training requirements, they do not yet represent a comprehensive assessment of the impacts of encroachment. We have also previously reported that the services have not incorporated an assessment of the extent to which other types of complementary training could help offset shortfalls. We believe these assessments, based solely on live training, may overstate an installation’s problems and do not provide a complete basis for assessing training range needs. A more complete assessment of training resources should include assessing the potential for using virtual or constructive simulation technology to augment live training. However, based on our prior work, I must emphasize, Mr. Chairman, that these types of complementary training cannot replace live training and cannot fully eliminate the impact of encroachment, though they may help mitigate some training range limitations. In addition, while some service officials have reported increasing costs because of workarounds related to encroachment, the services’ data systems do not capture these costs in any comprehensive manner. In its January 2003 report, the Special Operations Command noted that the services lacked a metrics-based reporting system to document the impact of encroachment or track the cost of workarounds in either manpower or funds. We noted last year that DOD’s overall environmental conservation funding, which also covers endangered species management, had fluctuated, with an overall drop (except for the Army) in obligations since 1999. If the services are indeed conducting more environmental assessments or impact analyses as a result of encroachment, the additional costs should be reflected in their environmental conservation program obligations. DOD has made some progress in addressing individual encroachment issues, including individual action plans and legislative proposals. But more will be required to put in place a comprehensive plan that clearly identifies steps to be taken, goals and milestones to track progress, and required funding. Senior DOD officials recognized the need to develop a comprehensive plan to address encroachment issues back in November 2000, but efforts to do so are still evolving. To their credit, DOD and the services are increasingly recognizing range issues and initiating steps to examine them more comprehensively and less piecemeal. Recent efforts began in 2000, when a working group of subject matter experts was tasked with drafting action plans for addressing the eight encroachment issues. The draft plans include an overview and analysis of the issues and current actions being taken, as well as short-, mid-, and long-term strategies and actions to address the issues.
Some of the short-term actions implemented include the following:
- DOD has finalized, and the services are implementing, a Munitions Action Plan—an overall strategy for addressing the life-cycle management of munitions that provides a road map to help DOD meet the challenges of sustaining its ranges.
- DOD formed a Policy Board on Federal Aviation Principles to review the scope and progress of DOD activities and to develop the guidance and process for special use airspace.
- DOD formed a Clean Air Act Services’ Steering Committee to review emerging regulations and to work with the Environmental Protection Agency and the Office of Management and Budget to protect DOD’s ability to train.
- DOD implemented an Air Installation Compatible Use Zone Program to assist communities in considering aircraft noise and safety issues in their land use planning.

Future strategies and actions identified in the draft plans addressing the eight encroachment issues include the following:
- Enhancing outreach efforts to build and maintain effective working relationships with key stakeholders by making them aware of DOD’s need for training ranges, its need to maintain readiness, and its need to build public support for sustaining training ranges.
- Developing assessment criteria to determine the cumulative effect of all encroachment restrictions on training capabilities and readiness. The draft plan noted that while many examples of endangered species/critical habitat and land use restrictions are known, a programmatic assessment of the effect these restrictions have on training readiness has never been done.
- Ensuring that any future base realignment and closure decisions thoroughly scrutinize and consider the potential encroachment impact and restrictions on operations and training of recommended base realignment actions.
- Improving coordinated and collaborative efforts between base officials and city planners and other local officials in managing urban growth.

In December 2001, the Deputy Secretary of Defense established a senior-level Integrated Product Team to act as the coordinating body for encroachment efforts and to develop a comprehensive set of legislative and regulatory proposals by January 2002. The team agreed on a set of possible legislative proposals for clarifying some encroachment issues. After internal coordination and deliberation, the proposals were submitted to Congress in late April 2002 for consideration. According to DOD, the legislative proposals sought to “clarify” the relationship between military training and a number of provisions in various conservation and compliance statutes, including the Endangered Species Act, the Migratory Bird Treaty Act, the Marine Mammal Protection Act, and the Clean Air Act. DOD’s proposals would, among other things, do the following:
- Preclude designation under the Endangered Species Act of critical habitat on military lands for which Sikes Act Integrated Natural Resources Management Plans have been completed. At the same time, the Endangered Species Act requirement for consultation between DOD and other agencies on natural resource management issues would remain.
- Permit DOD to “take” migratory birds under the Migratory Bird Treaty Act without action by the Secretary of the Interior, where the taking would be in connection with readiness activities, and require DOD to minimize the taking of migratory birds to the extent practicable without diminishing military training or other capabilities, as determined by DOD.
- Modify the definition of “harassment” under the Marine Mammal Protection Act as it applies to military readiness activities.
- Modify the conformity provisions of the Clean Air Act. The proposal would maintain the department’s obligation to conform military readiness activities to applicable state implementation plans but would give DOD 3 years to demonstrate conformity. In the meantime, DOD could continue military readiness activities.
- Change the definition of solid waste under the Solid Waste Disposal Act to generally exclude explosives, unexploded ordnance, munitions, munition fragments, or constituents when they are used in military training, research, development, testing, and evaluation; when not removed from an operational range; when promptly removed from an off-range location; or when recovered, collected, and destroyed on operational ranges. Solid waste would not include buried unexploded ordnance when burial was not a result of product use.

Of these proposals, Congress passed, as part of the fiscal year 2003 defense authorization legislation, a provision related to the Migratory Bird Treaty Act. Under that provision, until the Secretary of the Interior prescribes regulations to exempt the armed forces from incidental takings of migratory birds during military readiness activities, the protections provided for migratory birds under the Act do not apply to such incidental takings. In addition, Congress authorized DOD to enter into agreements to purchase property or property interests for natural resource conservation purposes, such as creating buffer zones near installations to limit encroachment pressures such as urban growth. In February 2003, DOD submitted to Congress the Readiness and Range Preparedness Initiative for fiscal year 2004. In it, the department restates a number of legislative proposals from 2002 and includes a proposal concerning the Marine Mammal Protection Act. In the 2004 initiative, the department seeks to reconcile military readiness activities with the Marine Mammal Protection Act by adding language to sections of title 16 of the U.S. Code. We are aware that consideration of these legislative proposals affecting existing environmental legislation will need to include potential trade-offs among multiple policy objectives and issues on which we have not taken a position. At the same time, we understand that DOD recently asked the services to develop procedures for invoking the national security exceptions under a number of environmental laws. Historically, DOD and the services have been reluctant to seek such exceptions, and we are aware of only a couple of instances where this has been done. Our two reports last year both recommended that DOD develop reports that accurately capture the causes of training shortfalls and objectively report units’ ability to meet their training requirements. At the time we completed our reviews in 2002, DOD’s draft action plans for addressing the eight encroachment issues had not been finalized. DOD officials told us that they consider the plans to be working documents and stressed that many concepts remain under review and may be dropped, altered, or deferred, while other proposals may be added. No details were available on overall actions planned, clear assignments of responsibility, measurable goals and time frames for accomplishing planned actions, or funding requirements—information that would be needed in a comprehensive plan.
Our report on stateside encroachment problems also recommended that DOD develop and maintain a full and complete inventory of service and department-wide training infrastructure; consider more alternatives to live training; and ensure that the plan for addressing encroachment includes goals, timelines, responsibilities, and projected costs. Our recently issued report on overseas training similarly recommended that DOD develop reports that accurately capture the causes of training shortfalls and objectively report units’ ability to meet their training requirements. Following our reports, DOD issued a range sustainment directive to establish policy and assign responsibilities for the sustainment of test and training ranges, and the Special Operations Command developed a database identifying the training ranges it uses, the types of training conducted, and the restrictions on training. In addition, DOD is working with other federal regulatory agencies to manage the way in which laws are enforced and plans to issue four more directives covering outreach, range clearance, community noise, and the Air Installation Compatible Use Zone Program.
Pursuant to the Homeland Security Act of 2002, as amended, DHS has responsibility for the protection of the nation’s critical infrastructure. Within DHS, the Office of Infrastructure Protection is responsible for critical infrastructure protection and resilience and leads the coordinated national effort to mitigate risk to the nation’s critical infrastructure, which includes working with public and private sector infrastructure partners. The Office of Infrastructure Protection also has overall responsibility for coordinating implementation of the NIPP across the 18 critical infrastructure sectors; overseeing the development of Sector-Specific Plans; providing training and planning guidance to SSAs and asset owners and operators on protective measures to assist in enhancing the security of infrastructure within their control; and helping state, local, tribal, territorial, and private sector partners develop the capabilities to mitigate vulnerabilities and identifiable risks to their assets. Within the Office of Infrastructure Protection, IASD manages the NCIPP. According to DHS, the main goals of the NCIPP are to (1) identify the infrastructure that, if disrupted or destroyed, could significantly affect the nation’s public health and safety, economy, or national security; (2) increase the accuracy of infrastructure prioritization efforts used to inform DHS resource allocation decisions; and (3) focus planning, foster coordination, and support preparedness efforts for incident management, response, and restoration activities among federal, state, and private sector partners. Critical infrastructure identified through the program includes several thousand level 1 or level 2 assets and systems. The levels are used to enhance decision making related to infrastructure protection and can include a range of businesses or assets in a local geographic area, such as refineries, water treatment plants, or commercial facilities, as well as the information and data systems that ensure their continued operation. Consistent with the generally voluntary critical infrastructure protection approach identified in the NIPP, according to DHS, the success of the NCIPP relies on the voluntary contributions and cooperation of public and private sector partners from the infrastructure protection community. To compile the NCIPP list, consistent with statutory requirements, IASD conducts a voluntary annual data call to solicit nominations to the list from state homeland security and federal partners. To submit nominations, partners are to develop realistic scenarios for infrastructure that meet specific criteria developed by IASD. Consistent with the consequence categories identified in the NIPP risk management framework, NCIPP nominations are to meet minimum specified consequence thresholds, outlined in the annual data call, for at least two of the following four categories: fatalities, economic loss, mass evacuation length, and degradation of national security. After nominations are submitted, according to DHS guidance, IASD conducts a multiphase adjudication process intended to give state and federal partners the opportunity to review IASD’s preliminary decisions and submit additional information to support nominations that were not initially accepted, before IASD finalizes the NCIPP list. The NCIPP list is used to establish risk management priorities.
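To make the nomination test concrete, the following sketch in Python implements the "two of four" rule described above. The threshold values are placeholders, since the actual level 1 and level 2 thresholds in the annual data call are not given in this report, and treating degradation of national security as a simple yes/no flag is our simplifying assumption.

    # Hypothetical thresholds for the four NCIPP consequence categories.
    # The real values are set in the annual data call and are not public here.
    LEVEL_1_THRESHOLDS = {"fatalities": 5000, "economic_loss": 25e9,
                          "evacuation_days": 30, "national_security": 1}
    LEVEL_2_THRESHOLDS = {"fatalities": 500, "economic_loss": 2.5e9,
                          "evacuation_days": 7, "national_security": 1}

    def categories_met(consequences, thresholds):
        """Return the categories whose estimated consequences meet the floor."""
        return {c for c, floor in thresholds.items()
                if consequences.get(c, 0) >= floor}

    def classify_nomination(consequences):
        """Classify a nominated scenario as 'level 1', 'level 2', or None.

        A nomination must meet at least two of the four consequence
        thresholds; which threshold set it clears determines its level.
        """
        if len(categories_met(consequences, LEVEL_1_THRESHOLDS)) >= 2:
            return "level 1"
        if len(categories_met(consequences, LEVEL_2_THRESHOLDS)) >= 2:
            return "level 2"
        return None

    # A scenario with moderate fatalities and economic loss clears two
    # level 2 floors but no two level 1 floors, so it lists at level 2.
    print(classify_nomination({"fatalities": 800, "economic_loss": 3e9}))

Under this reading, an asset that clears two level 1 floors is the more nationally critical case, which matches the observation below that the level 1 thresholds are the higher ones.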
According to the NIPP, prioritizing risk management efforts provides the basis for understanding potential risk mitigation benefits that are used to inform planning and resource decisions. The NCIPP list, which identifies nationally significant critical infrastructure based on consequences, informs the NIPP risk management prioritization process. That process involves analyzing risk assessment results to determine which critical infrastructure faces the highest risk so that management priorities can be established. The NCIPP list is also used to, among other things:

- Allocate Homeland Security Grants. Within DHS, FEMA uses the number of assets included on the NCIPP list, among other data, in its risk formula for allocating State Homeland Security Program (SHSP) and UASI grant funds. The SHSP and UASI provide funding to states and cities, respectively, to support a range of preparedness activities to prevent, protect against, respond to, and recover from acts of terrorism and other catastrophic events. While the number of critical infrastructure assets a state or city has on the NCIPP list is used to determine the allocation of SHSP and UASI grant funds, there is no requirement that states or cities use these grant funds to enhance the protection of those assets. For fiscal year 2012, FEMA allocated $294 million in SHSP funding to all 50 states, the District of Columbia, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. Additionally, in fiscal year 2012, FEMA allocated approximately $490 million in UASI funding to the nation’s 31 highest-risk cities.
- Prioritize Voluntary Critical Infrastructure Protection Programs. The Office of Infrastructure Protection’s Protective Security Coordination Division (PSCD) uses the NCIPP list and other inputs to prioritize its efforts to work with critical infrastructure owners and operators and state and local responders to (1) assess vulnerabilities, interdependencies, capabilities, and incident consequences, and (2) develop, implement, and provide national coordination for protective programs. Related to these efforts, PSCD has deployed the aforementioned PSAs in 50 states and Puerto Rico to locations based on population density and major concentrations of critical infrastructure. PSAs use the NCIPP list to prioritize outreach to level 1 and level 2 assets in their areas of jurisdiction for participation in DHS’s voluntary security survey and vulnerability assessment programs, such as the Enhanced Critical Infrastructure Protection and Site Assistance Visit programs. PSAs are also often called upon by state homeland security advisers to assist them in nominating assets to the NCIPP list.
- Inform Incident Management Planning and Response Efforts. DHS uses information collected during the NCIPP process and the NCIPP list to inform and prioritize incident management planning and response efforts. When an incident occurs, DHS officials pull information from a variety of sources, including the database of assets nominated to and accepted on the NCIPP list, to identify critical infrastructure in the affected area. IASD then prioritizes this information in an infrastructure of concern list to guide incident response efforts. The infrastructure of concern list includes any critical infrastructure affected by the event, which may include level 1 or level 2 assets. The list is provided to DHS components, including FEMA and PSAs, who use it on the ground to guide local incident response efforts.
DHS has made several changes to its criteria for including assets on the NCIPP list. These changes initially focused on introducing criteria to make the list entirely consequence based, with subsequent changes intended to introduce specialized criteria for some sectors and assets. DHS’s changes to the NCIPP criteria have changed the composition of the NCIPP list, which has had an impact on users of the list. However, DHS does not have a process to identify the impact of these changes on users, nor has it validated its approach for developing the list. DHS’s initial approach for developing the NCIPP list differed by asset level. According to the Homeland Security Act, as amended, DHS is required to establish and maintain a prioritized list of systems and assets that the Secretary determines would, if destroyed or disrupted, cause national or regional catastrophic effects. The criteria for level 1 assets focused on consequences—the effects of an adverse event. The criteria for level 2 assets focused generally on capacity—the number of people that use an asset or the output generated by an asset, such as the number of people that occupy a commercial office building, the daily ridership of a mass transit system, or the number of people served by a water utility. DHS officials told us that the level 1 consequence-based criteria and thresholds were initially established at the beginning of the program at the discretion of the Assistant Secretary for Infrastructure Protection, who sought to identify infrastructure whose destruction could be expected to cause impacts similar to those caused by the attacks of September 11 and Hurricane Katrina. In contrast, the initial level 2 criteria were generally capacity based in order to identify the most critical assets within each of the 18 sectors. However, the capacity-based criteria often differed by sector, making it difficult to compare criticality across sectors and therefore to identify the highest-priority critical infrastructure on a national level. In 2009, DHS changed the level 2 criteria to make the NCIPP list entirely consequence based, a change that brought its approach more into line with statutory requirements and, consistent with the NIPP risk management framework, allowed for comparison across sectors. The new level 2 criteria match the level 1 consequence-based criteria—fatalities, economic loss, mass evacuation length, or national security impacts—but with lower threshold levels than those used to identify level 1 assets. To be included on the NCIPP list, an asset must meet at least two of the four consequence thresholds and is included on the list as either level 1 or level 2 depending on which consequence thresholds it meets. As figure 1 shows, the level 1 thresholds are higher than the level 2 thresholds and therefore represent the most nationally critical assets. According to officials and agency documents, DHS changed the level 2 criteria to be consequence based for several reasons. First, NCIPP program officials stated that they changed the criteria to align the list with statutory requirements. Specifically, DHS interpreted the statute’s requirement that it identify assets that “would, if destroyed or disrupted, cause national or regional catastrophic effects” as a call for consequence-based criteria. In addition, analysis of assets prioritized using capacity-based criteria demonstrated that the initial level 2 criteria were not sufficient to fully identify assets capable of causing catastrophic events.
Second, program officials stated that they changed the criteria to allow for comparisons across sectors, which is consistent with the NIPP. The NIPP states that using a common approach with consistent assumptions and metrics increases the ability to make comparisons across sectors, different geographic regions, or different types of events. Third, DHS also changed the criteria to improve the utility of the list. According to the NCIPP guidance, prior to 2009, assets designated as level 2 on the list experienced instability—assets being added and removed from year to year—which frustrated efforts to use the list for risk management planning and engagement, while assets designated as level 1 on the list—which had always been consequence based—remained relatively stable year to year. DHS subsequently worked with sector partners to develop specialized criteria for the agriculture and food sector, reflecting scenarios such as an outbreak of foot-and-mouth disease. Foot-and-mouth disease (FMD) is a highly contagious viral disease of cloven-hoofed animals such as cattle, swine, and sheep. Infected animals develop a fever and blisters on their tongue and lips and between their hooves. Many animals recover from an FMD infection, but the disease leaves them debilitated and causes losses in meat and milk production. FMD does not have human health implications. According to the U.S. Department of Agriculture, a 2001 outbreak of FMD in the United Kingdom resulted in the slaughter of millions of animals and economic losses conservatively estimated at $14.7 billion. See GAO, Homeland Security: Actions Needed to Improve Response to Potential Terrorist Attacks and Natural Disasters Affecting Food and Agriculture, GAO-11-652 (Washington, D.C.: Aug. 19, 2011). DHS is currently reevaluating the agriculture and food sector-specific criteria because, according to officials, the specialized criteria created a great deal of inconsistency in the agriculture and food assets and systems included on the NCIPP list year to year. In 2010, DHS also made adjustments to the NCIPP criteria to account for high-risk assets that may not always meet the consequence criteria by introducing the Catastrophic Economic Impacts Project and the Threats to Infrastructure Initiative. Under the Catastrophic Economic Impacts Project, infrastructure that meets only the level 1 consequence threshold for economic impact, but no other criteria, is added to the list as a level 2 asset. DHS officials explained that the project was added to account for instances when economic impact may be the primary impact. For example, the officials noted that a collapse of the U.S. financial system would likely not cause a large number of prompt fatalities or evacuations but would nonetheless cause catastrophic national impacts. Meanwhile, the Threats to Infrastructure Initiative allows infrastructure that has received a specific, credible threat from a malicious actor, but that otherwise would not meet the NCIPP list criteria, to be added to the list as a level 2 asset. Unlike the other NCIPP criteria, the Threats to Infrastructure Initiative focuses on the threat to infrastructure rather than the consequences that may result from a specific event, which could complicate comparisons across assets and sectors. DHS officials told us that infrastructure facing specific and credible threats was always included on the NCIPP list but was historically added based on information from the intelligence community. The addition of the initiative allowed states to nominate critical infrastructure under the same scenario based on state and local intelligence information, such as that collected by fusion centers.
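Continuing the sketch above (and reusing its classify_nomination function and LEVEL_1_THRESHOLDS), the two 2010 adjustments can be expressed as fallback rules. This is our rendering of the logic as described, not DHS's actual adjudication code.

    def classify_with_2010_adjustments(consequences, credible_threat=False):
        base = classify_nomination(consequences)
        if base is not None:
            return base
        # Catastrophic Economic Impacts Project: meeting only the level 1
        # economic-impact threshold still places the asset on the list,
        # but as a level 2 asset.
        if consequences.get("economic_loss", 0) >= LEVEL_1_THRESHOLDS["economic_loss"]:
            return "level 2"
        # Threats to Infrastructure Initiative: a specific, credible threat
        # qualifies an asset as level 2 even when no consequence thresholds
        # are met.
        if credible_threat:
            return "level 2"
        return None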
According to DHS officials, they adjudicate Threats to Infrastructure Initiative nominations by determining whether the threat to an asset is specific and credible. As of fiscal year 2012, approximately 60 assets and systems had been added to the NCIPP list as a result of these new criteria. In 2009, DHS also changed the format of the NCIPP list by expanding the types of infrastructure that could be nominated to include clusters and systems of critical infrastructure, in an effort to characterize relationships among infrastructure, such as dependencies and interdependencies, consistent with the statute and the NIPP. According to the NCIPP guidance, clusters or systems of critical infrastructure are made up of two or more associated or interconnected assets or nodes that can be disrupted through a single event, resulting in regional or national consequences that meet the NCIPP criteria thresholds. An asset is a single facility with a fixed location that functions as a single entity (although it can contain multiple buildings or structures) and meets the NCIPP criteria by itself. A node is a single facility, similar to an asset, that does not meet the NCIPP criteria individually but does meet the criteria when grouped with other nodes or assets in a cluster or system. Figure 2 provides an illustration of an asset, a node, a cluster, and a system. Because nodes do not meet the NCIPP criteria on their own, they are not included on the NCIPP list but are identified on a separate list that is associated with the NCIPP list. For example, a group of nodes or assets making up a cluster would be listed on the NCIPP list under the name of the cluster, such as the ABC Cluster, but one would have to consult the associated list of nodes to identify the specific facilities that make up the listed cluster. The concept of clusters and systems is consistent with the statute and the NIPP risk management framework. The law states that the prioritized list of critical infrastructure shall contain both systems and assets included in the national asset database, and the NIPP states that, to the extent possible, risk assessments should assess the dependencies and interdependencies associated with each identified asset, system, or network. According to DHS officials, they recognized a need to identify clusters of critical infrastructure in 2008 after Hurricanes Gustav and Ike damaged a group of refineries, resulting in a nationally significant supply disruption of certain petrochemicals used across a wide range of industries. The changes DHS made to the NCIPP criteria in 2009 and 2010 changed the number of assets on, and the composition of, the NCIPP list. The total number of assets, clusters, and systems on the NCIPP list decreased from more than 3,000 in fiscal year 2009 to fewer than 2,000 in fiscal year 2011. The introduction of clusters and systems resulted in a separate list of thousands of nodes associated with the NCIPP list. Specifically, more than 2,500 facilities were included on the first nodes list in fiscal year 2011, and almost 4,000 facilities were included on the nodes list for fiscal year 2012. Figure 3 shows the relative changes in the number of assets, clusters, and systems on the NCIPP list and the associated nodes list for fiscal years 2007 through 2012.
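The asset, node, cluster, and system vocabulary introduced above maps onto a simple data model. The sketch below is illustrative only; the class and field names are ours, but the rule it encodes (the NCIPP list carries the cluster or system name, while member facilities that do not meet the criteria alone appear on the associated nodes list) follows the guidance as described.

    from dataclasses import dataclass, field

    @dataclass
    class Facility:
        """A single facility with a fixed location (an asset or a node)."""
        name: str
        meets_criteria_alone: bool  # True -> asset; False -> potential node

    @dataclass
    class Grouping:
        """A cluster or system: two or more associated or interconnected
        facilities that a single event could disrupt with consequences
        meeting the NCIPP thresholds."""
        name: str
        members: list = field(default_factory=list)  # Facility objects

    def ncipp_entry_and_nodes(grouping):
        """Return the NCIPP list entry (the grouping's name) and the member
        facilities that belong on the associated nodes list instead."""
        nodes = [f.name for f in grouping.members if not f.meets_criteria_alone]
        return grouping.name, nodes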
Additionally, the criteria changes resulted in a change in the distribution of assets, clusters, and systems included on the NCIPP list by sector. Figure 4 shows that, among other sectors, the agriculture and food and defense industrial base sectors experienced large increases as a percentage distribution of the list from fiscal years 2009 to 2011, while over the same period the energy and transportation sectors experienced large decreases. It also shows that the agriculture and food sector continued to increase as a percentage distribution of the list from fiscal years 2011 to 2012, while over the same period the chemical sector experienced a large decrease. Our analysis shows that changes to the NCIPP list can have an impact on users of the list, specifically FEMA’s allocation of UASI grant funds and PSAs’ ability to prioritize outreach and conduct site visits for protection programs. Our analysis of the FEMA risk formula shows that a change in the number of NCIPP-listed assets located in a city has an impact on the city’s relative risk score. Our analysis also shows that current UASI grant allocations are strongly associated with a city’s current relative risk score. Therefore, a change in the number of NCIPP-listed assets located in a city can have an impact on the level of grant funding it receives. In fiscal year 2012, FEMA allocated approximately $490 million in UASI grant funds to the 31 cities with the highest relative risk scores out of 102 eligible cities nationwide. Our analysis of FEMA’s risk formula showed that, at a minimum, if the number of level 2 assets were increased or decreased by as few as two for each city, it would change the relative risk score for 5 of the 31 cities that received fiscal year 2012 UASI grant funding. Such a change could result in increased or decreased grant funding allocations for the affected cities. The changes in relative risk scores tend to affect cities in the middle to the bottom of the top 31 list because there is generally a larger gap between the relative risk scores of cities at the top of the list than between those in the middle to bottom of the list. However, even a small change in grant funding could have an impact on a city, especially if that city does not traditionally receive other federal assistance as compared with cities with higher risk scores.
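FEMA's actual risk formula and weights are not given in this report, so the toy Python model below simply treats the NCIPP asset count as one weighted input among others; every number in it is invented. It illustrates the sensitivity described above: adding two level 2 assets to a city with a narrow score gap swaps ranks in the middle of the list, while the clear front-runner is unaffected.

    def relative_risk(population_index, threat_index, ncipp_assets,
                      w_pop=0.5, w_threat=0.3, w_assets=0.2):
        # Stand-in for FEMA's formula: a weighted sum of notional inputs.
        return (w_pop * population_index + w_threat * threat_index
                + w_assets * ncipp_assets)

    cities = {  # name: (population index, threat index, NCIPP asset count)
        "City A": (90, 80, 40),
        "City B": (55, 50, 12),
        "City C": (55, 50, 11),
    }

    def rank(scores):
        return sorted(scores, key=scores.get, reverse=True)

    base = {name: relative_risk(*vals) for name, vals in cities.items()}
    print(rank(base))        # ['City A', 'City B', 'City C']

    # Nominate two more level 2 assets in City C and rescore it.
    bumped = dict(base)
    p, t, a = cities["City C"]
    bumped["City C"] = relative_risk(p, t, a + 2)
    print(rank(bumped))      # ['City A', 'City C', 'City B'] -- mid-list swap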
We previously reported that changes to the NCIPP list have presented challenges to managing DHS programs, particularly the voluntary security survey and assessment programs managed by PSCD. In May 2012, we reported that PSCD was unable to track the extent to which it conducted security surveys and vulnerability assessments on NCIPP level 1 and level 2 assets because of (1) inconsistencies between the databases used to identify the high-priority assets and to identify surveys and assessments completed, and (2) the change in the format and organization of the NCIPP list that converted some assets previously listed as level 1 or level 2 into a cluster or system. Beginning with the fiscal year 2012 NCIPP list, DHS has begun to assign unique numerical identifiers to each NCIPP asset, cluster, and system, which officials told us has helped DHS track how many security surveys and vulnerability assessments it conducts on high-priority assets. The officials also told us that they anticipate fewer challenges associated with the list since the number of assets, clusters, and systems on the NCIPP list has remained relatively stable from fiscal years 2011 to 2012. However, as discussed earlier, the number of nodes associated with the NCIPP list has increased substantially, growing from more than 2,500 in fiscal year 2011 to almost 4,000 in fiscal year 2012, which could further challenge PSAs’ ability to conduct outreach and prioritize site visits to critical infrastructure for protection programs. PSCD officials in Washington, D.C., further told us that they do not have criteria establishing how PSAs should assess an NCIPP cluster or system that may contain many different nodes. The number of nodes in an NCIPP cluster or system can vary from two to several dozen, and the nodes may be geographically dispersed. For example, one PSA told us that nodes in the same cluster may not have the same owner and could be part of a multistate system. Another PSA said that because several nodes in a system may not be the same (i.e., different types of facilities, different facility owners, or located in different areas), he generally conducts an assessment of each node in order to consider an assessment of a system complete. He explained that the facilities would have to be identical in order for a single assessment to cover separate nodes, which he noted is rarely the case. Because it is difficult to prioritize which nodes within clusters or systems may be the most important for conducting assessments, the increase in the number of nodes associated with the NCIPP list could complicate PSA efforts to conduct outreach to, and assessments of, the nation’s highest-priority infrastructure. PSCD officials told us they view this as a challenge but do not characterize it as a significant one. Further, they stated that while the treatment of nodes within NCIPP clusters or systems has not been specifically addressed in current program policies or guidance, they do not believe this challenge has affected their ability to effectively prioritize facilities to receive security surveys and assessments. In January 2013, a PSCD official told us that PSCD is considering new guidance that would clarify how PSAs should approach nodes when conducting outreach or prioritizing visits for voluntary protection programs. DHS does not have a process for identifying the impact of changes to the list on its users and has not reviewed the impact of these changes on users. However, program officials told us that they work closely with the primary users of the list to understand how the data are used. According to officials, they recognize that changes to the NCIPP list may have an impact on users of the list, but they consider these impacts to be minor. For example, one program official told us that changes in the number of level 1 and level 2 assets rarely have a significant effect on the amount of grant funding allocated to states or cities because of the additional inputs considered in the FEMA risk formula that determines grant allocations. However, as demonstrated by our analysis, even small changes to the NCIPP list counts can have an impact on UASI grant allocations when accounting for all of the additional inputs considered in FEMA’s risk formula. The officials also recognized that changes from the fiscal year 2009 to fiscal year 2011 NCIPP lists, which significantly reduced the number of assets on the list, required PSCD to reset its performance metrics for conducting its voluntary security survey and assessment programs.
However, officials told us that the assets on the NCIPP list have remained relatively stable since fiscal year 2011; therefore, they believe that changes to the list would have a minor impact on PSAs’ outreach activities. While our analysis shows that the number of assets on the NCIPP list remained fairly constant from fiscal year 2011 to 2012, it also shows that the number of nodes on the associated nodes list continued to grow and almost doubled during this time. As discussed, the increase in nodes may complicate PSA efforts to conduct outreach to, and assessments of, the nation’s highest-priority infrastructure. Additionally, the officials told us that, internally, changes to the NCIPP list do not have an impact on DHS’s ability to identify and prioritize critical infrastructure during an incident because the list is just one of many information sources they consult when developing an event-specific infrastructure of concern list to guide incident response efforts. While the change to an entirely consequence-based list created a common approach to identify infrastructure and align the program with the statute and the NIPP, recent and planned criteria changes to accommodate certain sectors and assets represent a departure from this common approach, which could hinder DHS’s ability to compare infrastructure across sectors. For example, the agriculture and food sector has criteria that differ from those of all other sectors. Furthermore, DHS has not validated its approach to developing the list to ensure that it accurately reflects the nation’s highest-priority critical infrastructure. The NIPP calls for risk assessments—such as NCIPP efforts—to be complete, reproducible, documented, and defensible to produce results that can contribute to cross-sector risk comparisons for supporting investment, planning, and resource prioritization decisions. Table 1 provides a description of these core criteria for risk assessments. DHS could not provide documentation explaining how the threshold levels were established, such as the methodology for developing the NCIPP criteria or the analysis used to support the criteria, because, according to agency officials, the agency undertook an information technology change in the spring of 2012 that resulted in the loss of agency e-mails and program documentation. Nevertheless, as previously noted, officials told us the criteria and thresholds were established at the discretion of the Assistant Secretary for Infrastructure Protection. Program officials noted that they review the list on an annual basis but that the list has not been independently verified and validated through external peer review. These officials believe a peer review would enable DHS to determine whether its efforts to develop the NCIPP list are based on an analytically sound methodology and whether it has appropriate procedures in place to ensure that the NCIPP list is defensible and reproducible. We have previously reported that peer reviews are a best practice in risk management and that independent expert review panels can provide objective reviews of complex issues. An independent peer review to validate the NCIPP criteria and list development process would better position DHS to reasonably assure that, consistent with the NIPP risk management framework, federal and state partners that use the NCIPP list have sound information when making risk management and resource allocation decisions.
According to the NIPP, having sound information for making those decisions is critical for focusing attention on the protection and resiliency activities that bring the greatest return on investment. In August 2012, NCIPP program officials told us they would like to establish a peer review to validate the program because they believe the list has stabilized, and they now consider the program to be in a “maintenance phase.” In December 2012, the program director told us that IASD had drafted and submitted a proposal to the Assistant Secretary for Infrastructure Protection in November 2012 that proposed different approaches for reviewing the NCIPP, including a peer review of the criteria used to decide which assets and systems should be placed on the list and of the process for doing so. At that time, DHS officials said that they could not provide a copy of the draft proposal because it had not been approved by management. As of January 2013, IASD told us that the proposal had not been submitted to the Assistant Secretary for Infrastructure Protection as originally discussed, that it was unclear when the proposal would be submitted, and that it remained uncertain whether a peer review would be approved. The National Research Council of the National Academies has also recommended that DHS improve its risk analyses for infrastructure protection by validating the models and submitting them to external peer review. According to the council, periodic reviews and evaluations of risk model outputs are important for transparency with respect to decision makers. These reviews should involve specialists in modeling and in the problems being addressed and should address the structure of the model, the types and certainty of the data, and how the model is intended to be used. Peer reviews can also identify areas for improvement. As we have previously reported, independent peer reviews cannot ensure the success of a model, but they can increase the probability of success by improving the technical quality of projects and the credibility of the decision-making process. Thus, an independent peer review would better position DHS to provide reasonable assurance that the NCIPP criteria and list development process are reproducible and defensible given the recent and planned changes, and that critical infrastructure protection efforts are being focused on the nation’s highest-priority infrastructure as intended by the NIPP risk management framework. DHS has taken various actions to work with states and SSAs, consistent with statutory requirements and the NIPP, to identify and prioritize critical infrastructure. However, officials representing selected states and SSAs have mixed views about their experiences adjusting to DHS’s changes to the NCIPP. DHS recognizes that states, in particular, face challenges—such as resource and budgetary constraints—associated with nominating assets to the NCIPP list and has taken actions to address these challenges and reduce the burden on states. In recent years, DHS has also worked to improve its outreach to states and SSAs to obtain their input on changes to the NCIPP. In 2009, this outreach consisted of issuing a memorandum to obtain input on the proposed change to consequence-based criteria. Since then, DHS has taken a variety of actions to address state nomination challenges and to reduce the burden on states.
For example, in 2009, DHS revised its list development process to make it more transparent and provided states with additional resources and tools for developing their NCIPP nominations. Specifically, once states submit their NCIPP nominations, DHS is to make preliminary adjudication determinations based on the NCIPP criteria and then provide its preliminary adjudication results (whether a nomination was accepted or not) along with the reasons for each decision. Next, DHS is to allow states an opportunity to request reconsideration of a nomination, for which they can provide additional documentation clarifying the eligibility of the infrastructure. Figure 5 shows the revised NCIPP list development process, including the nomination, adjudication, and reconsideration phases. In addition to revising the adjudication process, DHS has taken several actions in recent years intended to improve the nomination process. First, according to DHS’s 2011 data call guidance, DHS provided on-site assistance from subject matter experts to help states identify infrastructure and disseminated a lessons-learned document providing examples of successful nominations to help states improve the justifications for their nominations. Second, DHS has taken action to engage states and SSAs more proactively in an ongoing dialogue on proposed criteria changes and on improving the NCIPP process and the resulting list. For example, in 2010, DHS hosted the Food and Agriculture Criticality Working Group, established through the Food and Agriculture Sector Government Coordinating Council and consisting of over 100 participants (including DHS, states, and SSAs), to discuss the aforementioned modification of the criteria to make them more applicable to the agriculture and food sector. As discussed earlier, DHS and its state and SSA partners are currently reevaluating the agriculture and food sector-specific criteria, and the SSAs held a meeting in December 2012 to discuss updating and adding criteria. In addition, in July 2011, DHS established a working group composed of state and SSA officials to solicit feedback on the nomination process and recommend actions to improve the quality of the NCIPP list in preparation for the 2013 data call. DHS officials told us that much of the feedback received from states and SSAs centered on DHS improving communication and guidance throughout the data call—for example, updating the guidance with additional information on criteria. DHS also planned long-term studies, such as requesting input from partners on modifying criteria thresholds. DHS officials told us that they conducted extensive outreach to states and SSAs to encourage participation in the NCIPP working group, including extending submission deadlines multiple times, funding an on-site meeting with the partners, and hosting webinars and conference calls. However, according to DHS officials, DHS has since disbanded the working group because of a lack of state and SSA participation and DHS budget constraints. Despite DHS’s outreach efforts, homeland security officials representing selected states and SSAs have mixed views on the NCIPP nomination process because of program changes, such as the aforementioned change to consequence-based criteria. Overall, the SSA officials we interviewed had more positive views of the NCIPP nomination process than the state officials we interviewed. SSA officials representing five of the eight sectors we interviewed told us that they believe it is very easy or moderately easy to nominate assets to the NCIPP list.
However, officials representing three sectors said that they believe it is moderately difficult or very difficult to nominate assets to the list because of various factors. For example, one SSA official told us that the diversity and complexity of the sector’s assets make it difficult to determine which assets meet the NCIPP criteria. Also, one SSA official stated that the online tool that the SSA uses to nominate assets to the NCIPP list requires detailed information, such as latitude and longitude coordinates, that may not be available for assets with unique characteristics. By contrast, most state officials we contacted reported that it is difficult to nominate assets to the NCIPP list using the consequence-based criteria, and two officials said that they are considering whether to continue to participate in the NCIPP process. Homeland security officials representing 13 of the 15 states told us that they believe the nomination process is moderately difficult or very difficult, while officials representing 2 states told us that they believe the nomination process is neither easy nor difficult. Of the 13 states whose officials found the nomination process moderately difficult or very difficult, officials representing 5 states told us that a lack of the capability and resources needed to develop scenarios supporting consequence-based criteria (such as conducting economic analysis) was the major factor that made submitting nominations time-consuming and difficult when the criteria changed. Officials from 2 states told us that their states no longer plan to nominate infrastructure to the NCIPP list because of the time and effort required to make nominations. DHS officials told us that they recognize that some states are facing challenges participating in the NCIPP program (as we previously identified in our discussions with state officials) and, according to officials, they are working to help states address some of these challenges. For example, DHS officials said that they recognized that the change to consequence-based criteria was difficult because it required states to invest considerable resources to make nominations. However, they also believe that other factors may influence states’ willingness to participate, such as (1) some state officials may believe that all critical infrastructure has been captured for the state and sector, (2) some state officials may believe that the benefits of participating, such as access to grant funding, have diminished and there is no longer an incentive to participate, and (3) the NCIPP data call process is voluntary and state partners do not have to participate if they do not wish to. DHS has taken several steps to minimize the burden on state partners. First, DHS is conducting a more limited annual data call wherein all assets identified on the previous list are generally carried forward onto the subsequent list and states are asked to provide nominations of (1) critical infrastructure not accepted during the previous data call or (2) critical infrastructure not previously nominated but that partners believe merits consideration. In fiscal year 2013, 13 state or territorial partners participated in the data call. DHS officials question whether, given current budget constraints facing state and federal partners, there is a need to conduct an annual data call.
In our past work, we have reported that, with our nation facing serious, long-term fiscal challenges, a reevaluation of federal agencies’ operations has never been more important than it is today. Consistent with our past work, DHS officials told us that they considered whether the costs of conducting an annual data call outweigh the benefits, since only minor updates are being made to the NCIPP list. In addition, one state official observed that, in a resource-constrained environment, states can no longer afford to conduct the NCIPP data call because it diverts resources from critical infrastructure protection partnership and coordination activities that could increase state and regional resilience, such as states maintaining their own lists of high-priority critical infrastructure. In response, according to DHS officials, DHS is working to minimize changes to the consequence-based NCIPP criteria and thus does not anticipate making any major changes that would burden state resources. Finally, DHS officials also told us that they have begun to take additional actions to enhance state participation, including developing and organizing a webinar with PSAs and state officials as they execute the data call. DHS is also working collaboratively with the State, Local, Tribal and Territorial Government Coordinating Council to develop a guide to assist states with their efforts to identify and prioritize their critical infrastructure. DHS has prepared documents describing the national asset database and the prioritized critical infrastructure list; however, DHS could not verify that it has delivered these documents for purposes of meeting its statutory requirement to report this information to the congressional committees specified in the law. Pursuant to the 9/11 Commission Act, which amended title II of the Homeland Security Act, DHS is required to report annually to the Committee on Homeland Security and Governmental Affairs of the Senate and the Committee on Homeland Security of the House of Representatives on, among other things, any significant challenges in compiling the database or list and, if appropriate, the extent to which the database or list has been used to allocate federal funds to prevent, reduce, mitigate, or respond to acts of terrorism. Although DHS was able to compile documents on the database and list for fiscal years 2008 through 2011 that generally contain the information on which DHS is to report, officials from DHS and the Office of Infrastructure Protection told us they were uncertain whether the documents were delivered to the requisite congressional committees because they do not have records to indicate that the documents were delivered. According to a DHS official, the DHS document tracking system includes notes on the intended delivery of the fiscal year 2008 and 2010 documents and a note regarding delivery of the fiscal year 2009 document, but the system does not contain a record to verify that the documents were delivered, that is, that the transactions actually occurred. Staff from both committees could not find evidence that the documents had been received. One staff member also conducted a search of congressional archives for the 109th, 110th, and 111th Congresses and found no records of receiving the statutorily required reports from DHS.
We reviewed the DHS documents intended to fulfill the statutory reporting requirements for fiscal years 2008 through 2011 and found that they generally contain information consistent with the statutory requirements. For example, the documents generally included an overview of the NCIPP list development process and changes, if any, from the previous year; challenges in compiling the list; and how the list is used. Table 2 shows key elements of each document and how they match up with the statutory requirements. Nevertheless, absent an approach to verify the delivery of the statutorily required reports on the database and list to the requisite committees of Congress, DHS cannot ensure that it has provided the committees with necessary information in a timely manner. Standards for Internal Control in the Federal Government calls for compliance with applicable laws and regulations and for the accurate, timely, and appropriate documentation of transactions. An approach to verify the timely delivery of required reports to the requisite committees of Congress, such as documenting or recording the transactions, would better position DHS to ensure that it is in compliance with its statutory reporting requirements, thereby providing the committees information needed to perform oversight. DHS efforts to identify and prioritize infrastructure continue to evolve, and the department has taken important actions to focus its prioritization approach on consequences, consistent with statutory requirements and the NIPP risk management framework. However, in recent years, DHS introduced new criteria for select sectors and non-consequence-based criteria to account for some assets, which could hinder DHS’s ability to compare assets across sectors in order to identify the nation’s highest-priority critical infrastructure. Given the magnitude of the changes DHS has made to the criteria for including infrastructure on the list, validation of the NCIPP list development approach could provide DHS managers and infrastructure protection partners more reasonable assurance that the list captures the highest-priority infrastructure that, if destroyed or disrupted, could cause national or regional catastrophic effects. NCIPP program officials told us they would like to have the NCIPP reviewed to validate the criteria used to decide which assets and systems should be placed on the list, but they have not yet submitted a proposal for this review to the Assistant Secretary for Infrastructure Protection. An independent, external peer review would better position DHS to provide reasonable assurance that its approach is reproducible and defensible, and that infrastructure protection efforts are focused on the nation’s highest-priority critical infrastructure as intended by the NIPP risk management framework. Finally, it is unclear whether DHS has met statutory annual reporting requirements regarding the NCIPP lists because DHS is unable to verify the delivery of these required reports. As a result, DHS cannot ensure that it is fulfilling its statutory reporting obligations and may not be providing the requisite congressional committees with the information needed to effectively oversee the program, particularly with regard to the allocation of scarce federal resources.
To better ensure that DHS’s approach to identifying and prioritizing critical infrastructure is consistent with the NIPP risk management framework and that DHS is positioned to provide reasonable assurance that protection and resiliency efforts and investments are focused on the nation’s highest-priority critical infrastructure, we recommend that the Assistant Secretary for Infrastructure Protection, Department of Homeland Security, take the following action: commission an independent, external peer review of the program with clear project objectives for completing this effort. To ensure that DHS is in compliance with its statutory reporting requirements and provides decision makers with the information necessary to perform program oversight, we recommend that the Secretary of Homeland Security take the following action: develop an approach, such as documenting or recording the transaction, to verify the delivery of the statutorily required annual reports on the database and list to the requisite congressional committees. We provided a draft of this report to the Secretary of Homeland Security for review and comment. In its written comments, reproduced in appendix III, DHS agreed with both of our recommendations. With regard to our first recommendation that DHS commission an independent, external peer review of the program with clear project objectives for completing this effort, DHS stated that a peer review would enable DHS to determine whether the NCIPP list is based on analytically sound methodology and whether appropriate procedures are in place to ensure that the list is defensible and reproducible. Specifically, DHS stated that it plans to commission and complete an independent peer review of the NCIPP process by the end of the fourth quarter of fiscal year 2014. If fully implemented, to include a review by independent experts to validate the criteria and process DHS uses to decide which assets and systems should be placed on the NCIPP list as we described in this report, DHS’s planned efforts will address the intent of this recommendation. With regard to our second recommendation that DHS develop an approach, such as documenting or recording the transaction, to verify the delivery of the statutorily required annual reports on the database and list to the requisite congressional committees, DHS stated that it has a system in place to track the development and approval of congressional reports, but DHS confirmed that it does not currently have a standard procedure for verifying that the congressional reports are delivered. DHS stated that its Office of Legislative Affairs will develop and implement a standard operating procedure for tracking the delivery of annual reports on the database and the list. DHS did not provide an estimated completion date for this effort. If fully implemented, DHS’s planned efforts will address the intent of this recommendation. DHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Under Secretary of the National Protection and Programs Directorate, selected congressional committees, and other interested parties. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Stephen L. Caldwell at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix IV.

This appendix provides information on the 18 critical infrastructure sectors and the federal agencies responsible for sector security. The National Infrastructure Protection Plan (NIPP) outlines the roles and responsibilities of the Department of Homeland Security (DHS) and its partners, including other federal agencies. Within the NIPP framework, DHS is responsible for leading and coordinating the overall national effort to enhance protection across the 18 critical infrastructure sectors. Homeland Security Presidential Directive/HSPD-7 and the NIPP assign responsibility for critical infrastructure sectors to sector-specific agencies (SSA). On February 12, 2013, the President issued Presidential Policy Directive/PPD-21, which, among other things, reduced the number of critical infrastructure sectors from 18 to 16. As an SSA, DHS has direct responsibility for leading, integrating, and coordinating efforts of sector partners to protect 11 of the 18 critical infrastructure sectors. The remaining sectors are coordinated by eight other federal agencies. Table 3 lists the SSAs and their sectors as they existed before the reorganization of the critical infrastructure sectors under PPD-21. To address our first objective—determine the extent to which DHS changed its criteria for developing the National Critical Infrastructure Prioritization Program (NCIPP) list, identified the impact, if any, of these changes, and validated its approach—we reviewed the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act), which, by amending title II of the Homeland Security Act of 2002, required the Secretary of DHS to establish and maintain a national database of systems and assets determined to be vital and the loss, interruption, incapacity, or destruction of which would have a negative or debilitating effect on the economic security, public health, or safety of the United States, any state, or any local government, or as otherwise determined appropriate for inclusion by the Secretary. In addition, the 9/11 Commission Act required the Secretary of DHS to establish and maintain a single prioritized list of systems and assets included in the national database that the Secretary determines would, if destroyed or disrupted, cause national or regional catastrophic effects. We also reviewed DHS guidelines issued to states and SSAs from 2007 through 2012 that included details on the NCIPP list development process, to determine how DHS’s criteria and process for developing the list changed year to year. We then obtained and analyzed the NCIPP lists finalized for fiscal years 2007 through 2012 to determine the total number of high-priority assets by state and the change in distribution of high-priority assets by sector year to year. We used our analysis to select 8 of the 18 sectors—the banking and finance, defense industrial base, chemical, energy, transportation systems, agriculture and food, government facilities, and dams sectors. We chose these sectors to obtain a mix of sectors that (1) experienced the largest and smallest percentage change in the distribution of assets on the NCIPP list between fiscal years 2009 and 2011 because of program changes DHS made during this period, and (2) have an SSA located within or outside DHS. The information from our analysis of these sectors is not generalizable to the universe of all sectors.
However, it provides valuable insights into yearly changes in the distribution of assets on the NCIPP list among a diverse group of sectors. To assess the reliability of the data, we reviewed existing documentation about DHS’s data system, which houses the data application used to create the NCIPP list, and spoke with knowledgeable agency officials responsible for maintaining the system and data application. While we determined that the data were sufficiently reliable to provide a general overview of the program, we included data limitations from our previous work in this report, where appropriate. We also interviewed officials in the Infrastructure Analysis and Strategy Division (IASD), which is part of the Office of Infrastructure Protection in DHS’s National Protection and Programs Directorate and is responsible for managing the NCIPP, to identify DHS’s rationale for changing the criteria. In addition, to address the first objective, we reviewed our prior reports as well as DHS Inspector General reports on protection and resiliency prioritization efforts and spoke with program officials who use the list from DHS’s Protective Security Coordination Division (PSCD), the Federal Emergency Management Agency (FEMA), and the Federal Bureau of Investigation to determine how they use the NCIPP list and the impact changes to the list have had, if any, on their ability to use it during fiscal years 2007 through 2012. In addition to interviewing program officials from PSCD headquarters, we conducted interviews with nine of DHS’s protective security advisors (PSA)—one from each of the nine PSA regions—to discuss their contributions to the NCIPP list, how they use the list to prioritize their activities, and actions NCIPP management has taken to solicit their feedback regarding the program. The results from our interviews are not generalizable to the universe of PSAs but provide specific examples of how PSAs use the list and insights on the effect changes have had on their activities. We also conducted a sensitivity analysis of FEMA’s Urban Areas Security Initiative (UASI) grant risk formula. Although the FEMA UASI grant formula is the same as that used for the FEMA State Homeland Security Program (SHSP), we focused our sensitivity analysis on the UASI grant because this grant is allocated each year to only a subset of the nation’s 100 most populous urban areas—referred to as metropolitan statistical areas (MSA)—whereas, by law, each state and territory is required to receive a minimum allocation of SHSP funds each year. For ease of reporting, we refer to UASI grant recipients as cities rather than MSAs. Specifically, we revised the NCIPP level 2 infrastructure counts for the 31 cities in our analysis. We then re-ran the risk formula using these revised NCIPP level 2 infrastructure counts, while holding all other data inputs constant, which resulted in a change to the relative risk score rankings for 5 of the top 31 cities (the approach is illustrated in the sketch following this paragraph). We also performed additional statistical analysis of the FEMA risk formula and data that showed UASI grant allocations are strongly associated with a city’s current risk score, even when accounting for the influence of the previous year’s grant allocations. Based on our prior work with the FEMA UASI grant risk formula and interviews with FEMA officials about its data sources and quality assurance procedures, we determined that the data were sufficiently reliable for the purposes of this report.
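To make the mechanics of this sensitivity analysis concrete, the following minimal Python sketch shows the general approach of re-running a weighted risk formula with revised infrastructure counts while holding all other inputs constant, then comparing the resulting rankings. The weights, input categories, city names, and values are invented for illustration; they are not FEMA’s actual UASI risk formula or data.

# Schematic illustration only: hypothetical weights and inputs, not
# FEMA's actual UASI risk formula or data.
def risk_score(inputs, weights):
    # Weighted sum of normalized risk inputs for one city.
    return sum(weights[k] * inputs[k] for k in weights)

WEIGHTS = {"population": 0.4, "ncipp_level2_count": 0.3, "threat": 0.3}

CITIES = {
    "City A": {"population": 0.9, "ncipp_level2_count": 0.2, "threat": 0.8},
    "City B": {"population": 0.6, "ncipp_level2_count": 0.8, "threat": 0.5},
    "City C": {"population": 0.7, "ncipp_level2_count": 0.5, "threat": 0.6},
}

def rank(city_inputs):
    # Order cities from highest to lowest risk score.
    scores = {city: risk_score(vals, WEIGHTS) for city, vals in city_inputs.items()}
    return sorted(scores, key=scores.get, reverse=True)

baseline_ranking = rank(CITIES)

# Sensitivity step: revise only the NCIPP level 2 counts, hold all other
# inputs constant, and re-rank.
revised = {city: dict(vals) for city, vals in CITIES.items()}
revised["City B"]["ncipp_level2_count"] = 0.3

print("baseline ranking:", baseline_ranking)  # ['City A', 'City B', 'City C']
print("revised ranking: ", rank(revised))     # ['City A', 'City C', 'City B']

The change in City B’s position in this toy example parallels the ranking shifts we observed for 5 of the top 31 cities when re-running FEMA’s actual formula with its data.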
Last, we met with IASD officials to discuss actions they have taken to identify the impact of changes, if any, on users of the list, and compared these actions with applicable criteria in the NIPP and Standards for Internal Control in the Federal Government to determine whether they were consistent. Regarding our second objective—to determine the extent to which DHS worked with states and SSAs to develop the NCIPP list—we reviewed relevant provisions of the 9/11 Commission Act and the guidelines DHS issued to state homeland security advisers and SSAs to solicit nominations of high-priority infrastructure for inclusion on the NCIPP list. We also conducted interviews with officials from 10 SSAs and 15 state homeland security offices to obtain federal and state perspectives on DHS’s change to consequence-based criteria and coordination of the NCIPP program, as well as their views on nominating to and using the list. The SSA officials we interviewed represented the 8 sectors selected during our analysis for the first objective. Specifically, DHS was the SSA for 4 of the sectors—the chemical, dams, government facilities, and transportation systems sectors. The Departments of Energy, Defense, and the Treasury were the SSAs for 3 sectors—the energy, defense industrial base, and banking and finance sectors, respectively. Two SSAs, the Department of Agriculture and the Food and Drug Administration, share responsibility for the agriculture and food sector. The state homeland security officials we interviewed represented 15 states—California, Georgia, Illinois, Hawaii, Oklahoma, Maine, Mississippi, Nevada, New Jersey, New York, Texas, Virginia, Washington, West Virginia, and Wisconsin. We selected these states because they contained a range in the number of assets on the NCIPP list and represented at least 1 state from each of the 9 PSA regions. The sector and state interviews are not generalizable to the universe of infrastructure sectors and states contributing to the NCIPP list. However, our selection, combined with DHS policy guidance, further informed us about DHS efforts to manage the NCIPP program across a spectrum of states and partners nationwide. Finally, we interviewed IASD officials to discuss actions DHS had taken to consult with state and federal partners (as identified in program guidelines and based on our interviews with states and SSAs), and compared their responses with applicable criteria in the NIPP, Standards for Internal Control in the Federal Government, and relevant statutory provisions. With regard to our third objective—determine the extent to which DHS reported to the requisite committees of Congress on the NCIPP—we reviewed the statutory requirement that DHS report annually to the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Homeland Security on the national asset database and prioritized critical infrastructure list. We also spoke to staff members representing both committees to determine whether the committees received the statutorily required reports. Last, we interviewed DHS officials to discuss efforts to provide these reports to the committees and obtained and reviewed documents on the national asset database and prioritized critical infrastructure list that were intended to meet statutory reporting requirements to determine whether these efforts were consistent with relevant statutory provisions and Standards for Internal Control in the Federal Government.
We conducted this performance audit from May 2012 to March 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, John F. Mortin, Assistant Director, and Andrew M. Curry, Analyst-in-Charge, managed this assignment. Chuck Bausell, Mona Nichols-Blake, Aryn Ehlow, Katherine M. Davis, Michele C. Fejfar, Eric D. Hauswirth, Mitchell B. Karpman, Thomas F. Lombardi, and Janay Sam made significant contributions to the work.

Critical Infrastructure Protection: Preliminary Observations on DHS Efforts to Assess Chemical Security Risk and Gather Feedback on Facility Outreach. GAO-13-421T. Washington, D.C.: March 14, 2013.
Critical Infrastructure Protection: An Implementation Strategy Could Advance DHS’s Coordination of Resilience Efforts across Ports and Other Infrastructure. GAO-13-11. Washington, D.C.: October 25, 2012.
Critical Infrastructure Protection: Summary of DHS Actions to Better Manage Its Chemical Security Program. GAO-12-1044T. Washington, D.C.: September 20, 2012.
Critical Infrastructure Protection: DHS Is Taking Action to Better Manage Its Chemical Security Program, but It Is Too Early to Assess Results. GAO-12-567T. Washington, D.C.: September 11, 2012.
Critical Infrastructure: DHS Needs to Refocus Its Efforts to Lead the Government Facilities Sector. GAO-12-852. Washington, D.C.: August 13, 2012.
Critical Infrastructure Protection: DHS Is Taking Action to Better Manage Its Chemical Security Program, but It Is Too Early to Assess Results. GAO-12-515T. Washington, D.C.: July 26, 2012.
Critical Infrastructure Protection: DHS Could Better Manage Security Surveys and Vulnerability Assessments. GAO-12-378. Washington, D.C.: May 31, 2012.
Critical Infrastructure Protection: DHS Has Taken Action Designed to Identify and Address Overlaps and Gaps in Critical Infrastructure Security Activities. GAO-11-537R. Washington, D.C.: May 19, 2011.
Critical Infrastructure Protection: DHS Efforts to Assess and Promote Resiliency Are Evolving but Program Management Could Be Strengthened. GAO-10-772. Washington, D.C.: September 23, 2010.
Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010.
The Department of Homeland Security’s (DHS) Critical Infrastructure Protection Cost-Benefit Report. GAO-09-654R. Washington, D.C.: June 26, 2009.
Information Technology: Federal Laws, Regulations, and Mandatory Standards to Securing Private Sector Information Technology Systems and Data in Critical Infrastructure Sectors. GAO-08-1075R. Washington, D.C.: September 16, 2008.
Risk Management: Strengthening the Use of Risk Management Principles in Homeland Security. GAO-08-904T. Washington, D.C.: June 25, 2008.
Critical Infrastructure: Sector Plans Complete and Sector Councils Evolving. GAO-07-1075T. Washington, D.C.: July 12, 2007.
Critical Infrastructure Protection: Sector Plans and Sector Councils Continue to Evolve. GAO-07-706R. Washington, D.C.: July 10, 2007.
Critical Infrastructure: Challenges Remain in Protecting Key Sectors. GAO-07-626T. Washington, D.C.: March 20, 2007.
Homeland Security: Progress Has Been Made to Address the Vulnerabilities Exposed by 9/11, but Continued Federal Action Is Needed to Further Mitigate Security Risks. GAO-07-375. Washington, D.C.: January 24, 2007.
Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006.
Information Sharing: DHS Should Take Steps to Encourage More Widespread Use of Its Program to Protect and Share Critical Infrastructure Information. GAO-06-383. Washington, D.C.: April 17, 2006.
Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.
Since 2004, Congress has authorized over $8 billion for medical countermeasure procurement. The Project BioShield Act of 2004 authorized the appropriation of $5.6 billion to be available from fiscal year 2004 through fiscal year 2013 for the Project BioShield Special Reserve Fund, and funds totaling this amount were appropriated. The act facilitated the creation of a government countermeasure market by authorizing the government to commit to making the Special Reserve Fund available to purchase certain medical countermeasures, including those countermeasures that may not yet be approved, cleared, or licensed by the Food and Drug Administration (FDA). In 2013, the Pandemic and All-Hazards Preparedness Reauthorization Act of 2013 (PAHPRA) authorized an additional $2.8 billion to be available from fiscal year 2014 through fiscal year 2018 for these activities, and $255 million was appropriated in fiscal year 2014. Congress has also made funding available through annual and supplemental appropriations to respond to influenza pandemics, including developing vaccines and other drugs. The Department of Health and Human Services (HHS) is the primary federal department responsible for public health emergency planning. Within HHS, several offices have specific responsibilities for medical countermeasure development and procurement. HHS’s Office of the Assistant Secretary for Preparedness and Response (ASPR) leads the Public Health Emergency Medical Countermeasures Enterprise (PHEMCE) and the federal medical and public health response to public health emergencies, including strategic planning and support for developing and securing medical countermeasures. As part of these activities, HHS develops priorities for which medical countermeasures are needed. Within ASPR, the Biomedical Advanced Research and Development Authority (BARDA)—established by the Pandemic and All-Hazards Preparedness Act of 2006—coordinates and supports advanced research and development, manufacturing, and initial procurement of medical countermeasures for chemical, biological, radiological, and nuclear (CBRN) threats, pandemic influenza, and emerging infectious diseases into the Strategic National Stockpile—the national repository for medications, medical supplies, and equipment for use in a public health emergency. As part of these responsibilities, BARDA oversees HHS’s efforts to develop flexible manufacturing capabilities for medical countermeasures. HHS’s PHEMCE, which was established in 2006, is composed of officials from ASPR, BARDA, the Centers for Disease Control and Prevention (CDC), FDA, and the National Institutes of Health (NIH), in addition to officials from other federal departments, including the Departments of Agriculture, Defense, Homeland Security, and Veterans Affairs. In 2007, HHS published the PHEMCE Implementation Plan, which identified HHS’s priorities for CBRN countermeasure procurement using the 2004 Special Reserve Fund appropriation. In December 2012, HHS published an updated PHEMCE Implementation Plan, which describes the capabilities HHS wants to establish to support countermeasure development and procurement, including activities to support flexible manufacturing. The 2012 PHEMCE Implementation Plan also identifies HHS’s priorities for developing and procuring medical countermeasures, such as anthrax vaccine, smallpox antivirals, chemical agent antidotes, and diagnostic devices for radiological and nuclear agents. (See app. I for HHS’s advanced development priorities for CBRN countermeasures.) Flexible manufacturing generally refers to the equipment and technologies that allow a facility to rapidly develop or manufacture a number of products simultaneously or in quick succession.
These technologies include the use of disposable equipment, such as growing cell cultures in disposable plastic bag systems rather than in stainless steel tanks that require more time to clean and sterilize prior to the next use, and the use of modular sterile rooms that allow for the manufacture of multiple products simultaneously within a given facility. Other technologies include alternatives to more traditional methods of making influenza vaccine, such as using cell-based or recombinant technologies to make vaccine rather than the traditional egg-based technology, or using adjuvants to enhance the immune response to vaccines. In addition to alternative vaccine development technologies, platform technologies provide flexible systems that have the potential to produce medical countermeasures for multiple threats. The use of flexible manufacturing technologies also has the potential to help provide surge capacity production in a public health emergency. We previously reported on the barriers industry faces in developing and manufacturing CBRN and pandemic influenza medical countermeasures, which create challenges for HHS. In April 2011, we found that the barriers HHS identified in the PHEMCE review continued to exist. Specifically, we found that the lack of a commercial market continued to hinder large pharmaceutical companies from developing medical countermeasures. As a result, less-experienced biotechnology companies became the primary developers of such products, but these companies needed more scientific and regulatory assistance for testing the safety and efficacy of their countermeasures in development. In its 2010 PHEMCE review, HHS stated that new approaches to vaccine manufacturing, such as the use of flexible manufacturing technologies, offered promising ways to meet the demands of pandemic vaccine production while simultaneously meeting needs related to other public health emergency threats. In our June 2011 review, HHS officials told us that the Centers for Innovation in Advanced Development and Manufacturing (CIADM) are intended to support countermeasure developers by providing needed resources for and expertise about manufacturing and to reduce the technical risks of researching and developing medical countermeasures. In addition, HHS officials indicated that such assistance by the CIADMs could reduce the research and development costs of smaller, less-experienced companies. In fiscal years 2012 and 2013, HHS’s BARDA awarded nearly $440 million to establish its CIADMs and a network of facilities to provide packaging support to ready products for distribution, known as the Fill Finish Manufacturing Network. The CIADM contractors are required to develop three activities to support flexible manufacturing: pandemic influenza surge capacity, core services for CBRN medical countermeasure developers, and workforce training programs. According to BARDA officials, the Fill Finish Manufacturing Network will supplement the CIADMs’ pandemic influenza surge capacity and CBRN core services activities. HHS’s BARDA awarded approximately $400 million in fiscal year 2012 to three contractors to establish the CIADMs. Under the terms of the CIADM contracts, the three contractors must retrofit existing facilities or build new ones to incorporate flexible, innovative manufacturing equipment and technologies that can be used to develop and manufacture more than one medical countermeasure either simultaneously or in quick succession.
BARDA characterizes the CIADMs as public-private partnerships because the contractors are required to provide their own funds to supplement those awarded by HHS under a cost-sharing arrangement. For example, the total investment in pandemic influenza vaccine surge capacity could include up to $194 million in contractor funding to supplement the $400 million government award amount, for a total of about $594 million in public and contractor funding (a simple illustration of this arithmetic appears after this paragraph). During the contract base period, the CIADMs are required to design, construct, and commission their facilities. These facilities are intended to establish a warm base for pandemic influenza surge capacity; a warm base refers to facilities that, once constructed and commissioned, would be operationally ready to quickly manufacture vaccine during an influenza pandemic. These facilities are also intended to establish the capacity to provide core services for the development of CBRN countermeasures. (See table 1 for information on the CIADM base period amounts, including the government award and contractor cost-share.) Contractors may be awarded additional amounts beyond the base period award through the issuance of task orders. Under the CIADM contracts, HHS may issue task orders to purchase (1) core services for CBRN medical countermeasure developers, (2) medical countermeasure vaccine production (including vaccine for pandemic influenza), and (3) workforce training activities. The contracts outline the procedures that HHS is to follow to give contractors a fair opportunity to be considered for the award of task orders. An option is a unilateral right in a contract by which, for a specified time, the government may elect to purchase additional supplies or services called for by the contract, or may elect to extend the term of the contract. BARDA anticipates issuing task orders in the three service areas, including core services for CBRN countermeasures, during the annual option periods. As shown in table 1, option periods may overlap the base period for the contracts. The filling and finishing of medical countermeasures refers to the process by which individual drugs are packaged for use, such as in vials and syringes, and includes labeling, patient instructions, outside packaging, transport, and promotional materials. The initial contract amount for the Fill Finish Manufacturing Network is intended to fund the necessary up-front activities (e.g., formulation and technology transfer) to establish warm base facilities that can be used to provide fill and finish services during both pandemic and nonpandemic periods. After the contractors have completed these start-up activities to establish the fill and finish network, BARDA plans to award additional funding through the issuance of task orders. These task orders may include funding for materials, spare parts, equipment, staffing, and fees necessary to complete the task orders. BARDA’s CIADMs are intended to provide three activities—surge capacity for manufacturing pandemic influenza vaccine, core services for the development of CBRN medical countermeasures, and workforce training—to support HHS’s flexible manufacturing activities. According to HHS, although the primary goal of the CIADMs is to provide core service assistance to CBRN medical countermeasure developers, their ability to provide some core services depends on the retrofitting of existing, or building of new, facilities that are also needed to provide surge capacity. The Fill Finish Manufacturing Network is to supplement the CIADMs’ pandemic influenza surge capacity and CBRN core services activities.
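As a simple check of the cost-sharing arithmetic cited above for pandemic influenza vaccine surge capacity, the short Python snippet below combines the government award with the maximum contractor contribution. The derived percentage is our own illustration, not a figure reported by BARDA.

# Figures are the amounts cited in this report for the surge capacity
# investment; the percentage is derived here for illustration only.
government_award = 400_000_000   # approximate HHS award for the CIADMs
contractor_share = 194_000_000   # maximum contractor cost-share

total_investment = government_award + contractor_share
contractor_pct = contractor_share / total_investment * 100

print(f"total public and contractor funding: ${total_investment:,}")  # $594,000,000
print(f"contractor share of total: {contractor_pct:.0f}%")            # about 33%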
The three CIADMs are required under their contracts with BARDA to establish surge capacity to quickly manufacture influenza vaccine in a pandemic and to secure a pandemic influenza vaccine candidate currently under development. The CIADMs plan to establish surge capacity as follows:

Emergent. Under the CIADM award, Emergent is to design, construct, and commission a biologics development and manufacturing suite in Baltimore, Maryland, intended to support core services for CBRN medical countermeasures on a routine basis and support manufacturing of medical countermeasure vaccines for an influenza pandemic or other public health threats. In addition, Emergent is to design, renovate, and commission a pilot plant at its existing facility in Gaithersburg, Maryland, that is also intended to support core services for CBRN medical countermeasure developers.

Novartis Vaccines and Diagnostics (Novartis). Under the CIADM award, Novartis is to design, renovate, and commission a pilot plant to produce and fill clinical investigational lots of CBRN medical countermeasures in its existing plant in Holly Springs, North Carolina. Also, Novartis is to design, construct, and commission a technical services building in Holly Springs to house administrative staff and provide maintenance services for the pilot plant.

Texas A&M University System (TAMUS). Under the CIADM award, TAMUS is to design, construct or renovate, and commission a number of facilities on the Texas A&M campus in College Station, Texas. These facilities are to include a biologics development and manufacturing facility that is intended to provide core services for CBRN medical countermeasures, with the added capability of developing and manufacturing live virus vaccine candidates; a current Good Manufacturing Practices vaccine bulk manufacturing facility dedicated to large-scale surge manufacturing of pandemic influenza vaccines; a laboratory and office building to support process development and technology transfer of CBRN medical countermeasures into the CIADM; and a facility to support the fill and finish requirements for medical countermeasures. The establishment of the TAMUS fill and finish facility is being funded under the CIADM contract and is not a part of BARDA’s Fill Finish Manufacturing Network, for which HHS issued separate contracts.

Each of the CIADMs has taken a different approach to acquiring a pandemic influenza vaccine candidate:

Emergent has partnered with VaxInnate, which is developing a pandemic influenza vaccine using recombinant protein technology.

Novartis has developed a pandemic influenza vaccine candidate using cell-based vaccine production, which involves growing flu viruses in mammalian cell cultures instead of the conventional method of making influenza vaccine in chicken eggs.

TAMUS has partnered with GlaxoSmithKline to obtain a pandemic influenza vaccine candidate. GlaxoSmithKline plans to grow the vaccine using a proprietary line of cells. A vaccine using the same adjuvant as this candidate received FDA approval in November 2013 for pandemic response purposes. According to BARDA officials, FDA licensed that vaccine, using this adjuvant, to be manufactured in Canada using egg-based technology. However, the TAMUS CIADM is using GlaxoSmithKline’s cell-based influenza vaccine technology to meet HHS surge manufacturing requirements.
The CIADMs are scheduled to have completed construction, acquired a pandemic influenza vaccine candidate, and validated their vaccine surge capacity with FDA by the end of their contract base periods (2020, 2016, and 2017 for Emergent, Novartis, and TAMUS, respectively). Each of the three CIADMs is to be capable of producing, and in the event of an influenza pandemic would be required to produce, 50 million doses of vaccine within four months of receipt of the influenza virus strain, with the first doses for the public available to HHS within 12 weeks. BARDA officials told us that they anticipate that at least one CIADM would be able to manufacture pandemic influenza vaccine upon request starting in 2017, and that all of the centers would be capable of manufacturing pandemic influenza vaccine by the end of 2020. BARDA anticipates placing task orders for pandemic influenza vaccine, if needed, during the annual contract option periods available to extend the contracts at the end of the respective base periods. Once the CIADMs’ influenza vaccine surge capacity is operational, the centers are expected to maintain readiness for surge manufacturing, even in nonpandemic periods. According to BARDA officials, in these nonpandemic periods, the CIADMs may use their surge capacity for other activities, including commercial manufacturing, provided they make their influenza vaccine surge capacity available upon request from HHS during an influenza pandemic to produce the required 50 million doses in the specified time period. While surge capacity at the CIADMs is intended for pandemic influenza vaccine production, BARDA officials told us this capacity could be used to manufacture other medical countermeasures, such as an anthrax vaccine, in a public health emergency. BARDA officials told us that, based on FDA requirements to maintain the license for the pandemic influenza vaccine, the CIADMs may need to produce one annual lot of the vaccine. BARDA will provide payment for activities required to maintain pandemic readiness. According to BARDA officials, the four companies that were awarded contracts to establish the Fill Finish Manufacturing Network will provide additional fill and finish surge capacity in an influenza pandemic to supplement the CIADMs and allow for the fill and finish of 117 million additional doses of pandemic influenza vaccine in 12 weeks. The companies in the Fill Finish Manufacturing Network are encouraged to collaborate with the three CIADMs as well as partner with domestic influenza vaccine manufacturers in order to transfer the fill and finish technology into the network contractors’ facilities, which will become alternate locations on the vaccine manufacturers’ licenses for fill and finish activities. The network is also expected to provide its services to HHS for production of clinical investigational lots of medical countermeasures that are in development. BARDA anticipates that the Fill Finish Manufacturing Network will be available to receive task orders for core services by the end of fiscal year 2014. For the core services activity, the CIADMs are to provide services for the development and production of CBRN medical countermeasures, such as assisting CBRN medical countermeasure developers in manufacturing small amounts of products that can be used in clinical trials. In the CIADM request for proposals, BARDA outlined a list of core services it expects the CIADMs to provide. (See app. II for a list and description of these core services.)
These core services may be provided by the CIADMs directly or by subcontractors. Once the CIADMs are operational, BARDA will issue task orders to the CIADMs for core services using the fair opportunity process outlined in the contracts. For example, BARDA may issue a task order for a CIADM to provide regulatory or technical assistance for a specific CBRN medical countermeasure to a developer with a current BARDA contract. Under the terms of the contracts, the CIADMs are required to make their core services available to HHS for 50 percent of the time, or 6 months per annual contract option period. If HHS does not issue a task order to use a CIADM for core services, or issues a task order for core services for less than 6 months of an annual option period, HHS will provide the CIADM with a facility readiness reimbursement for up to 6 months of that facility’s capacity for that option period. BARDA officials told us that some of the CIADMs may begin providing some core services during 2014, and that each of the CIADMs should be capable of providing each of the core services by the end of 2015. Once the new or retrofitted CIADM facilities are operational, a CIADM may begin providing core services, such as producing sufficient amounts of a specific countermeasure at a small scale to be tested in clinical trials for safety and efficacy. BARDA officials told us that the Fill Finish Manufacturing Network is also intended to provide fill and finish services to CBRN medical countermeasure developers to supplement the core services provided by the CIADMs. This could occur, for example, when one or more of the CIADMs is at capacity or for countermeasures that are not eligible for CIADM core services. According to BARDA officials, the CIADMs and the Fill Finish Manufacturing Network are part of BARDA’s overall core service assistance programs, which, since 2011, also include an animal studies network and, since 2014, a new clinical studies network to assist developers of CBRN medical countermeasures. For the workforce training activity, the CIADMs are to develop programs to enhance and maintain U.S. capabilities and expertise to develop and produce CBRN medical countermeasures. These workforce training programs are intended to develop a highly skilled biotechnology and pharmaceutical workforce proficient in bioprocess engineering, production and quality systems, and regulatory affairs. Through these workforce training programs, the CIADMs are to offer training through means such as certificate programs, workshops, industry short courses, and internships. The CIADMs may provide training in subjects such as an introduction to biotechnology, good manufacturing practices procedures and documentation, facility operations and safety, regulatory compliance, and bioprocess control. BARDA officials told us that during the contract base period, the CIADMs are required to develop their workforce training programs, and that the agency may begin to request workforce training activities through task orders in fiscal year 2014. HHS established the CIADMs to provide needed core services that support, through flexible manufacturing, the development and production of certain CBRN medical countermeasures identified as priorities by PHEMCE. The agency followed the recommendation in the PHEMCE review to establish CIADMs capable of providing such core services. However, it is too early to tell how effective this approach will be because HHS has not begun to issue task orders to CIADMs for core services.
Of the three flexible manufacturing activities undertaken at the CIADMs, BARDA officials told us that the provision of core services is the primary activity intended to support the development of certain CBRN medical countermeasures. The core services are specifically designed to provide CBRN developers with needed experience, facilities, and technology to help develop and produce certain medical countermeasures that HHS and PHEMCE identified as priorities. According to BARDA, the three CIADM contractors are entities that have experience in developing, manufacturing, and licensing pharmaceutical products in the United States. BARDA officials told us that the core services to be provided by the CIADMs are the types of services that HHS, PHEMCE, and industry representatives identified as necessary. The 2010 PHEMCE review indicated that services such as regulatory support, animal testing, and, if appropriate, clinical trials were needed to help less-experienced countermeasure developers get through the challenging advanced development phase. Further, the 2012 PHEMCE implementation plan identified, as a programmatic priority, providing experienced biopharmaceutical development staff at the CIADMs to aid in the development of medical countermeasures. Each of the three CIADMs is to provide 24 core services, directly or by subcontract, to assist countermeasure developers in moving their products through advanced development and production. In addition, BARDA officials indicated that each center can provide specific and slightly different expertise in developing products using alternative technologies, such as recombinant proteins or insect cells. For example, Emergent has experience developing products for infectious disease and biodefense. It has developed BioThrax, the only FDA-licensed anthrax vaccine, and has had several medical countermeasure development contracts with U.S. government agencies. Novartis has experience in developing a novel influenza cell culture as well as in other areas, and has an additional contract with BARDA to produce pandemic influenza vaccine. TAMUS is a large university system with access to a network of experienced partners, including GlaxoSmithKline, and a highly rated veterinary school. TAMUS officials told us that their flexible manufacturing capabilities include modular “clean” rooms that can be tailored to each biopharmaceutical product’s specifications. According to BARDA officials, the CIADMs are designed to provide developers with access to a variety of core services all in the same facility and the project management experience needed to manage the CBRN medical countermeasure development process. BARDA officials indicated that they envision a countermeasure developer working with a single CIADM on a product’s development. Core services provided by the CIADMs would have the potential to support only the development of medical countermeasures that are biologics-based, such as vaccines and recombinant proteins, but not small molecule countermeasures, such as antibiotics or antivirals. Examples of biologics-based countermeasures for CBRN threats include anthrax vaccine, recombinant protein chemical antidotes, and products to diagnose or treat the effects of exposure to radiological or nuclear agents.
BARDA officials told us that the CIADMs are intended to assist in developing biologics-based countermeasures because a 2008 study commissioned by HHS and DOD examining vaccine manufacturing facility alternatives found that there is a sufficient domestic supply of contract manufacturing organizations that could be called upon in a public health emergency to produce small molecule countermeasures. The CIADMs' services are intended to support countermeasure developers who have existing contracts with BARDA and countermeasure developers who have contracts with other PHEMCE partners, such as DOD and NIH. BARDA has identified 23 biologics-based CBRN countermeasure contracts that are eligible, in whole or in part, to receive core services from the CIADMs. BARDA officials indicated that the CBRN medical countermeasures to be developed under these contracts are consistent with the countermeasures identified as HHS priorities in the 2012 PHEMCE implementation plan. For example, the PHEMCE implementation plan identified the development of an anthrax vaccine as a priority, and 4 of the 23 eligible CBRN medical countermeasure projects focus on developing anthrax vaccine. DOD is also developing an advanced development and manufacturing center for medical countermeasure developers. BARDA officials told us that once the DOD facility is built and operational, the HHS and DOD centers' services will be available under a unified umbrella to provide medical countermeasure development and manufacturing assistance. BARDA has not issued any task orders for core services to date, as the CIADMs are still completing activities associated with the contract base periods. Therefore, it is too early to tell the extent to which countermeasure developers may use CIADM services and how helpful the core services may be to support medical countermeasure development. Under the CIADM contracts, amounts awarded during the contract base period are to fund the construction of physical infrastructure, either the building of new facilities or the retrofitting of existing ones, and other preparations necessary to provide core services to countermeasure developers. As such, the base period of the contract provides a framework to help support countermeasure development, but no direct provision of core services. After the CIADM contractor establishes this framework, BARDA is to award task orders to CIADMs to provide core services to countermeasure developers. Because the CIADMs have not yet completed base period activities, BARDA has not yet issued task orders to provide core services. BARDA officials told us that two CIADMs may be able to provide core services as soon as 2014, a year earlier than planned. According to BARDA officials, once each of the CIADMs has completed construction or retrofitting, so that there is sufficient space to conduct core service activities, BARDA will evaluate and confirm the technical capabilities and capacity of each CIADM to provide core services prior to issuing task orders for these services. Once the CIADMs are operational, BARDA and other agencies that participate in PHEMCE are to select eligible countermeasure development projects for those developers who want to access the CIADMs and issue task orders for core services. In order to select eligible contracts and issue task orders, HHS and PHEMCE have created a CIADM steering committee consisting of senior-level officials from BARDA, CDC, FDA, NIH, and DOD.
HHS has completed documents that provide governance for this process: a signed charter for the steering committee, preliminary criteria for selecting eligible contracts, and a signed governance document describing how the process will operate. Under the process, the steering committee issues a data call, and in response, medical countermeasure project managers from BARDA, NIH, and DOD are to submit to the steering committee proposals for current medical countermeasure contracts that would benefit from core services provided by the CIADMs. The steering committee is then to review the proposals and select the countermeasure projects and developers to which it will offer access to the CIADMs' core services. Next, HHS plans to issue task order requests for each selected project, and the CIADMs will be required to submit proposals in response to the task order requests. Finally, according to BARDA officials, BARDA plans to issue a task order to the CIADM contractor whose proposal best satisfies the selection factors for award under the task order. At this time, the eligible countermeasure developers are only those who have current development contracts with BARDA, NIH, and DOD. BARDA officials told us that the CIADM steering committee met in January 2014 and plans to meet at least semiannually. While it is too early to tell how effective HHS's approach to providing core services to CBRN medical countermeasure developers through the CIADMs will be, some industry stakeholders we interviewed expressed concerns about demand, availability of funding, and communication with BARDA. For example, some stakeholders questioned whether there would be a sufficient number of countermeasure developers who need advanced development support and who might choose to receive those services from the CIADMs. BARDA officials told us that they have conducted surveys of developers with current BARDA contracts about their interest in receiving core services from the CIADMs. As a result, according to officials, BARDA anticipates having a greater demand for core services than the CIADMs will be able to supply. Additionally, industry stakeholders we spoke to expressed concern that insufficient funding for task orders may affect the success of the CIADMs. BARDA officials told us that funding for task orders will come either from BARDA's budget for specific medical countermeasures or from other agencies, such as NIH, through interagency agreements, but that the availability of funds for specific development projects would play a role in deciding which projects would receive core services. BARDA officials told us that they expect to have sufficient funding for task orders in fiscal years 2014 and 2015. Some industry stakeholders that we talked to also indicated that BARDA has not yet provided detailed information to industry partners about how countermeasure developers will request and use core services from the CIADMs. BARDA officials told us that BARDA featured the CIADMs and explained CIADM operations at its November 2013 Industry Days. We provided a draft of this report to HHS, and its comments are reprinted in appendix III. In its comments, HHS acknowledged that it is too early to determine whether the Centers are meeting their prescribed goals because their intended core service activities have not yet begun. However, HHS noted that the CIADMs are nearly a year ahead of schedule in completing construction and ramping up activities in anticipation of providing services once HHS begins issuing task orders in 2014.
HHS also noted that the CIADMs are a new model for public-private partnerships, and represent one component of BARDA's comprehensive, integrated approach to supporting advanced research and development, innovation, acquisition, and manufacturing of countermeasures for public health emergency threats. In addition to its overall comments, HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In addition to the contact named above, Sheila K. Avruch, Assistant Director; Matt Byer; Britt Carlson; Shana R. Deitch; Cathy Hamann; and Tracey King made significant contributions to this report. | Public health emergencies, such as the 2001 anthrax attacks and the 2009 H1N1 influenza pandemic, raise concerns about the nation's vulnerability to threats from CBRN agents and new or reemerging infectious diseases, such as pandemic influenza. HHS is the federal agency primarily responsible for identifying medical countermeasures needed to address the potential health effects from exposure to CBRN agents and emerging infectious diseases. HHS conducted a review to assess how to better address these concerns. Its August 2010 review concluded that the advanced development and manufacture of CBRN medical countermeasures needed greater support. The review recommended that HHS develop centers to provide such support, in part by using flexible manufacturing technologies, such as disposable equipment, to aid in the development and rapid manufacture of products.
The Pandemic and All-Hazards Preparedness Reauthorization Act of 2013 requires GAO to examine HHS's flexible manufacturing initiatives and the activities these initiatives will support. This report addresses (1) how much funding HHS has awarded for flexible manufacturing activities for medical countermeasures, and (2) the extent to which these activities will support the development and production of CBRN medical countermeasures. To address these objectives, GAO examined HHS documents and interviewed HHS officials, contractors, and stakeholders. In comments on a draft of the report, HHS agreed with its findings and provided additional information. In fiscal years 2012 and 2013, the Department of Health and Human Services (HHS) Biomedical Advanced Research and Development Authority (BARDA) awarded nearly $440 million in contracts to establish three Centers for Innovation in Advanced Development and Manufacturing (CIADM) and a network of facilities to provide packaging support for medical countermeasure distribution, known as the Fill Finish Manufacturing Network (FFMN). The contracts require the CIADMs to develop three activities to support flexible manufacturing for medical countermeasure development and production: the manufacture of pandemic influenza vaccines during an emergency; core services to support the development and production of chemical, biological, radiological, and nuclear (CBRN) medical countermeasures; and workforce training. During the contract base periods, each CIADM is to retrofit existing facilities or build new ones able to produce 50 million doses of pandemic influenza vaccine within 4 months of receipt of the influenza virus strain and to establish the capacity to provide core services, such as assisting countermeasure developers by manufacturing products to be used for clinical trials. The CIADMs are also required to develop workforce training programs, which are intended to increase expertise in CBRN medical countermeasure development. The CIADM base contracts are intended to retrofit or build facilities that stand ready to provide these three activities and to maintain this readiness through annual contract option periods. Once the facilities are prepared to provide these activities, BARDA may place task orders for provision of CIADM vaccine surge capacity, core services, or training, and BARDA, through the task orders, would provide additional payments to obtain these services. The FFMN is to supplement the CIADMs' pandemic influenza surge capacity, packaging up to 117 million doses of pandemic influenza vaccine in 12 weeks, if needed, and its facilities can also provide core services as CIADM subcontractors. HHS's CIADM core services activities are designed to support the development and production of certain CBRN medical countermeasures, but it is too early to tell how effective this approach will be. BARDA's establishment of the CIADMs implements a recommendation from HHS's review of the Public Health Emergency Medical Countermeasures Enterprise (PHEMCE)—a federal interagency body that advises HHS on medical countermeasure priorities. The CIADMs are to support the development of biologics-based countermeasures only, which are products like vaccines that are derived from living sources such as cells, because BARDA considers these countermeasures to need the greatest support. BARDA has identified some of its current biologics-based countermeasure development contracts that could use core services support and are priorities for PHEMCE.
However, the CIADMs are still completing activities associated with their contract base period. Thus, BARDA has not issued any task orders for core services to date, but has created a CIADM steering committee and completed guidance to govern the task order process once the CIADMs are operational. Until the CIADM core services are used, it will be unclear how effectively they will support the development and production of CBRN medical countermeasures. Stakeholders we interviewed were uncertain about the demand for core services and about the availability of funding for them. BARDA officials said that they anticipate having sufficient demand for the services and funding for task orders in fiscal years 2014 and 2015. |
Our analysis illustrates one of the difficult choices facing the Congress in crafting comprehensive DB pension reform legislation, including the controversial issues surrounding the legal status of CB plans, and particularly CB conversions. The current confusion concerning CB plans is largely a consequence of the present mismatch between the ongoing developments in pension plan design and a regulatory framework that has failed to adapt to these designs. Although CB plans legally are DB plans, they do not fit neatly within the existing regulatory structure governing DB plans. This mismatch has resulted in considerable regulatory uncertainty for employers as well as litigation with potentially significant financial liabilities. For many workers, this mismatch has raised questions about the confidence they may have in the level of income they expect at retirement, confidence that has already been shaken by the termination of large pension plans by some bankrupt employers. CB plans may provide more understandable benefits and larger accruals to workers earlier in their careers, advantages that may be appealing to a mobile workforce. However, conversions of traditional FAP plans to CB plans redistribute benefits among groups of workers and can result in benefits for workers, particularly those who are longer tenured, that fall short of those anticipated under the prior FAP plan. Our simulations suggest that grandfathering plan participants who are being converted can protect those workers' expected benefits, and, in fact, such protections, in some form, are fairly common in conversions. Our simulations also show that without such mitigation, many workers can receive less than their expected benefits when converted from a traditional FAP plan, even in cases where the CB plan is of equal cost to the FAP plan it is replacing. As a result, as we noted in our 2000 report, additional protections are needed to address the potential adverse outcomes stemming from the conversion to CB plans. For example, requirements for setting opening account balances could protect plan participants, especially older workers, from experiencing periods of no new pension accruals after conversion while other workers continue to earn benefits. Our simulated comparison of CB plans with the termination of a FAP plan leads to several important observations. First, the requirement that all unvested workers be immediately vested when a plan is terminated actually leads to a greater number of workers receiving some retirement benefits, which highlights the portability limitation of DB plans. Workers in an ongoing DB plan only receive benefits if they are vested. Appealing to a mobile workforce would seem to place even greater significance on pension portability. Yet even CB plans, which often feature lump sum provisions in their design, do not address this issue because they typically have vesting requirements similar to those of traditional FAP plans. In our simulations, vested workers under either a typical or equal cost CB plan still fare better than if the FAP plan is terminated. We note further that some sponsors of CB plans have already exited the DB system, a system that has been declining in sponsorship and participation for several decades now. There is a crucial balance to be struck between protecting workers' benefit expectations and imposing unduly burdensome requirements that could exacerbate the exodus of plan sponsors from the DB system.
Congress, as it grapples with the broader components of pension reform, has the opportunity not only to protect the benefits promised to millions of workers and eliminate the legal uncertainty surrounding CB plans that employers face, but also to craft balanced reforms that could stabilize and possibly permit the long-term revival of the DB system. We provided a draft of this report to the departments of Labor, Treasury, and the PBGC. No written comments were provided by these agencies. They did, however, provide technical comments, which we incorporated as appropriate. We plan to provide copies of this report to the Secretaries of the Department of Labor and the Department of Treasury and to the Pension Benefit Guaranty Corporation and interested congressional offices. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-5932. Other major contributors to the report are listed in appendix VI. I. Literature provides few generalizable conclusions, particularly with regard to why sponsors convert to CB plans and the benefit distributional effects of such conversions. II. Analyzed plan conversions show that most, but not all, converted accrued benefits into an opening account balance and offered some form of transition provisions, and that most had age and service eligibility restrictions on those transition provisions. III. Regardless of age, workers who were converted from a FAP plan to a typical CB plan generally had reductions from expected FAP benefits. A majority of younger workers received larger benefits under a conversion to an equal cost CB plan. Analysis of lifetime benefits under a conversion to an equal cost CB plan does not change these basic findings. Vested workers receive larger benefits under a CB conversion of either type compared to benefits received under termination of a FAP plan. Data and other methodological issues (e.g., sampling methods) limit the generalization of results. The impact of a conversion depends on a variety of factors, including plan generosity, transition provisions, and firm-specific employee demographics. Also, because of the different accrual patterns in a CB plan compared to a FAP plan, the impact of a conversion varies across workers. The literature offers little conclusive evidence on why sponsors convert to CB plans or on how participants are likely to fare under a CB plan relative to the traditional DB plan that is being replaced. CB plan conversions have distributional effects on pension wealth: younger, more mobile workers who vest usually benefit, while older workers with long job tenure are likely to experience a loss, particularly if they are near the age and service requirements for early retirement. There were two primary methods for setting the opening account balance: (1) present value (PV) of old accrual, in which the account balance is based on the accrued benefit at conversion; or (2) A+B, in which (A) prior benefits are preserved as annuities and (B) the CB opening balance is $0. The opening account balance depends on a formula that may include factors such as interest rates, employer-added incentives, early retirement benefits, and other assumptions.
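The two opening-balance methods, and the wearaway issue discussed next, can be illustrated with a simplified calculation. This is a sketch under strong assumptions: present value is computed as a single deferred lump sum with no mortality table, and the benefit amount, rates, and the 12.0 annuity factor are hypothetical, chosen only to make the mechanics visible.

```python
# A simplified illustration of the two opening-balance methods and of
# wearaway; all dollar amounts and rates are invented for this example.

def pv_accrued_benefit(annual_benefit: float, years_to_65: int,
                       discount_rate: float, annuity_factor: float = 12.0) -> float:
    """Present value today of the annuity accrued under the old FAP plan."""
    value_at_65 = annual_benefit * annuity_factor
    return value_at_65 / (1.0 + discount_rate) ** years_to_65

# Method 1 (PV of old accrual): the opening balance equals the present
# value of the benefit accrued at conversion.
opening_pv_method = pv_accrued_benefit(12_000, years_to_65=15, discount_rate=0.055)

# Method 2 (A+B): the prior benefit is preserved as an annuity (A) and the
# CB opening balance starts at zero (B).
opening_ab_method = 0.0

# Wearaway: if the plan's conversion interest rate exceeds the 30-year
# Treasury rate, the opening balance falls below the protected present
# value, and the participant accrues no new value until credits catch up.
statutory_pv = pv_accrued_benefit(12_000, 15, discount_rate=0.055)  # Treasury rate
plan_opening = pv_accrued_benefit(12_000, 15, discount_rate=0.070)  # higher rate
print(f"shortfall to wear away: ${statutory_pv - plan_opening:,.0f}")
```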
23 of 39 plans with data available used conversion interest rates within 1 percent of the prior month's 30-year Treasury rate. Wearaway may occur when a participant's hypothetical opening account balance is set at less than the present value of his or her accrued benefits calculated using the 30-year Treasury rate, as specified under the Internal Revenue Code. Transition provisions (e.g., grandfathering, transition pay credits) are important factors in mitigating wearaway. Grandfathering prevents wearaway for participants who continue to accrue benefits under the prior plan formula. Grandfathering was offered in 47 percent of all conversions and in 55 percent of the largest converted plans, although most of these provisions had some form of age or service restrictions. Eligibility requirements in plans offering grandfathering were based on age plus service, on age or service alone, or extended to all employees; age plus service was the method most often used. 62 percent of all conversions used some form of weighted pay credits (those that increase based on the participant's age and/or service), while 36 percent of all conversions used level pay credits (those that are a level function of salary). About 42 percent of large conversions used an age plus service method for providing ongoing pay credits. Weighted pay credits tend to benefit older and longer-tenured workers relatively more than level pay credits. The typical FAP plan used in our simulations has immediate eligibility and 5-year cliff vesting; normal retirement age 65; early retirement age 55 with 10 years of service, with an early retirement benefit reduction of 5 percent per year; immediate disability retirement benefits for those vested; no survivors benefits or joint-and-survivor annuities; benefits paid as a nominal annuity (i.e., no benefit COLA); terminal earnings (final pay) equal to the final five-year average; and a benefits formula that is excess integrated, with a base rate of 1.5 percent of final pay per year of service and a rate of 0.45 percent of final pay per year of service for those amounts in excess of the social security maximum. This typical FAP plan design is based on prior GAO reports, literature reviews, and discussions with pension actuaries and consultants knowledgeable about DB plans. The typical CB plan has immediate eligibility and five-year cliff vesting; a base pay credit of 3.0 percent of salary for an employee with age-plus-service (APS) ≤ 35, with the pay credit rising gradually until it is 6.0 percentage points above the base pay credit for an employee with APS ≥ 70; a cash-balance account crediting rate equal to the Treasury rate; rollover of the account balance at separation, with the rollover earning the Treasury rate; and balances converted to a nominal single-life annuity at retirement using the Treasury rate and the GAM 83 mortality table adjusted to the pertinent year. This typical CB plan design is based on plans analyzed in GAO's Form 5500 data, and was confirmed by pension actuaries and consultants knowledgeable about CB plans. The equal cost CB plan uses the same assumptions as the typical CB plan except that the base pay credit is 7.35 percent of salary for an employee with APS ≤ 35, with the pay credit rising gradually until it is 6.0 percentage points above the base pay credit for an employee with APS ≥ 70. The equal cost CB plan used for our simulations has pay credits more generous than virtually all plans in our sample and more generous than those specified in pension research. Though not explicitly modeled, to some extent our equal cost cash balance plan could be considered to implicitly include other enhancements made by employers to other benefits, such as those provided by a DC plan, for example. Regardless of age, all vested workers who converted to a typical CB plan experienced monthly benefit increases compared to a terminated FAP plan.
At conversion ages 30, 40, and 50, increases range from $150 per month for conversions at age 30 to $305 per month at age 50. Grandfathered benefits for those eligible under the CB plan greatly impact the results shown for older workers. (See figure 7.) Terminated plan benefits are shown for only those participants who were vested in the typical CB plan. Workers converted to a typical CB plan from a typical FAP plan at earlier ages generally receive reduced benefits. A comparison of lifetime benefits for the typical CB plan and the typical FAP plan does not change the basic findings from the monthly benefit comparisons. Regardless of age at conversion, more workers who are converted from a FAP plan to the typical CB plan have a lower present value of lifetime benefits. (See figure 8.) Nearly half of workers experiencing a conversion at age 50 are grandfathered in their FAP benefit. Grandfathering protects eligible older workers' benefits when they are converted to an equal cost CB plan from a FAP plan. (See figure 9.) More workers who converted from a FAP plan to an equal cost CB plan at age 30 experience monthly benefit increases than at later conversion ages. Increases range from $90 per month for conversions at age 30 to $29 per month for conversions at age 50. (See figure 10.) Reductions range from $75 per month for conversions at age 30 to $128 per month for conversions at age 50. All vested workers converted to an equal cost CB plan experience benefit gains compared to a terminated FAP plan. Median increases range from $283 per month for conversions at age 30 to $396 per month for conversions at age 50. Grandfathered benefits for older workers under the CB plan greatly impact the results. (See figure 11.) Terminated plan benefits are shown for only those participants who were vested in the equal cost CB plan. A comparison of lifetime benefits for the equal cost CB plan and the typical FAP plan is consistent with the basic findings from the monthly benefit comparisons. (See figure 12.) More workers converted to an equal cost CB plan from a typical FAP plan at age 30 receive a greater present value of lifetime benefits through conversion than would at later conversion ages. Nearly half of workers experiencing a conversion at age 50 are grandfathered in their FAP benefit, while a significant number (41%) of unprotected workers converted at age 50 experience a lower present value of lifetime benefits. Outside of grandfather protections, the results show a redistribution of benefits from older workers to younger workers. GAO compiled a comprehensive list of the academic literature on CB pension plans since our last reports on the subject issued in 2000, focusing on those studies that contained original and material empirical work on the issue. After constructing a list of the relevant literature, we eliminated partial or incomplete studies, those that did not contain material empirical work, and those that exhibited serious methodological concerns. We then conducted a more detailed review of the remaining studies, including several surveys of CB plans. The review concentrated on the studies' findings and on the methodological issues that may limit the conclusions that can be reached. A list of the studies and surveys reviewed for this report appears at the end of this appendix. Although there are academic studies that attempt to go beyond anecdotal information, the literature remains in its infancy. Data and other methodological issues often limit the conclusions that the empirical studies examining the impact of plan conversions can reach and the ability to generalize their results.
In general, the results of all studies are sensitive to assumptions regarding earnings growth, interest rates, investment returns, and turnover rates. Because the simulations presented in some studies are not described in sufficient detail, it is difficult to evaluate the quality of the estimates in some cases. Because of the limited availability of data on actual conversions and on the workforce associated with a particular conversion, few empirical studies have the ability to examine actual conversions. Because a range of factors that are unique to each conversion influence the final impact on workers (including demographic characteristics, the transition benefits offered during the conversion, and the generosity of the new CB plan relative to the old plan it is replacing), it is difficult to extend the results of the literature to the actual experience of workers. For example, in the conversion to a new plan, a sponsor may eliminate early retirement subsidies, a significant reason why older workers may receive lower benefits. Similarly, some employers may offer transition benefits that can help to ameliorate the adverse effects of plan changes on the more senior segment of the workforce, while others do not. Other studies focus on "hypothetical" or "prototypical" workers instead of actual employees and therefore cannot make definitive statements about many segments of the population or actual workers in the plans analyzed. In addition, the majority of the research simulates the effects of plan conversions on the workforce assuming that the conversion is cost neutral (the cost of the new CB plan is equal to the cost of the old DB plan, so that overall pension benefits remain constant). However, some research suggests that the retirement benefit implications of a shift to a less generous CB plan differ materially from the effects of a cost-neutral conversion. Moreover, several studies were limited to plans that include transition benefits that often ensure that existing workers do not suffer significant losses in pension wealth during plan conversions, and that exclude pension wealth on previous jobs. Thus, their inclusion or omission may lead to a bias in the empirical findings either in favor of or against CB plan designs. Some studies examine only a few plan conversions or rely on assumptions based upon information extracted from the limited surveys discussed below. Since the plans analyzed may not be representative, the outcomes may not generalize to the typical CB conversion or to the broader workforce. A few widely cited studies that use survey data in an attempt to determine the reasons why employers initiate CB plan conversions contain methodological limitations, base their conclusions on employers' self-perceptions along with additional biases, and cannot be extended beyond the small samples of firms studied. For example, one study is limited by a low response rate (20 percent) and by insufficient information about the population, the sampling method, and the survey instrument and its development, while the others raise concerns over the potential for sample bias and/or the additional bias due to the fact that over half of the plans evaluated were those for which the researchers were the primary design consultants. In general, we determined that the results from these surveys may not be representative of the population of CB plan conversions, and methodological limitations suggest that the results should be interpreted with caution. Clark, Robert.
"Pension Plan Options: Preferences, Choices and the Distribution of Benefits." Pension Research Council Working Paper, PRC WP 2003-24. Clark, Robert, and Fred W. Munzenmaier. "Impact of Replacing a Defined Benefit Pension with a Defined Contribution Plan or a Cash Balance Plan." North American Actuarial Journal, 5 (1) (2001): 32-56. Clark, Robert, and Sylvester Schieber. "The Transition to Hybrid Pension Plans in the United States: An Empirical Analysis." Private Pensions and Public Policies, eds. W. Gale et al. Washington, D.C.: Brookings Institution, 2004. Coronado, Julia, and Philip Copeland. "Cash Balance Pension Plan Conversions and the New Economy." Federal Reserve Board Working Paper, November 2003. D'Souza, Julia, John Jacob, and Barbara Lougee. Why Do Firms Convert to Cash Balance Pension Plans? An Empirical Investigation. Cornell University, December 2004. Johnson, R.W., and C. Uccello. "Cash Balance Plans and the Distribution of Pension Wealth." Industrial Relations, 42 (4) (2003): 745-773. Mellon Financial Corporation. 2004 Survey of Cash Balance Plans. Secaucus, N.J.: 2004. Niehaus, Greg, and Tong Yu. "Cash-Balance Plan Conversions: Evidence on Excise Taxes and Implicit Contracts." The Journal of Risk and Insurance, 72 (2), 2005. PricewaterhouseCoopers. Survey of Conversions from Traditional Pension Plans to Cash Balance Plans. July 2000. Purcell, Patrick. Pension Issues: Cash Balance Plans. Washington, D.C.: Congressional Research Service, August 2003. Rao, A., L. Higgins, and S. Taylor. "Cash Balance Pension Plans: Good News or Bad News." Journal of Applied Business Research, 18 (3), 2002. Samwick, Andrew, and Jonathan Skinner. "How Will 401(k) Plans Affect Retirement Income?" American Economic Review, 94 (1), March 2004. Schieber, Sylvester. "The Shift to Hybrid Pensions by U.S. Employers: An Empirical Analysis of Actual Plan Conversions." Pension Research Council Working Paper, PRC WP 2003-23. Watson Wyatt Worldwide. The Unfolding of a Predictable Surprise: A Comprehensive Analysis of the Shift from Traditional to Hybrid Plans. 2000. To obtain information about CB plan conversions, we reviewed 2001 Form 5500 data for a random sample of CB plans. We drew this sample from the population of plan sponsors that indicated on their Form 5500 that they sponsored a CB plan. The study population consisted of all CB plans as of 2001 having at least 100 active participants, supplemented with an additional 96 CB plans that were identified by PBGC based on 2002 and 2003 data not yet available to GAO. For the purpose of this report, we excluded plans having fewer than 100 participants in order to focus on the plans with the greatest number of participants. This resulted in a total of 843 plans in our study population. We used the Form 5500 as our primary source of information for analyzing the prevalence of transition provisions used by plan sponsors when they converted to a CB plan because it was a cost-effective way of obtaining conversion information for a large number of plans. It would have been optimal to obtain summary plan descriptions (SPD) from plan sponsors. However, since plan sponsors are no longer required to file SPDs, direct contact with such a large number of plan sponsors would have been cost prohibitive. Although the Form 5500 is the most comprehensive source of pension data available, using it also presented limitations and weaknesses.
We had limited ability to determine the full scope of conversions beyond tax year 2001 since this was the most current and complete 5500 data publicly available from the Department of Labor (Labor) when we began our analysis. In addition, we also had difficulty obtaining Form 5500 filings for some years, particularly from the early 1990s and before. As previously reported by GAO, statutory reporting requirements, processing issues, and current Labor practices affect the timeliness of the release of available Form 5500 information, in some cases resulting in a 3-year lag between data reporting and its release. In addition, information provided on the form and attachments proved, in some instances, to be inconsistent from one plan sponsor to another. This inconsistency hampered our data collection efforts, and consequently we were unable to provide meaningful results on all of the information our data collection instrument was designed to capture. For example, we found that not all plans reported having a lump sum feature for those who separate before retirement, although we believed some of those plans did so. In addition, some plans provided extensive details on discount rates and formulas used in their opening account balance calculations while others provided no information. In situations where we could not find information on the form or its attachments, we recorded this as "information not found." Finally, although the Form 5500 provides information on the number of active participants in the entire plan, it was often impossible to determine how many of those participants were converted to the CB plan in instances where only certain employee groups were converted. Nevertheless, our estimates are based on plan-level data. The sample design for this study was a stratified random sample of CB plans, with the 45 largest plans comprising the first stratum, and an additional 160 plans selected from the remaining plans, producing a total sample of 205 plans. Of these sampled plans, we obtained sufficient plan information for 165; we found 21 plans to be out of scope for our study (not CB plans); and we could not obtain sufficient information for 19 plans. Also, of these 205 sampled plans, 7 plans started a new CB plan only for new employees, while keeping their existing employees in the traditional DB plan. We did not include these plans in our analysis since they were start-up CB plans and not converted CB plans. This sample disposition information is summarized in table 1. After obtaining Form 5500s, attachments, and summary plan descriptions where available for sampled plans, we recorded plan features on a standardized instrument containing 51 questions designed to capture information about the characteristics of the traditional DB plan, such as the type of DB plan in place before the conversion; the conversion itself, such as when it took place, which employees were affected, and the type of transition provisions used; and the ongoing features of the CB plan, such as pay credits and interest credits provided at the time of conversion. Estimates of converted CB plans were based on our sample of CB plans. Estimates for this target population were formed by weighting the survey data to account for both the sample design and the completion rate. Because we surveyed a sample of CB plans, our estimates are subject to sampling errors that are associated with samples of this size and type. A different random sample could produce slightly different estimates.
Our confidence in the precision of the results from this sample is expressed in 95 percent confidence intervals. The 95 percent confidence intervals are expected to include the actual results for 95 percent of the samples of this type. We calculated confidence intervals for our study results using methods that are appropriate for a stratified, probability sample. For the percentages presented in this report, we are 95 percent confident that the results we would have obtained if we had studied the entire study population are within ± 9 or fewer percentage points of our results. For example, we estimate that 47 percent of the CB plan conversions offered some form of grandfathering. The 95 percent confidence interval for this estimate would be no wider than ± 9 percentage points, or from 38 percent to 56 percent. In addition to sampling error, the practical difficulties in conducting sample file reviews of this type may introduce other types of errors, commonly referred to as nonsampling errors. For example, questions may be misinterpreted, or errors could be made in keying questionnaire data. We took several steps to reduce these errors: each completed data collection instrument was verified for accuracy, and a process of content analysis was undertaken to resolve interpretation differences. We performed 100 percent verification of all keypunched questionnaire data. We also traced and verified the data collection instrument to descriptive statistics and output generated by GAO data analyst staff. In the event of changes, the entire verification process was performed again, which included 100 percent verification of the new keypunched data, additional content analysis to verify the change being made, and reverifying the output generated by the data analyst staff. In addition, we recorded a plan as having a characteristic only if evidence of that characteristic was found in the file review. For example, it is possible that some CB plans had transition provisions at conversion that were not clearly indicated in the 5500 files and attachments. We can only conclude that evidence of transition provisions being offered was not found in the 5500 data for such plans. To analyze the effects of a CB plan conversion on individual workers, we used a pension policy simulation model, PENSIM. PENSIM is a dynamic microsimulation model for analysis of the retirement income implications of government policies affecting employer-sponsored pensions. The model has been developed by the Policy Simulation Group (PSG) since 1997 with funding from the Office of Policy and Research at the EBSA of the U.S. Department of Labor. To meet GAO's needs for this project, the model includes several enhancements that permit the analysis of CB plan conversions. PENSIM uses discrete-event simulation methods to generate a sample of life histories that reflect the effects of individual risks (mortality, disability, earnings, etc.). The likelihood and timing of simulated life events are represented by a variety of probability models, including hazard functions and multinomial logit models that have been estimated using various survey data sets. The timing of job history events and employer pension sponsorship are estimated using longitudinal SIPP data and longer-term longitudinal PSID data. Simulated life histories contain information on educational attainment, disability, mortality, and a complete job history that includes details on earnings and pension accumulation for each job.
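The discrete-event structure just described can be sketched in a few lines of code. This toy version, with a constant exponential job-exit hazard and a fixed DB coverage probability, stands in for PENSIM's estimated hazard and multinomial logit models; every parameter value below is illustrative, not taken from the model.

```python
# A toy version of a discrete-event job-history simulation: draw successive
# job durations from an exponential hazard and flag DB coverage and vesting
# on each job. PENSIM's actual models are estimated from SIPP/PSID data.
import random

def simulate_job_history(start_age=22.0, end_age=65.0,
                         annual_exit_hazard=0.15,
                         db_coverage_rate=0.25, seed=1):
    rng = random.Random(seed)
    jobs, age = [], start_age
    while age < end_age:
        duration = min(rng.expovariate(annual_exit_hazard), end_age - age)
        jobs.append({
            "start_age": round(age, 1),
            "years_on_job": round(duration, 1),
            "db_covered": rng.random() < db_coverage_rate,
            "vested": duration >= 5.0,  # 5-year cliff vesting
        })
        age += duration
    return jobs

for job in simulate_job_history():
    print(job)
```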
Details of pension plan(s) covering a worker on a job are assigned using a pension characteristics imputation model, which has been estimated with late-1990s BLS Employee Benefit Survey data. Life histories simulated by PENSIM generate social security benefit and payroll tax results similar to those generated by the Congressional Budget Office's long-term social security model (CBOLT). PENSIM simulates the pension accruals of employees as they move from job to job over their lifetime and estimates their retirement income from a lifetime of pension coverage. With its CB plan analysis capability, PENSIM can also simulate changes in retirement income caused by conversions from traditional defined benefit pension plans to CB pension plans. PENSIM produces a large random sample of simulated life histories for people born in a given year and for their spouses, who may have been born in a different year. For our report, we do not include spousal benefits in the analysis. The members of the birth cohort sample experience demographic and economic events, the incidence and timing of which vary by age, gender, education, disability, and employment status. The types of life events that are simulated in PENSIM include demographic events (birth, death); schooling events (leaving school at a certain age, receiving a certain educational credential); family events (marriage, divorce, childbirth); initial job placement; job mobility events (earnings increases while on a job, duration of a job, movement to a new job, or out of the labor force); pension events (becoming eligible for plan participation, choosing to participate, becoming vested, etc.); and retirement events. This broad scope of simulated life events is necessary in order to simulate lifetime pension accruals with any realism. Three pension plans are used in this study to simulate several kinds of private-sector plan conversions and terminations. The baseline from which the conversion/termination analysis starts is a typical final-pay defined benefit pension plan ("typical FAP"). This typical FAP plan has common private-sector characteristics and a benefit formula that produce an employer cost of providing the pension equal to the average cost of the full variety of final-pay plans observed in BLS Employee Benefit Survey data. The second plan considered in the analysis is a typical CB pension plan ("typical CB") that has been specified to have characteristics found to be typical of the plans we analyzed in the GAO Form 5500 data collection conducted as part of this study. The third plan is a more generous version of the typical CB pension plan ("equal-cost CB") that has been constructed to have the same employer cost as the typical FAP plan.
The typical FAP plan has the following characteristics: 5-year cliff vesting; normal retirement age of 65; early retirement age of 55 with 10 years of service, with benefits reduced by 5 percent for each year of early retirement (i.e., a 50 percent reduction at age 55); immediate unreduced disability retirement benefit for those who are vested; no survivors' benefit for those who die on the job; selection of a single-life annuity at retirement (no selection of a joint-and-survivor annuity because the study ignores survivors' benefits); benefit paid as a nominal annuity (i.e., no benefit COLA); FAP equal to the highest consecutive five-year average of pay; and a benefit formula that is excess integrated, with a base rate of 1.5 percent of final pay per year of service and a rate of 0.45 percent of final pay per year of service for those amounts over the social security maximum. The typical CB plan has the following characteristics: 5-year cliff vesting; a base pay credit of 3.0 percent of salary for an employee with age plus service of less than or equal to 35; a pay credit that rises gradually until it is 6.0 percentage points above the base pay credit for an employee with age plus service greater than or equal to 70 (this results in a maximum pay credit of 9.0 percent of salary); an interest credit calculated using the current 30-year Treasury rate; the employee always rolls over the full account balance into an IRA at job termination; the rollover account earns the current 30-year Treasury rate each year; account balances are converted to a nominal single-life annuity at retirement using the Treasury rate, current projected mortality rates, and projections of future reductions in mortality, with an annuity loading fee set to ensure that the provider is solvent (i.e., 1.5 percent for women and 3.0 percent for men); at conversion, the opening account balance is equal to the statutory present value of the accrued benefit under the old plan; and at conversion, an employee with age plus service greater than or equal to 60 is grandfathered in the old plan, so that the benefit at job end can never be lower than it would have been if the old plan had continued operating. The equal-cost CB plan has the same characteristics as the typical CB plan except that the base pay credit is 7.35 percent of salary for an employee with age-plus-service (APS) ≤ 35, rather than the 3.0 percent of salary in the typical CB plan, and the pay credit rises gradually until it reaches a maximum of 6 percentage points above the base pay credit for an employee with age plus service greater than or equal to 70. These three plans are used to simulate the following conversion and termination situations: typical CB plan versus typical FAP plan; typical CB plan versus FAP plan that is terminated with no replacement of any kind; equal-cost CB plan versus typical FAP plan; and equal-cost CB plan versus FAP plan that is terminated with no replacement of any kind. All PENSIM runs conducted for this study simulate a 3 percent sample of the 1955 birth cohort using historical information through the present and 2004 OASDI Trustees Report intermediate-cost assumptions for the future projection. The resulting cohort sample consists of 151,263 individuals born in 1955 either in the U.S. or elsewhere (who immigrated to the U.S. in a subsequent year).
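The plan formulas just listed translate directly into code. The sketch below implements the excess-integrated FAP formula, the 5-percent-per-year early retirement reduction, and the CB pay credit schedule; linear interpolation between APS 35 and APS 70 is our reading of "rises gradually," and the $87,900 figure (the 2004 social security taxable maximum) is used only as an example input.

```python
# Sketch of the benefit formulas described above; interpolation and example
# inputs are assumptions for illustration.

def fap_annual_benefit(final_pay: float, service: float, ss_max: float) -> float:
    """Excess-integrated formula: 1.5% of final pay per year of service plus
    an additional 0.45% per year on pay above the social security maximum."""
    excess_pay = max(0.0, final_pay - ss_max)
    return 0.015 * final_pay * service + 0.0045 * excess_pay * service

def early_retirement_factor(retire_age: int, normal_age: int = 65) -> float:
    """Benefits reduced 5 percent for each year of early retirement."""
    return 1.0 - 0.05 * max(0, normal_age - retire_age)

def cb_pay_credit_rate(age: float, service: float, base: float = 0.03) -> float:
    """Pay credit: `base` at APS <= 35, rising to base + 6 points at APS >= 70
    (use base=0.0735 for the equal-cost CB plan)."""
    aps = age + service
    if aps <= 35:
        return base
    if aps >= 70:
        return base + 0.06
    return base + 0.06 * (aps - 35) / 35.0

# Example: 30 years of service, $60,000 final average pay, retiring at 65.
print(fap_annual_benefit(60_000, 30, ss_max=87_900))  # 27000.0 per year
print(early_retirement_factor(55))                    # 0.5 (50% reduction)
print(cb_pay_credit_rate(age=45, service=10))         # ~0.064
```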
The PENSIM runs differ only in their assumptions concerning private-sector sponsorship of the typical FAP plan (which is assumed to be offered by all private-sector employers who are simulated to offer a FAP DB plan) and the typical or equal-cost CB plan (which is assumed to be offered by all private-sector employers who are simulated to offer a CB DB plan). The employment history of each individual and coverage/participation in employer-sponsored DB and DC plans are a key component in determining the lifetime benefits for each individual. Pension benefits accumulated as a result of movement to different employers during a person's entire work history are included in reported results. Pension coverage across a lifetime may include participation in a variety of DB and DC plans or no coverage at all. Workers who are not covered under either a private-sector FAP or a CB pension plan are excluded from the study analysis. Most of the study analysis focuses on those who have vested in at least one private-sector FAP or CB plan. Public-sector FAP plans are assumed to be unchanged across all runs, and all other types of DB plans (i.e., other than FAP or CB) and all types of DC pension plans in all sectors are assumed to be unchanged across all the runs. Additionally, all the PENSIM runs used in this study contain the exact same life histories and job careers for the cohort sample. That is, the only change that takes place in all PENSIM runs is whether the private-sector DB plan is a FAP or a CB plan. The simulation analysis provides the following general results about the cohort sample: sample individuals who had at least one private-sector FAP or CB pension plan: 57,049 (100.0 percent); sample individuals who never vest in such a plan: 20,274 (35.5 percent); sample individuals who vest in such a plan but die before age 68: 6,882 (12.1 percent); sample individuals who vest in such a plan and live to age 68: 29,893 (52.4 percent); of the 29,893, 87.0 percent vest in just one FAP or CB pension plan over their lifetime, while 12.3 percent vest in two plans, and all but three of the rest vest in three such plans; and of the 26,018 who vest in just one FAP or CB pension plan, only 10.2 percent accumulate thirty years or more of service on that job. The study makes four pair-wise comparisons between PENSIM runs: (1) typical CB plan versus ongoing typical FAP plan, (2) equal-cost CB plan versus ongoing typical FAP plan, (3) typical CB plan versus terminated typical FAP plan, and (4) equal-cost CB plan versus terminated typical FAP plan. In each comparison, the difference in lifetime pension income between the two runs is calculated for each sample individual. Lifetime pension income includes all pension benefits earned during a person's career, even if they are unaffected by the assumed change in employer pension sponsorship between the two runs. Lifetime pension income is expressed in one of two ways: the present value of all pension income received over the individual's lifetime or the monthly pension income received at age 68. In both cases, the monetary amounts are expressed in 2004 dollars. The conversion/termination of the typical FAP plan is assumed to occur at one of eight ages: 25, 30, 35, 40, 45, 50, 55, and 60. The entire cohort sample was put through eight separate simulation runs, one simulation run for each age. Results are shown for those who were vested in a job that was caught in a conversion.
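A simplified sketch of the first of the two lifetime-income metrics follows. A fixed death age replaces PENSIM's simulated mortality, and the monthly benefits and discount rate are invented for illustration; the only point is the bookkeeping of discounting a benefit stream back to 2004, when the 1955 cohort is age 49.

```python
# Toy calculation of the present value of lifetime pension income in 2004
# dollars, and of the run-versus-run difference for one individual. All
# benefit amounts and the discount rate are hypothetical.

def pv_lifetime_income(monthly_benefit: float, claim_age: int, death_age: int,
                       discount_rate: float, base_age: int = 49) -> float:
    """Discount a level nominal monthly benefit back to the 1955 cohort's
    age-49 year (2004)."""
    return sum(12 * monthly_benefit / (1.0 + discount_rate) ** (age - base_age)
               for age in range(claim_age, death_age))

# Difference for one simulated individual between a CB run and a FAP run.
cb_pv = pv_lifetime_income(1_150, claim_age=65, death_age=83, discount_rate=0.05)
fap_pv = pv_lifetime_income(1_300, claim_age=65, death_age=83, discount_rate=0.05)
print(f"lifetime PV difference (CB minus FAP): ${cb_pv - fap_pv:,.0f}")
```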
The conversion provisions (opening balance and grandfathering) described above for the typical and equal-cost CB plans were found to be typical in our analysis of the Form 5500 sample drawn for this study. Based on our Form 5500 sample plan analysis and meetings with consultants who are experts on CB plans, there was concurrence that the opening CB account balance would be equal to the present value of the accrued benefit under the old plan at the conversion date. The expected present value of the accrued benefit is calculated using a GAM83 mortality table adjusted to the proper year and the current Treasury rate as the discount rate. If eligible for grandfathering, an individual receives the higher of two amounts at job termination: the accumulated CB account balance under the new plan and the expected present value of the benefit the individual would have received if the typical FAP plan had not been converted. The expected present value is calculated using the same mortality and discount assumptions as used in the opening balance calculation. All individuals affected by a conversion or termination are covered under the federal anticutback rules. The PENSIM runs use these same mortality and discount assumptions for the anticutback calculations. The employer cost of sponsoring a pension plan is defined as the percentage ratio of the present value of benefits paid to all individuals who worked on a job where that pension plan was sponsored to the present value of earnings paid to those same individuals. The present value calculations use Treasury rates to discount both the benefit and earnings cash flows. For a FAP plan, the benefit cash flow is the annuity payment stream. For a CB plan, the benefit cash flow is the CB amount paid at job termination. All employer cost estimates are for the 1955 birth cohort. Using a younger birth cohort would produce a higher employer cost rate for the typical FAP plan because of rising life expectancy and about the same employer cost rate for the typical CB plan because of its earnings-based benefit formula. The estimated employer cost rates are as follows: typical FAP plan, 7.545 percent; typical CB plan immediately after conversion, 5.870 percent; and equal-cost CB plan with averaged conversion costs, 7.547 percent, an estimate of the ongoing cost of that plan after all conversion costs have been paid. There are several reasons why the estimated employer cost of the typical CB plan immediately after the conversion (5.870 percent) is about 22 percent below the estimated employer cost of the typical FAP plan (7.545 percent). First, the typical FAP plan has been constructed to reflect the full variety of private-sector FAP plans contained in the BLS Employee Benefits Survey data used to impute plan characteristics in PENSIM. The characteristics of the typical CB plan are drawn from the Form 5500 sample used for this study and from discussions with pension experts and actuaries who confirmed that the characteristics were in the range of what they believe was typical for CB plans. This sample of CB plans is the largest available sample, and the only sample to be drawn using statistical sampling methods. The difference in the estimated employer cost rates for these two plans is consistent with prior research. Specifically, the cost difference reported here is somewhat smaller than the cost difference for typical plan conversions reported in a widely cited study by Watson Wyatt Worldwide.
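The employer cost measure and the grandfather rule defined above can be expressed compactly. In this sketch the cash flows are toy numbers for a single worker; PENSIM aggregates actual simulated benefit and earnings flows across all covered workers and discounts them with Treasury rates, so the printed cost rate here is illustrative only.

```python
# Sketch of the employer cost ratio and the grandfather (higher-of) rule;
# all cash-flow amounts and the 5.5 percent rate are hypothetical.

def present_value(cash_flows, rate):
    """Discount a list of (years_from_now, amount) pairs to today."""
    return sum(amount / (1.0 + rate) ** t for t, amount in cash_flows)

def employer_cost_rate(benefit_flows, earnings_flows, rate):
    """Percentage ratio of PV of benefits paid to PV of earnings paid."""
    return 100.0 * present_value(benefit_flows, rate) / present_value(earnings_flows, rate)

def benefit_at_job_end(cb_balance, pv_old_plan_benefit, grandfathered):
    """Grandfathered workers receive the higher of the CB account balance
    and the expected value of the benefit under the old FAP plan."""
    return max(cb_balance, pv_old_plan_benefit) if grandfathered else cb_balance

# Toy single-worker example: 30 years of $50,000 earnings and one CB payout
# at job termination, both discounted at an assumed 5.5 percent rate.
earnings = [(t, 50_000) for t in range(30)]
benefits = [(30, 270_000)]
print(f"employer cost rate: {employer_cost_rate(benefits, earnings, 0.055):.2f}%")
print(benefit_at_job_end(160_000, 185_000, grandfathered=True))  # 185000
```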
In the Watson Wyatt study cited above, the employers who convert typical (i.e., middle of the cost distribution) FAP plans to CB plans (the 20 percent in deciles 5 and 6 in table 9) experience an immediate defined-benefit pension employer cost reduction of about 19 percent (18.72 percent in the fifth decile and 19.76 percent in the sixth decile). However, the 22 percent cost reduction estimated in this study and the 19 percent cost reduction estimated in the Watson Wyatt study are not directly comparable because of differences in the Watson Wyatt life history simulations, which ignore disability events and therefore underestimate the cost of FAP plans. To make our estimates comparable to the Watson Wyatt estimates, we subtracted the 0.487 percent disability costs from the 7.545 percent cost rate, yielding a without-disability employer cost estimate for the typical FAP plan of 7.058 percent. Our estimate of the immediate cost of the typical CB plan is 5.870 percent, which is about 17 percent below the 7.058 percent without-disability estimate. This 17 percent immediate employer defined-benefit cost reduction is about the same as the 19 percent reduction found in the Watson Wyatt study. The following staff members made major contributions to this report: Charles A. Jeszeck, Assistant Director; Kimberley M. Granger, Analyst-in-Charge; Joseph Applebaum, Kevin Averyt, Richard Burkard, Virginia Chanley, Tamara Cross, David Eisenstadt, Lawrence Evans Jr., Benjamin Federlein, Nila Garces-Osorio, Sharon Hermes, Jason Holsclaw, Gene Kuehneman Jr., Michael Maslowski, Amanda Miller, Michael Morris, Luann Moy, Macdonald Phillips, Mark Ramage, Tovah Rom, Nyree Ryder, George Scott, and Roger Thomas.
Nonetheless, cash balance research indicates that the effects of a conversion depend on many factors, including the generosity of the CB plan, transition provisions that might limit any adverse effects on current employees, and firm-specific employee demographics. CB plan conversions are posited to have distributional effects on expected pension wealth: younger, more mobile workers usually benefit, while older workers with long job tenure are more likely to experience a loss, particularly if they are nearly eligible for early retirement. GAO's analysis of a representative sample of plan conversions determined that most conversions occurred between 1990 and 1999, primarily in the manufacturing, health care, and finance and insurance industries. Most conversions set participants' opening account balances equal to the present value of benefits accrued under the previous plan, although the interest rate used to calculate the balance varied around the 30-year Treasury bond rate. Most plans provided some form of transition provisions to mitigate the potential adverse effects of a conversion on workers' expected benefits, at least for some employees. About 47 percent of all conversions grandfathered at least some of the employees into the former traditional DB plan. In most cases, grandfathering eligibility was limited to employees meeting a specified minimum age and/or years of service. GAO's simulations of the effects of conversions on pension benefits show that in conversions from a traditional DB plan to a typical CB plan, most workers, regardless of age, would have received greater benefits under the DB plan. Unless grandfathered into the former plan, older workers experience a greater loss of expected benefits than younger workers. Further, in comparing a typical CB plan to a terminated FAP plan, all vested workers would do better under the CB plan. Also, in conversions from a traditional DB plan to a CB plan of equal cost to the sponsor and more generous than the typical CB plan, more workers at age 30 would have benefit increases under the CB plan, but this was not true for those at ages 40 and 50. Moreover, in comparing an equal-cost CB plan to a terminated FAP plan, again all vested workers would do better under the CB plan. Finally, GAO's comparisons focusing on the lifetime present value of benefits did not change the basic findings of GAO's analysis of monthly benefits.
Assisted living is usually viewed as a residential care setting for persons who can no longer live independently and who require some supervision or help with activities of daily living (ADLs) but may not need the level of skilled care provided in a nursing home. It is promoted by assisted living advocates as a long-term care setting that emphasizes residents' autonomy, independence, and individual preferences and that can meet their scheduled and unscheduled needs for assistance. Typically, assisted living facilities provide housing, meals, supervision, and assistance with some ADLs and other needs such as medication administration. However, there is no uniform assisted living model, and considerable variation exists in the types of facilities or settings that hold themselves out to be assisted living facilities. In some cases, assisted living facilities may serve residents who meet the level-of-care criteria for admission to a nursing home. Unlike residents of nursing homes, the majority of whom receive some support from Medicaid or Medicare, most residents of assisted living facilities pay for care out of pocket or through other private funding. However, public sources of funding are available to help pay for services for some residents. For example, some states are attempting to control rising Medicaid costs by encouraging the use of assisted living as an alternative to more expensive nursing home care. Currently, 32 states use Medicaid funds to reimburse for services provided to Medicaid beneficiaries residing in assisted living facilities. However, Medicaid payments do not cover the cost of room and board in assisted living facilities. A combination of individuals' personal resources, residents' Supplemental Security Income (SSI) payments, and optional state payments pays for these costs. The states have the primary responsibility for overseeing the care that assisted living facilities provide residents, and few federal standards or guidelines govern assisted living. The four states we reviewed vary widely in what they require of these facilities. Generally, state regulations focus on three main areas—requirements for the living unit, admission and retention criteria, and the types and levels of services that may be provided. Some states have set very general criteria for the type of resident who can be served and the maximum level of care that can be provided, while other states have set more specific limits in these areas, such as not serving residents who require 24-hour skilled nursing care. Most facilities provide housekeeping, laundry, meals, transportation to medical appointments, special diets, and assistance with medications. Many facilities also provide skilled nursing services, skilled therapy services, and hospice care for their residents. More specialized services, such as intravenous (IV) therapy and tube feeding, are least likely to be available. Some services may be provided by facility staff or by staff under contract to the facility. In other cases, the facility may arrange with an outside provider to deliver some services, with residents paying the provider directly, or residents may arrange and pay for services on their own. We found considerable variation among facilities and among states in the needs of the residents they serve. The facilities we visited have some residents who are completely independent and ambulatory, some who have severe cognitive impairments, and some who are bedridden and require significant amounts of skilled nursing care.
Residents of assisted living facilities typically need the most assistance from facility staff with medications and bathing. Assistance with dressing and toileting or incontinence care are the next most frequently cited needs, and assistance is needed to a lesser extent with eating, transferring, and walking. The highest level of resident need for staff assistance with ADLs was reported among facilities in Oregon and those in Florida licensed as extended congregate care facilities. In addition, residents often have some degree of cognitive impairment, such as significant short-term memory problems, disorientation much of the time, or Alzheimer's disease or another form of dementia. Most facilities admit residents who are incontinent but can manage on their own or with some assistance, who have a short-term need for nursing care, or who need oxygen supplementation. Less than 10 percent of the facilities admit residents who are bedridden, require ongoing tube feeding, need a ventilator to assist with breathing, or require IV therapy, and most facilities discharge residents who develop these needs. Most facilities in Oregon indicated that they do not admit people who are bedridden, but half typically retain anyone who becomes bedridden while a resident. Given the variation in what is labeled assisted living, prospective residents must rely primarily on information supplied to them by facilities to select one that best meets their needs and preferences. They can obtain information in a variety of ways, including written materials, facility tours, personal interviews, and personal recommendations. However, in order to help prospective residents compare facilities and select the most appropriate setting for their needs, key information should be provided in writing and in advance of their decision to apply for admission. Yet we found that written material often does not contain key information; facilities do not routinely provide prospective residents with important documents, such as a copy of the contract, to use as an aid in decisionmaking; and written materials that are available are sometimes confusing or even misleading. According to consumer advocates and provider associations, consumers need to be informed about the services that will be provided, their costs, and the respective obligations of both the resident and the provider. Such information should include the cost of the basic service package and what it includes; the availability of additional services, who will provide them, and their cost; the circumstances under which costs may change; how the facility monitors resident health care; the qualifications of staff who provide personal care, medications, and health services; discharge criteria, such as when a resident may be required to leave the facility, and the procedures for notifying and relocating the resident; and grievance procedures. The majority of facilities responding to our survey said they generally provide prospective residents with written information about many of their services and costs in advance of their choosing to apply for admission. However, as shown in figure 1, only about half indicated that they provide information on the circumstances under which the cost of services may change, their policy on medication assistance, or their practice for monitoring residents' needs, and less than half indicated that they provide written information in advance about discharge criteria, staff training and qualifications, or services not covered or available from the facility.
Contracts range from a one-page standard form lease to a 55-page document with attachments. Some are written in very fine print, while others are prepared in large, easy-to-read type. Some contracts are complex documents written in specialized legal language, while others are not. Marketing and other written material provided by the facilities also varies widely, from a one-page list of basic services and monthly rent to multiple documents of more than 100 pages. We examined written marketing materials and contracts from 60 of the facilities that responded to our survey to determine whether they were complete, clear, and consistent with state laws. While most of the facility materials we reviewed were specific and relatively clear, we found that materials from 20 of the 60 facilities contained language that was unclear or potentially misleading, usually concerning the circumstances under which a resident could be required to leave a facility. Contracts and other written materials we reviewed were often unclear or inconsistent with each other or with requirements of state regulation regarding how long residents could remain as their needs change, resident notification requirements, and other procedural requirements for discharge. For example, the contract from a California facility was vague regarding the circumstances under which a resident could be required to move. It stated that the facility can discharge a resident for "good and sufficient cause" without elaborating on what the cause might be. The contract also failed to refer to state regulations that provide specific criteria for discharge or eviction. As shown in figure 2, the marketing material one Florida facility uses is potentially misleading in specifying that residents can be assured that if their health changes, the facility can meet their needs and they will not have to move again. However, the facility's contract specifies a range of health-related criteria for immediate discharge, including changes in a resident's condition or need for services that the facility cannot provide. The contract of an Oregon facility is inconsistent with requirements of state regulation regarding notification of residents before discharging them. Oregon regulations specify that residents may not be asked to leave without 14 days' written notice that a facility cannot provide the services they need. However, the facility's contract specifies that residents can be required to move immediately if they need more care than is available at the facility. (Figure 2 excerpts the relevant language: marketing material stating "So you can be assured if health changes occur, we can meet your needs. And you won't have to deal with the hassles of moving again."; the regulatory requirement that a "Resident may not be asked to leave without 14 days' written notice stating reasons for the request."; and contract language that the facility "may terminate this Residency Agreement immediately ... Due to changes in your physical or mental condition, supplies, services or procedures are required that ... by certification, licensure, design or staffing cannot provide.") The four states have similar requirements regarding the type and level of services that assisted living facilities must provide residents. In addition to basic accommodations such as room, board, and housekeeping, all the states require facilities to provide residents with basic services, including assistance with ADLs, ongoing health monitoring, and either the provision of or arrangement for medical services, including transportation to and from those services as needed.
All four states require assisted living facilities to conduct an initial assessment of a resident’s health, functional ability, and needs for assistance. They also require that facilities provide residents with reasonable advance notice of discharge or eviction, and they specify certain rights and procedures for residents to appeal or contest a facility’s decision to discharge them. State regulations also generally contain other consumer protection provisions such as those governing resident contracts, criminal background checks for staff, and residents’ rights. All four states require that facilities enter into contracts with residents, but they differ in the level of detail they require in these agreements. In addition, all four states require criminal background checks for direct care staff, and three states—California, Florida, and Oregon—require them for facility administrators as well. State regulations often differ, however, with respect to the level of skilled nursing or medical care that facilities can provide to residents and in the circumstances under which it can be provided. For example, California regulations contain a list of services that facility staff are generally not allowed to provide, such as catheter care, colostomy care, and injections. In contrast, Oregon has no explicit restrictions on the care that facility staff may provide, except that certain nursing tasks must be either assigned or delegated to a caregiver by a registered nurse. In addition, while all four states require facilities to provide some degree of supervision with medications, they differ in the degree to which facility staff can be directly involved in administering medications to residents. For example, in California, facility staff may not administer medications but may only assist residents to take their own medications. Requirements for staff levels, qualifications, and training also vary among the states. For example, Florida requires facilities to maintain a minimum number of full-time staff that is based on the total number of residents, California and Ohio require only that the number of staff be adequate to meet the needs of residents, and Oregon does not have any minimum staffing requirement. To ensure that assisted living facilities comply with the various licensing requirements, all four states conduct periodic inspections or surveys of facilities, and they may also conduct more frequent inspections in response to specific complaints. However, the four states vary in the frequency and content of assisted living facility inspections. The frequency of required licensing inspections ranges from at least twice a year for extended congregate care facilities in Florida to at least once every 2 years for assisted living facilities in Oregon. The content of periodic state surveys is driven primarily by the requirements in state regulations. To assist surveyors, Florida and Ohio have developed detailed guidelines, similar to those used for nursing home inspections. In contrast, surveyors in California and Oregon use a checklist that covers a subset of the regulations and focuses on a few selected elements. In addition to the state licensing agency, other state agencies play a role in the oversight of assisted living facilities. In the four states we examined, the state ombudsman agency has a role in overseeing the quality of care and consumer protection of residents in assisted living. 
The ombudsmen are intended to serve as advocates to protect the health, safety, welfare, and rights of elderly residents of long-term care facilities and to promote their quality of life. One of their primary responsibilities is to investigate and resolve complaints of residents in long-term care facilities, such as nursing homes, board and care homes, and assisted living facilities. Ombudsmen in Florida are also required to inspect each facility annually to evaluate the residents' quality of care and quality of life. In two of the four states, Florida and Oregon, APS agencies are responsible for investigating reports of alleged abuse, neglect, or exploitation of assisted living residents; determining their immediate risk and providing necessary emergency services; evaluating the need for and making referrals for ongoing protective services; and providing ongoing protective supervision. Given that the states vary in their licensing requirements for assisted living facilities and in their approaches to oversight, the type and frequency of quality-of-care and consumer protection problems identified by the states may not fully portray the care and services the facilities actually provide. Facilities in states with more licensing standards, more frequent inspections, or more agencies involved in oversight may be more likely to have problems identified and verified. Using available data and reports from state licensing, ombudsman, and APS agencies in the four states, we determined that 27 percent of the 753 facilities in our sample were cited for five or more quality-of-care or consumer protection related problems during 1996 and 1997. Most of these verified problems pertained to quality-of-care rather than consumer protection issues. As table 1 shows, 22 percent of the facilities we sampled had 5 or more verified quality-of-care problems during the period, and 9 percent of the facilities had 10 or more. The most commonly cited quality-of-care problems included inadequate care, staffing, and medication issues. Inadequate care problems included instances in which a facility was found to be providing inadequate care to residents as well as instances in which a facility did not demonstrate the capacity to provide sufficient care. Staffing problems included cases in which residents suffered harm as a result of insufficient numbers of staff in the facility, as well as cases in which facilities had no documentation to substantiate that required caregiver training had been provided. The second most frequently cited problems concerned staff qualifications and training and facilities not having sufficient staff to care for the residents. For example, in an Oregon facility, family members routinely assisted residents by changing soiled garments because the facility did not have enough staff. The third most frequently cited problem concerned medication-related issues, such as not providing residents their prescribed medication, providing them the wrong medication, or storing medication improperly. For example, an Oregon facility was found to have numerous medication problems, including (1) staff inconsistently and inaccurately transcribing physicians' medication orders to the residents' medication administration records, (2) medications often being borrowed or shared between residents, (3) one staff member signing out narcotics but another staff member on a different shift administering them to residents, and (4) unlicensed caregivers altering residents' prescription labels.
Commonly cited consumer protection problems included those related to the circumstances under which a resident could be required to leave a facility for health or financial reasons and those related to provisions in resident contracts. For example, a resident of an Oregon facility was told on admission that she could stay until she died. However, the facility issued her an eviction notice when she began to wander within the facility, and it raised her monthly charge from approximately $1,600 to more than $6,400. In Florida, a facility was cited for not having all state-required elements in the resident contract, such as the basic daily, weekly, or monthly rates and a list of available services and fees not included in the basic rate. In Florida and Oregon, the two states in which APS agencies have some responsibility for oversight of residents in assisted living facilities, resident abuse was also often cited. In Oregon, the APS agency verified 48 cases of abuse in 21 of the state's 83 assisted living facilities during 1996 and 1997. In one case, a resident was left on the toilet for 2 hours because the caregiver forgot to return to the resident's room, and there was no call button within reach. In Florida, the APS agency verified 39 cases of abuse in 25 facilities and 103 cases of neglect in 32 facilities during the 2-year period. Florida cases included an instance in which a 90-year-old resident was admitted to a hospital with a stage IV pressure ulcer and found to be dehydrated and poorly nourished. As a growing number of elderly Americans reach the point where they can no longer live independently, many look to assisted living facilities as a viable, homelike setting to meet their long-term care needs. While many residents may enter assisted living facilities with relatively few or minimal needs for supportive or health services, these needs generally increase with age or with declining health. Some assisted living facilities may be able to accommodate these changing and more intensive needs, while others may not. Fully understanding the strengths and limitations of facilities is important as consumers and their families attempt to make the best choice for what is often a difficult decision. These findings raise questions about the adequacy of state standards to ensure quality care and protect consumers, about appropriate approaches to ensure compliance with those standards, and about the adequacy of information available to help inform consumers' choices and decisions. Mr. Chairman, this concludes my statement. I will be happy to answer any questions that you or other members of the Committee may have.
Pursuant to a congressional request, GAO discussed quality-of-care and consumer protection issues in assisted living facilities in California, Florida, Ohio, and Oregon, focusing on: (1) residents' needs and the services provided in assisted living facilities; (2) the extent to which facilities provide consumers with sufficient information for them to choose a facility that is appropriate for their needs; (3) the four states' approaches to oversight of assisted living; and (4) the types of quality-of-care and consumer protection problems they identify. GAO noted that: (1) assisted living facilities vary widely in the types of services they provide and the residents they serve; (2) they range from small, freestanding, independently owned homes with a few residents to large, corporately owned communities that offer both assisted living and other levels of care to several hundred residents; (3) some assisted living facilities offer only meals, housekeeping, and limited personal assistance, while others provide or arrange for a range of specialized health and related services; (4) they also vary in the extent to which they admit residents with certain needs and whether they retain residents as their needs change; (5) given the variation in what is labeled assisted living, prospective residents must rely on information supplied to them by facilities to select one that best meets their needs and preferences; (6) in many cases, assisted living facilities did not routinely give consumers sufficient information to determine whether a particular facility could meet their needs, for how long, and under what circumstances; (7) moreover, GAO identified numerous examples of vague, misleading, or even contradictory information contained in written materials that facilities provide to consumers; (8) the states have the primary responsibility for the oversight of care furnished to assisted living facility residents; (9) all four states reviewed have licensing requirements that must be met by most facilities providing assisted living services, and state licensing agencies routinely inspect or survey facilities to ensure compliance with state regulations; (10) however, the licensing standards as well as the frequency and content of the periodic inspections vary across the states; (11) given the absence of any uniform standards for assisted living facilities across the states and the variation in their oversight approaches, the results of state licensing and monitoring activities on quality-of-care and consumer protection issues also vary, including the frequency of identified problems; (12) however, using available inspection surveys and reports from the other oversight agencies in the four states, GAO determined that the states cited more than 25 percent of the 753 facilities in its sample for five or more quality-of-care or consumer protection related deficiencies or violations during 1996 and 1997; and (13) state officials attributed most of the common problems identified in assisted living facilities to insufficient staffing and inadequate training, exacerbated by high staff turnover and low pay for caregiver staff.
SEC was created in 1934 to protect investors and maintain the integrity of the securities market. To accomplish its mission, the agency established four strategic goals: (1) to enforce compliance with federal securities laws, (2) to sustain an effective and flexible regulatory environment, (3) to encourage and promote informed investment decision making, and (4) to maximize the use of SEC's resources. CFTC, established in 1974, performs a comparable role in the futures industry. Its primary mission is to protect market users and the public from fraud, manipulation, and abusive practices related to the sale of commodity futures and options and to foster open, competitive, and financially sound commodity futures and options markets. CFTC has set three strategic goals to support its mission: (1) to ensure the economic vitality of the commodity futures and option markets; (2) to protect market users and the public; and (3) to ensure market integrity in order to foster open, competitive, and financially sound markets. Both SEC and CFTC are independent agencies that have five-member, presidentially appointed commissions led by chairmen who are designated by the President. SEC's and CFTC's headquarters are located in Washington, D.C.; SEC has a combination of 11 regional and district offices, and CFTC has 5 regional offices. In keeping with its mission, each agency has a regulatory responsibility to protect investors by ensuring the integrity of the securities and commodity futures markets. Once SEC or CFTC staff conducts an investigation and determines that a person or company has violated the law and should be charged, the agency authorizes a civil suit against the alleged violator in federal district court or a proceeding before an administrative law judge. On finding that a defendant has violated securities or futures laws, the court or the administrative law judge can issue a judgment ordering sanctions such as CMPs, disgorgement, and/or restitution. However, the agencies may decide not to seek disgorgement or restitution if it is found to be unwarranted—for example, if a violator did not make a profit from the illegal activity. Table 1 provides more information on some of the remedies available to a federal district court or an administrative law judge. SEC and CFTC both have collection programs and designated staff to track, collect, and manage CMPs and disgorgement or restitution orders. Specifically, as shown in figure 1, staff in SEC's Division of Enforcement (Enforcement) use the Case Activity Tracking System (CATS) to track investigations, enforcement actions, and matters under inquiry (issues that have the potential to turn into investigations). When a case has been delinquent for at least 10 days, SEC and CFTC staff can send a demand letter to the violator. The Debt Collection Improvement Act of 1996 (DCIA) requires all federal agencies, including SEC and CFTC, to refer non-tax debt more than 180 days delinquent to the Secretary of the Treasury for purposes of centralized administrative offset. Once such a referral is received, Treasury's Financial Management Service (FMS) activates the Treasury Offset Program (TOP), under which outstanding debts, including amounts due to SEC or CFTC as a result of judgments or settlement agreements, are collected by withholding federal payments that the government owes the debtor, such as tax refunds.
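The collection time line described above is mechanical enough to express directly. The following minimal sketch (the function and threshold names are ours; only the 10-day demand letter and the DCIA's 180-day referral trigger come from the text) is one way to encode it:

    from datetime import date, timedelta

    DEMAND_LETTER_AFTER = timedelta(days=10)   # staff can send a demand letter
    TOP_REFERRAL_AFTER = timedelta(days=180)   # DCIA: refer non-tax debt to Treasury

    def next_collection_step(delinquent_since: date, today: date) -> str:
        # Suggests the collection action implied by the age of the delinquency
        age = today - delinquent_since
        if age > TOP_REFERRAL_AFTER:
            return "refer to FMS for centralized offset under TOP"
        if age >= DEMAND_LETTER_AFTER:
            return "send demand letter to the violator"
        return "continue monitoring for payment"

For example, next_collection_step(date(2004, 1, 1), date(2004, 8, 1)) returns the referral step, since that debt is more than 180 days delinquent.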
During its collection efforts, FMS may negotiate compromise offers with debtors unable to pay the entire amount of a judgment and may accept less than the full amount if doing so is the only way to ensure that the violator pays at least some of the debt owed. SEC and CFTC must approve such offers for violators under their purview and may reject an offer or ask for further information if the supporting documentation is not satisfactory. In general, when a disgorgement fund is established, SEC attorneys can propose appointing a receiver to develop and administer a distribution plan to facilitate the collection of disgorgement and, in the case of Fair Funds, both CMPs and disgorgement, and the distribution of those funds to harmed investors. Receivers act independently of SEC and defendants in conducting their prescribed duties. They have primary responsibility for establishing the distribution plan, including a description of the actions that will be taken to identify harmed investors, and for ensuring that the appropriate taxes are deducted from the monies collected. Before Congress passed SOX, SEC could return only funds collected from disgorgement to persons who had suffered financial harm from securities violations. However, Section 308(a) of the act allows SEC to add CMPs to disgorgement funds. Section 308(c) of the act also requires SEC to report on the approaches the agency used, before the Fair Fund provision, to (1) provide compensation to harmed investors and (2) improve the collection rates for CMPs and disgorgement, in order to establish a benchmark for further action. Our previous reports contained two recommendations related to SEC's tracking of collection data that remained open. First, in 2002, we recommended that SEC develop appropriate procedures to ensure the accuracy and timeliness of information maintained in the Disgorgement Payment Tracking System (DPTS), which was the tracking system SEC used at the time to monitor disgorgement that had been ordered, waived, or collected. Second, in 2003, we recommended that SEC take the steps necessary to implement an action plan to replace DPTS with a new and improved collection tracking system. SEC has made progress in addressing these two recommendations by discontinuing its use of DPTS, modifying CATS to capture financial information, and establishing an improved procedure for entering data into CATS. Nevertheless, our fiscal year 2004 audit of SEC's financial statements disclosed inadequate internal controls over its reporting of penalty and disgorgement transactions. SEC plans to address this finding by strengthening its policies and its internal controls over existing processes. In addition, we found, and SEC agreed, that opportunities exist to improve CATS's usefulness. The agency is in the process of upgrading CATS to address the needs of a broader range of users, but the project is in its early stages. Agency staff estimate that it will not be fully complete until 2008. In 2002, we reported that weaknesses in SEC's procedures for entering and updating data in DPTS resulted in the system containing unreliable data. Our 2002 review of a sample of 57 enforcement cases found that 18 cases, or approximately 32 percent, contained at least one error in the amount of disgorgement ordered, waived, or collected, or in the status of the case or of the individual violators. We found that the sources used as a basis for entering data into DPTS did not always provide the most accurate information.
For example, we reported that staff in SEC's Office of the Secretary, who were responsible for entering data into DPTS, relied heavily on SEC litigation releases that, according to the staff, did not contain all the details of a disgorgement order. The staff also said that they did not independently verify the information in the litigation releases. In January 2003, an independent accountant confirmed that information in DPTS was not current and complete and reported that the system could not be relied upon for financial accounting and reporting purposes. As of October 2003, SEC discontinued its use of DPTS and began using CATS to capture the financial information that DPTS had tracked. This change was part of larger modifications to CATS made in response to a legislative requirement that SEC prepare audited financial statements for submission to Congress and the Office of Management and Budget (OMB). SEC modified CATS by adding fields to capture the necessary financial data—such as the amount of CMPs and disgorgement ordered, collected, and distributed—and established a policy of entering data on the amount of disgorgement and CMPs only if valid supporting documentation was available. SEC staff said they began collecting original source documents—copies of signed and stamped final judgments, administrative orders, and court dockets—from SEC's headquarters, regional, and district offices. SEC staff also told us that they entered financial data only for those cases with an open enforcement action as of October 1, 2002, the beginning of fiscal year 2003. As of February 2005, SEC staff said that they had entered data on almost all of the approximately 4,500 enforcement cases, involving over 12,000 defendants and respondents, that met SEC's criterion. We reviewed a sample of 45 cases tracked in CATS and determined that SEC had complied with its policy for improving data entry, which is consistent with our previous recommendation. Specifically, we found supporting source documents for each of the 45 case files we reviewed and were able to compare information from the source documents with the data in CATS (as reflected in a March 2005 printout). However, our comparison identified one case with a $300,000 discrepancy between the amount of disgorgement ordered and the amount entered in CATS. Although SEC has made progress in improving the reliability of CATS collection data, in May 2005, we reported that SEC had inadequate controls over its penalty and disgorgement activities, which increased the risk that such activities would not be completely, accurately, and properly recorded and reported for management's use in decision making. In response to our findings, SEC stated that the agency plans to strengthen internal controls and policies over its existing recording and reporting process and has begun a multiyear project to upgrade CATS. During this review, we found—and SEC agrees—that opportunities exist to further improve CATS's usefulness for key system users, including attorneys, case management specialists, and collection monitors in Enforcement. Specifically, we found that CATS does not allow the attorneys in Enforcement to perform customized searches or generate tailored reports on the status of their cases. According to SEC staff, certain search and reporting capabilities are available to a handful of management-level staff in the division but not to attorneys, who constitute the bulk of the division's workforce.
By not meeting the attorneys’ needs, CATS does not allow SEC to fully leverage its existing resources, and attorneys are not able to efficiently address their multiple and sometimes competing investigation, litigation, and collection duties. Similarly, we found that CATS currently does not meet all the needs of case management specialists and collection monitors. Some staff, whose positions were recently established to better track and report collection activities, have expressed concerns about CATS’s limited reporting and search capabilities. To compensate for these limitations, we found that collection staff in each of SEC’s headquarters, district, and regional offices are using their own ad hoc collection database—outside of and separate from CATS—to track the status of delinquent cases. According to the collection staff, these databases allow for faster reporting and retrieval of information than CATS but, because they also require the staff to enter some data twice, using additional databases could lead to inefficiencies. To address the various concerns of key users, including attorneys, case management specialists, and collection monitors, and to strengthen the inadequate internal controls identified in the 2004 financial statement audit, SEC has begun a multiyear effort to upgrade CATS. SEC staff said that they are trying to transform what is essentially a case tracking system into a case management system that would be useful to a broader range of users. For example, as part of the upgrade effort, SEC is seeking to allow attorneys to generate customized reports on their cases, search for information in memorandums, and establish a system that would notify staff and remind them of deadlines in their cases. According to SEC’s Office of Information and Technology, the upgraded system is also expected to address the needs of case management specialists and collection monitors by capturing and reporting data they require, eliminating the need for the separate databases. In December 2004, SEC released a draft requirements analysis for the upgraded system that contained steps to address the concerns of SEC’s user community. SEC approved funding for the first phase of the project in June 2005 and, according to staff, the project will be fully complete in 2008. SEC has taken actions consistent with five of eight open recommendations from our previous studies (table 2). The open recommendations that SEC addressed were aimed at improving collection activities—for example, SEC’s practices for referring delinquent cases to FMS—and addressing the need for additional collection resources. However, further actions are needed to fully address three remaining open recommendations, which are designed to improve SEC’s performance measures and program evaluations. Moreover, we identified three new concerns related to SEC’s management of collection staff, including (1) the lack of a formal process for assessing the impact of collection staff efforts, (2) the need for additional routine training and guidance to ensure the effectiveness of collection staff’s efforts, and (3) the need for more formal communication and coordination protocols between the two units that track and maintain CATS data in order to improve the efficiency of collection activities. 
Our review indicated that, since 2003, SEC has made more timely referrals of delinquent cases to FMS and developed a strategy for referring pre-guideline cases—that is, cases that existed at SEC before Enforcement implemented its internal collection guidelines in 2002. The agency has also worked with the SROs to establish fingerprinting guidelines and has begun analyzing data on SROs' sanctions. In addition, SEC has worked to ensure that the agency makes timely decisions on compromise offers presented by FMS and has increased the resources for handling collections and related tasks. Our 2001 report found that SEC staff lacked clear procedures to follow when referring delinquent cases to FMS for collection, as required by the DCIA. As a result, eligible delinquent debts were not promptly being referred to FMS, in turn hampering FMS's efforts to collect on SEC's behalf. In 2003, Enforcement implemented procedures to ensure more timely referrals of delinquent cases, but not enough time had elapsed at the time of our 2003 study to evaluate the effectiveness of the new procedures. During this review, however, we did find that SEC was making referrals to FMS before the 180-day time frame expired. Specifically, from a random sample of 45 cases, we identified and reviewed 6 delinquent cases that were eligible for referral and were able to verify that SEC had referred each of those cases before the 180-day limit. Our 2003 study also found that SEC staff had not identified a strategy for referring pre-guideline cases to FMS and did not know the extent to which the pre-guideline procedures for referring cases were being followed. We recommended that SEC staff establish a strategy that prioritized cases according to their collectability. During this review, SEC management said that all eligible delinquent cases had been referred to FMS for collection when SEC switched from tracking cases in DPTS to tracking them in CATS. Based on our review, we determined that SEC had not prioritized the cases but had assessed all outstanding cases for possible referral to FMS and sent forward the appropriate paperwork when applicable, including for pre-guideline cases. As part of our recent review of 45 randomly selected cases, we examined the referral status of 10 pre-guideline cases and found that only one case was eligible for referral and that SEC staff had referred it to FMS before 180 days expired. During our 2003 study, we examined the application review process for individuals seeking employment in the securities industry. During that review, we found that SEC's statute did not mandate that SROs such as NASD and NYSE require their member firms to ensure that fingerprints sent to the Federal Bureau of Investigation (FBI) as part of criminal history checks actually belonged to the applicants submitting them. Because this lapse in oversight could have allowed inappropriate persons to enter the securities industry, we recommended that SEC establish controls to ensure that fingerprints sent by SROs to the FBI actually belong to the applicants. In July 2004, SEC and CFTC formed a task force with representatives from several of their SROs to enhance controls over existing fingerprinting guidelines.
Using the FBI's guidance on best practices for preventing fingerprinting fraud in civil and criminal cases, the task force developed a set of improved fingerprinting guidelines, including a suggestion that applicants present two forms of identification instead of one immediately before fingerprints are taken or submit an attestation form in addition to the standard U4 attestation form. NASD made the fingerprinting guidelines available to its member firms on May 29, 2005. In 1998, we found that SEC was not analyzing industrywide data on disciplinary sanctions imposed by SROs to identify possible disparities that might require further review. We recommended that SEC conduct such an analysis and find ways to improve the SROs' disciplinary programs. Consistent with this recommendation, SEC developed a database to collect information on the SROs' disciplinary actions, but our 2003 study found that problems with the database were hampering SEC's ability to analyze the data. For example, the database did not capture multiple violations or multiple parties in a single case and did not support multiple users. We made a follow-on recommendation in our 2003 report that SEC analyze the data that had been collected on the SROs' disciplinary programs, address any findings that resulted from the analysis, and establish a time frame for implementing a new database. As we recommended, SEC has begun analyzing data on disciplinary actions that the SROs took in 2003 and 2004. According to SEC staff, the analyses have shown that sanctioning practices among SROs differ primarily because the facts and circumstances of the cases vary—for instance, in the number of defendants involved or the presence of other violations. SEC staff said that SEC will use the results of the analyses to determine the scope and timing of future SRO inspections. Also, as we recommended, SEC's Office of Compliance Examinations and Inspections has sought assistance from the agency's Office of Information Technology to develop a new, more reliable Web-based database that is scheduled to be deployed in September 2005. According to SEC staff, using the new database, SROs will be able to submit data to SEC online, an innovation that is expected to reduce data entry errors and increase the amount of time SEC staff can spend on mission-related work such as inspecting SROs and examining broker-dealers. The new database is also expected to provide virtually unlimited storage capacity, improved reporting capability, and greater stability. In 2001, we found that SEC had not always made prompt decisions on compromise offers submitted by FMS, reducing the likelihood of collecting on the debts. At that time, we recommended that SEC continue to work with FMS to ensure that compromise offers presented by FMS were approved in a timely manner. During this study, we found that SEC had been accepting or rejecting compromise offers within 30 days of receiving them from FMS, as required by SEC's internal policy. To ensure more timely responses, SEC management assigned one staff member to monitor and track compromise offers, maintain a schedule log, and serve as a liaison with FMS to handle missing documents or other problems. According to SEC data, SEC received 12 compromise offers via e-mail between July 16, 2003, and January 6, 2005, and was able to decide on seven of them within 30 days. The other five compromise offers were held up because of problems with missing documentation.
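Since the five held-up offers all failed on documentation, a completeness check is the natural first gate before SEC's 30-day decision clock can be met. A minimal sketch follows (hypothetical structure; the document list, described in the next paragraph, and the 30-day policy are the only details taken from the text):

    # Documents SEC procedures require for assessing a compromise offer
    REQUIRED_DOCUMENTS = {
        "credit bureau report",
        "recent financial statements",
        "tax returns for the preceding 3 years",   # the item most often missing or illegible
    }

    DECISION_DEADLINE_DAYS = 30   # SEC internal policy for deciding on an offer

    def missing_documents(submitted: set) -> set:
        # Returns the documents still needed before an offer can be evaluated
        return REQUIRED_DOCUMENTS - submitted

    offer_docs = {"credit bureau report", "recent financial statements"}
    print(missing_documents(offer_docs))   # {'tax returns for the preceding 3 years'}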
SEC's procedures require staff to use a variety of documents in assessing compromise offers, including credit bureau reports, recent financial statements, and tax returns for the preceding 3 years. However, until early 2005, FMS did not require its staff to submit tax returns to SEC along with compromise offers. The cases that were held up at SEC because of lack of documentation all involved tax returns—in one case, the returns were illegible, and in four they were missing altogether. On February 5, 2005, FMS issued a technical bulletin that directed staff to submit copies of tax returns for the 3 relevant years to SEC with all compromise offers. According to FMS, these new instructions should resolve any problems with missing documents and enable SEC to meet the 30-day deadline for deciding on compromise offers. In past studies, we found that SEC's Enforcement staff attorneys, who are responsible for collecting disgorgement, had other duties and competing priorities that hindered their collection efforts. For example, depending on the office to which they were assigned, attorneys were responsible for a variety of functions, including investigating potential violations of securities law, recommending actions SEC should take when violations were found, prosecuting SEC's civil suits, negotiating settlements, and conducting collection activities for CMPs levied. We recommended in 2002 that SEC consider contracting out some collection activities and increase its collection staff. Consistent with our recommendation, in 2003 SEC assessed the feasibility of contracting with private collection agents and proposed legislative changes that would allow the agency to contract with private collection agents. Furthermore, SEC created and filled over 20 positions, including collection attorneys, paralegals, monitors, and case management specialists, in its headquarters, district, and regional offices to assist in implementing the collection guidelines that the agency created in response to our 2002 recommendation that it establish such criteria so that collections could be maximized. Below are brief descriptions of the collection staff's roles and duties. SEC hired three attorneys to pursue collection efforts in headquarters. These attorneys review the evidence from initial asset searches to determine whether SEC should continue with collection activities or refer the case to FMS, and they advise SEC's regional staff attorneys on their collection cases. The lead attorney also manages SEC's collection unit, develops policies (including the agency's collection guidelines), and trains staff on the collection process. In 2003, SEC created 13 case management specialist positions to assist attorneys with administrative tasks associated with their investigations. The specialists perform data entry tasks and track enforcement matters. The number of attorneys that each specialist supports varies by location; for example, in one region a specialist supports 21–24 staff attorneys, and in another, approximately 50. To help resolve delinquent cases, SEC also designated existing staff in each of the 11 regional offices to monitor collection activities as a collateral duty and created and filled two collection paralegal positions for headquarters.
The monitors are responsible for keeping staff and collection attorneys apprised of upcoming deadlines, assisting in referring delinquent cases to FMS, and maintaining a collection database for the Enforcement Division that is separate from CATS. We found that SEC had made some progress in addressing our remaining three open recommendations related to (1) establishing performance measures to better track the effectiveness of SEC's collection efforts; (2) tracking, on an aggregate and individual basis, both receivers' fees and the amounts distributed to harmed investors to ensure that investor recovery is maximized; and (3) implementing collection guidelines and developing controls to ensure that staff follow the guidelines. However, as part of this review, we found that the agency could take further action to improve these areas. Under the Government Performance and Results Act, federal agencies are held accountable for achieving program results and are required to set goals and measure their performance in achieving them. We reported in 2002 that SEC's strategic and annual performance plans did not clearly lay out the priority that disgorgement collection should receive in relation to SEC's other goals and did not include collection-related performance measures. Further, we identified several limitations in using the agency's disgorgement collection rate as a measure of the agency's effectiveness. For example, the rate is heavily influenced by SEC's success in collecting or not collecting on a few large cases and by factors that are beyond a regulator's control, such as violators' ability to pay. We suggested other measures that SEC could consider, including tracking the percentage of disgorgement funds returned to harmed investors, measuring the timeliness of various collection actions, and tracking the number of violators ordered to pay disgorgement who go on to commit other violations. The last measure would help determine whether the agency's disgorgement orders were having a deterrent effect. During this review, we found that SEC had developed a performance measure for timeliness and included it in the agency's 2004 annual performance plan but had not collected data on this measure or reported on its results. The agency's timeliness measure, according to SEC's 2004 Performance Plan, is the "number and percent of defendants/respondents subject to delinquent disgorgement orders during the fiscal year for which the Enforcement staff did not formulate a judgment recovery plan within 60 days after the debt became delinquent." This measure could potentially be useful in tracking staff efforts to recover delinquent debt and comply with SEC's recently established collection guidelines. However, in its 2004–2009 Strategic Plan and 2004 Performance and Accountability Report, SEC continued to use the collection rate as its sole measure of collection performance. SEC staff acknowledged—and we have previously noted—that using only the collection rate has inherent limitations but added that the agency continued to use it because Congress and other agencies had come to expect that SEC would report the measure. While reporting the collection rate may serve other goals, it is not by itself a meaningful performance measure, and, as a result, SEC cannot fully determine the effectiveness of its collection program. During this review, we calculated SEC's collection rate for all cases (open and closed), as well as a separate rate for closed cases only.
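Both rates discussed below are simple ratios of dollars collected to dollars ordered, computed over different pools of cases. A minimal sketch (hypothetical record layout; the figures are illustrative, not actual SEC cases) shows the two variants:

    def collection_rate(cases, closed_only=False):
        # cases: list of dicts with "ordered", "collected", and "closed" keys
        pool = [c for c in cases if c["closed"]] if closed_only else cases
        ordered = sum(c["ordered"] for c in pool)
        return 100.0 * sum(c["collected"] for c in pool) / ordered if ordered else 0.0

    cases = [
        {"ordered": 500_000, "collected": 500_000, "closed": True},
        {"ordered": 250_000, "collected": 100_000, "closed": False},
    ]
    print(collection_rate(cases, closed_only=True))   # 100.0
    print(collection_rate(cases))                     # 80.0

Because one large ordered amount can dominate both the numerator and the denominator, a few big cases can swing the rate sharply, which is one reason the rate alone is a limited performance measure.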
As shown in table 3, SEC's penalty collection rate for closed cases between September 2002 and December 2004 ranged from 72 percent to 100 percent, and the rate for all cases ranged from 34 percent to 86 percent. While the percentage collected is a limited measure, as noted above, these rates represent a significant increase over the 40 percent collection rate for CMPs that SEC averaged from January 1997 through August 2002. During 2003, SEC imposed about $1 billion in penalties, up from about $85 million in 2002. According to SEC staff, from September 2002 through August 2004, SEC brought enforcement actions against large, well-financed entities such as mutual funds and major corporations that had been accused of financial fraud. Because SEC collected most of the penalties imposed in these large cases, its collection rate was significantly higher than in previous years. SEC management told us that the agency's collection rate is heavily influenced by the nature of the entity that the agency sues and noted that, if SEC sued companies or issuers that were not well financed, its collection rate would likely fall. As shown in table 4, SEC's collection rate for disgorgement levied on closed cases between September 2002 and August 2004 ranged from 56 percent to 100 percent, and the rate for all cases during the same period ranged from 13 percent to 34 percent. These rates also represent a substantial increase over the 14 percent collection rate for all cases involving a disgorgement order between 1995 and November 2001. We reported in 2002 that the collection rate for CMPs tends to be higher than the collection rate for disgorgement because SEC can take into account a violator's ability to pay when imposing a penalty but cannot do so when imposing disgorgement. We also reported that many violators ordered to pay a penalty are members of the securities industry and are motivated to pay their CMPs in order to maintain their reputation within the industry. However, we reported that many violators ordered to pay large disgorgement amounts are either not members of the securities industry or have no desire to remain so. As we have discussed in previous reports, these factors make using the disgorgement collection rate as the sole performance measure problematic and highlight the need for SEC to continue its efforts to develop alternative performance measures for collection activities. In previous GAO reports, we determined that SEC did not have a centralized system for monitoring information on distribution amounts and receiver fees, making it difficult for the agency to assess the overall effectiveness of distribution efforts and to ensure that harmed investors received the maximum amount of recovered funds. We recommended that SEC better manage disgorgement cases by tracking this information on both an aggregate and an individual case basis. In the past, SEC stated that it did not believe that aggregating these data would help determine how well it was managing collection cases or that being able to assess the reasonableness of receiver fees would necessarily provide information on whether defrauded investors should have or could have received more funds. SEC had also identified a number of obstacles that hampered its ability to address our recommendation—for example, the CATS database, which was designed to track individual case information but not to aggregate it.
Further, we were told that the agency lacked the information necessary to identify the amounts allocated to defrauded investors and receivers’ fees, and SEC staff told us that they did not always know how much receivers were paid. As a result, the agency has had to rely on the courts to provide this information, but the courts have not consistently provided it. During our work for this report, we learned that, despite its concerns about these obstacles, SEC had begun to make some progress in addressing this open recommendation. Specifically, SEC has updated CATS to identify distribution data and is in the process of drafting a standard form that will be used to request information from the courts on receivers’ fees. If the courts respond to SEC’s requests for this information, the agency should be better able to assess how well overall distribution efforts are working and whether harmed investors are being reimbursed as fully as possible for the harm caused to them by securities law violators. In our 2002 study, we found that SEC’s collection program lacked clear policies and procedures specifying the actions that staff should take to pursue collections. We commented that the lack of such guidance affected both staff and management, since staff were not held accountable to any clear standards and management could not determine whether staff took all collection actions promptly or ensure that opportunities to maximize collection were not missed. We recommended that SEC develop and implement collection guidelines and develop controls to ensure that staff follow them. Consistent with the first part of this recommendation, SEC has developed and implemented collection guidelines that specify the various collection actions staff can take, explain when such activities should be considered, and stipulate how frequently they should be performed. SEC has also hired additional staff to perform specific tasks outlined in its collection guidelines. However, uneven supervision has reduced the assurance that staff are following these guidelines. According to SEC management, the primary control in place to ensure that staff followed these guidelines is a periodic review, conducted by the lead collection attorney, of the 12 individual collection databases that collection staff use to track delinquent cases. However, this periodic review may not be timely or effective, since it could allow noncompliance with the guidelines or errors to go undetected for an extended period. Further, we found that some of the individuals involved in the collection process in some of SEC’s regional offices—specifically monitors and case management specialists—have supervisors who are not directly involved and may lack detailed knowledge of the collection guidelines. In addition, the level of supervision varies by location. For example, one of the regional case management specialists told us that an associate regional director oversees her work by reviewing a weekly CATS report that she generates. At another location, a case management specialist also told us that an assistant district administrator supervises her but does not formally monitor her work. We identified three new areas of concern that could impede SEC’s progress in realizing the benefits of improved collection efforts.
Specifically, we found that SEC lacks (1) a formal mechanism to monitor the effectiveness of the collection staff, (2) appropriate guidance and training for some collection staff, and (3) effective communication and coordination between two key units responsible for tracking collection activity. First, SEC does not have a formal mechanism to assess whether the increased collection resources are being used effectively. SEC management believes that, by taking on some collection-related administrative duties, the new collection staff have increased overall collection efforts and allowed enforcement attorneys to devote more time to investigating potential violations. However, without a formal process for determining the effectiveness of the increased resources, SEC cannot validate these benefits. SEC management explained that they have focused their attention on making changes to the collection process in preparation for the first external financial audit and thus have not yet been able to focus on assessing the effectiveness of the collection staff’s activities. As SEC’s collection process stabilizes, a formal approach to gathering and analyzing input from Enforcement staff attorneys who have interacted with collection staff would help determine whether the new staff positions were being used effectively and whether any improvements could be made. Second, our interviews with some of the case management specialists and collection monitors disclosed that some of the staff felt that they had not received sufficient guidance or training on new protocols for the collection procedures. SEC management told us that the agency had periodically added new protocols to the established procedures for tracking penalty and disgorgement data to help the agency prepare for its first external financial audit. In particular, SEC staff said that they had revised some internal controls and policies and procedures related to data entry and added additional data entry screens to CATS. Although new protocols addressing these changes have been communicated to the collection staff through various methods such as e-mails, monthly meetings, and monthly notifications, some of the collection staff identified the need for additional guidance. Moreover, some of them said that they would like to receive training on issues addressed in policy updates, as well as receive more formal training in how to interpret legal documentation such as judgments and how to work with FMS on collection issues. SEC management said that the agency has planned a workshop for the staff in late 2005 to provide information on these and related issues and anticipates that it will help meet some of the needs that staff have identified. Such attention should help the collection staff perform their duties more effectively. Third, since August 2004, Enforcement and the Office of Financial Management (OFM) staff have shared responsibility for tracking and maintaining penalty and disgorgement data in CATS, but the units lack formal procedures to ensure that their staffs communicate and coordinate activities. To prepare for the external financial statement audit, SEC transferred responsibility for entering financial data in CATS from Enforcement to OFM, since penalty and disgorgement activity are recorded in SEC’s financial statements.
Under the terms of the transfer, Enforcement would still enter most of the case-related data into CATS, such as the names of defendants and dates of judgments and orders, and OFM would enter data on the amounts of money ordered, collected, and distributed. However, this division of responsibilities has not always been effective. For instance, Enforcement staff need timely and complete information on amounts that have been collected in order to take appropriate collection actions, but communication with OFM staff is not always consistent and timely, making coordination difficult. As an example, when OFM staff enter financial data into CATS, they do not always notify Enforcement, so Enforcement staff must periodically check CATS to find out whether money has been collected and, in some instances, must contact OFM to determine the status of a case. Further, OFM is not always timely in entering data, resulting in delays that could hinder Enforcement staff’s collection efforts. SEC demonstrated its commitment to effectively implementing the Fair Fund provision of SOX by taking several steps. First, agency management has issued clear guidance to staff on how to generate Fair Fund monies from penalized offenders. As of April 2005, SEC had designated almost $4.8 billion to be returned to harmed investors, although, as of the date of this report, very little of it had been distributed, primarily because of time-consuming tasks that have to be completed before distribution can take place. Second, we found that SEC staff had begun to collect and aggregate Fair Fund data to help in assessing the agency’s performance in distributing funds to harmed investors. Finally, SEC has begun to address reporting requirements in its efforts to collect funds for distribution and in the methods it is using to maximize investor recovery. According to SEC staff, the agency is committed to using the Fair Fund provision, which allows money from CMPs to be added to disgorgement amounts, to help defrauded investors obtain more of the funds owed to them. SEC has issued guidance to its staff on interpreting and applying the provision—for example, explaining that ordering a disgorgement for as little as $1 can qualify a case as a Fair Fund case and make CMPs eligible for distribution. Among other cases, SEC applied this method in SEC v. Lucent Technologies, in which the company agreed to pay a settlement of $25 million in CMPs and $1 in disgorgement. In this particular case, SEC charged the company and 10 individuals with fraudulently and improperly recognizing approximately $1.148 billion of revenue and $470 million in pretax income during fiscal year 2000—a violation of generally accepted accounting principles (GAAP). The guidance also highlights several other important aspects of the Fair Fund provision. It discusses the legal and practical aspects of seeking disgorgement, including estimating the amount the defendant obtained illegally. It also instructs staff to include language preserving SEC’s ability to establish a Fair Fund at a later date in cases that are settled early, before it has been decided whether the Fair Fund provision will be invoked. Finally, SEC requires that language be added to judgments in all Fair Fund cases prohibiting violators from using amounts collected under a judgment to offset potential later judgments levied in third-party lawsuits.
Because allowing such offsets could reduce the amount of money investors received in these lawsuits, the SEC language also stipulates that, even if a court allows offset language in later judgments, the violator is obligated to pay the difference. This language is intended to aid attorneys in fairly and fully applying the Fair Fund provision and to help ensure that violators do not sidestep the intent of the Fair Fund provision. According to agency documents, SEC staff have successfully applied the Fair Fund provision in at least 75 cases since 2002 and, as a result of these efforts, more than $4.8 billion in disgorgement and CMPs had been designated for return to harmed investors as of April 2005. At the time of our review, although SEC had collected money for 73 of the 75 cases it identified, approximately $60 million from only three cases had been distributed to harmed investors, and funds totaling about $25 million from only one other case were being readied for distribution. SEC’s rules regarding Fair Funds and disgorgement funds state that “unless ordered otherwise, the Division of Enforcement shall submit a proposed plan no later than 60 days after the respondent has turned over the disgorgement….” However, SEC staff observed that appointing a receiver to establish a plan for distributing funds can sometimes be a lengthy process that can be further complicated by factors beyond the agency’s control. For example, in one case, an analysis of an extensive trading history had to be conducted to determine issues such as the extent to which funds were diluted and shareholders were harmed and to determine how to deal with tax considerations for the distribution recipients. In another instance, a company agreed to pay $80 million in disgorgement, CMPs, and interest, but a pending criminal indictment prevented SEC from distributing any funds until the criminal case is resolved. SEC acknowledged that the agency has an obligation to distribute funds to harmed investors in a timely manner and that SEC collection attorneys have begun to take on some of the tasks associated with distribution in an effort to expedite the distribution process. The collection attorneys also told us that they are working to develop a more standardized process for distributing funds to help ensure that staff attorneys perform this function properly. During this review, we found that SEC implemented the Fair Fund provision without having a method in place to systematically track the number of cases and the amounts of monies ordered, collected, and distributed, in part because CATS was not initially designed to identify this information. To gather information on Fair Fund cases, SEC management has had to request that staff attorneys submit ad hoc summaries of these cases, but the lack of a standard reporting format means that the information may be inconsistent. SEC management also has used data from CATS, Treasury’s Bureau of Public Debt database, and discussions with attorneys to compile information on Fair Fund cases, but this method also has limitations because it does not employ a reliable data entry process using source documents that account for all the cases. Without reliable, accessible data, SEC is limited in its ability to evaluate the overall effectiveness of its implementation of the Fair Fund provision.
During this review, we found that SEC had started to take steps to track data on Fair Fund cases by adding fields to CATS that allow case management specialists to enter appropriate data, including receivers’ fees, amounts distributed for Fair Fund and disgorgement cases, and amounts returned to Treasury. In addition, SEC staff said that information on all Fair Fund cases created before these modifications would be retroactively entered into the system. According to SEC management, SEC plans to compile and aggregate Fair Fund data, such as the number of cases and the associated monetary amounts, in order to better assess the provision’s impact. We also learned that SEC was using the amounts designated for return to harmed investors as an indicator of the program’s success. For example, when describing the Fair Fund program in SEC’s 2006 budget request, issued in February 2005, the agency stated that over $3.5 billion in disgorgement and CMPs had been designated for this purpose. However, these amounts alone may not be appropriate measures of the program’s success since harmed investors do not necessarily receive all the money. A more comprehensive indicator could include the amount of CMPs ordered as a direct result of the Fair Fund provision, the actual amounts distributed, and the length of time required to distribute the funds. SEC management told us that the agency plans to add an indicator on Fair Fund distribution to its agencywide performance “dashboard” that tracks the amount of funds returned to harmed investors. Such an indicator would be a useful output measure but would not provide complete feedback on the effectiveness with which SEC executed its responsibilities. Even so, to calculate its planned measure, SEC would have to collect data on how much money was actually returned to investors once taxes, fees, and other administrative costs were subtracted from the total amount collected. As required by Section 308(c) of SOX, SEC issued a report in January 2003 detailing the agency’s efforts in collecting funds to be returned to harmed investors and the methods used to maximize this recovery. The approaches involved “real time” enforcement initiatives such as temporary restraining orders, asset freezes, and the appointment of receivers to maximize recovery. SEC’s report also suggested some legislative changes that would assist the agency in maximizing recovery for defrauded investors, including the following three: a technical amendment to the Fair Fund provision that would permit SEC to include CMPs in Fair Funds for distribution to harmed investors in cases that do not involve disgorgement; a proposal to exclude securities cases from state law property exemptions, so that violators could not use these “homestead” exemptions to shield their assets from judgments and administrative orders; and a grant of express authority to SEC to contract with private collection agents. These proposed changes, in addition to others pertaining to enhancing enforcement capabilities and assisting defrauded investors, were included in H.R. 2179, the Securities Fraud Deterrence and Investor Restitution Act of 2003, which was introduced in the 108th Congress. The bill was reported favorably to the full House by the House Financial Services Committee, but no vote took place.
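The arithmetic behind the planned distribution measure is straightforward; what SEC lacked was the underlying data. The following minimal Python sketch illustrates the net-return calculation described above. The function name and all dollar figures are hypothetical and are offered only to make the calculation concrete, not to depict SEC’s systems or any actual case.

# Illustrative sketch of the net-return arithmetic described above: what
# investors actually receive is the amount collected less taxes, receivers'
# and other specialists' fees, and administrative costs. All names and
# figures are hypothetical.

def net_returned_to_investors(collected, taxes, fees, admin_costs):
    """Amount actually distributed to harmed investors from a Fair Fund."""
    return collected - taxes - fees - admin_costs

collected = 25_000_000   # total collected in a hypothetical Fair Fund case
taxes = 600_000
fees = 1_200_000         # receivers' and other specialists' fees
admin_costs = 300_000

net = net_returned_to_investors(collected, taxes, fees, admin_costs)
print(f"Net returned to investors: ${net:,}")                      # $22,900,000
print(f"Share of amount collected: {100 * net / collected:.1f}%")  # 91.6%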
In our 2001 report, we recommended that CFTC take steps to ensure that delinquent CMPs were promptly referred to FMS. In our 2003 report, we also recommended that CFTC work with SEC and the SROs to address weaknesses in fingerprinting procedures to ensure that only appropriate persons are admitted to the futures industry. As part of this review, we found that CFTC fully addressed these remaining open recommendations. We also updated our calculation of CFTC’s collection rates since our 2003 report (see appendix II). In 2001, we recommended that CFTC implement its Office of Inspector General’s (OIG) recommendation to create formal procedures to ensure that delinquent CMPs were sent to FMS within the required time frames. In an April 2001 report, CFTC’s OIG found that CFTC staff were not referring the delinquent debts to FMS in a timely manner, potentially limiting FMS’s ability to collect the monies owed. In 2004, CFTC’s OIG followed up on this issue and determined that, for the period from 2001 through 2004, CFTC had consistently complied with DCIA by referring delinquent debt to FMS within the allowable 180 days for collection services. CFTC’s OIG reviewed 21 uncollected penalty cases out of a universe of 187 CMPs that were eligible for referral between October 1, 2001, and August 31, 2004. Of the 21 cases, 8 were excluded from referral to FMS because they were either referred to the Department of Justice for further review or were in litigation. CFTC’s OIG found that, of the remaining 13 cases, CFTC had sent 12 to FMS within the required time frame. One case was not received by FMS due to an undetected facsimile transmission error. CFTC officials have stated that their Enforcement division has changed the way it transmits information when referring cases and now uses certified mail, which provides a receipt to confirm that the information has been delivered. During our 2003 study, we found that CFTC’s statute, like SEC’s, did not mandate that SROs such as NFA require member firms to ensure the validity of fingerprints submitted to the FBI by applicants to the futures industry. In response to our recommendation that CFTC address this weakness, CFTC, like SEC, worked with futures and securities regulators to prepare recommended guidance for best practices for fingerprinting procedures. According to CFTC officials, the agency agrees with the other task force members that issuing “best practices” guidance to the futures and securities industries could help prevent applicants from using someone else’s fingerprints as their own. CFTC officials said that NFA made some adjustments to the fingerprinting guidelines developed by the task force to tailor them to the futures industry and that the updated guidance was made available to NFA members on July 29, 2005. Over the past 2 years, SEC has undertaken a number of initiatives to enhance its ability to collect and track CMPs and disgorgement data and, to a lesser extent, monitor program effectiveness. SEC’s initiatives represent a significant investment by the agency to improve its program. However, our recent audit of SEC’s fiscal year 2004 financial statements and this follow-up review showed that SEC needs to continue improving various aspects of its collection program. In response to our financial audit report, SEC has planned a number of corrective actions to address the identified control weaknesses related to the recording and reporting of penalty and disgorgement transactions.
While SEC continues to address these internal control issues, it could also take steps to further maximize the effectiveness of its additional collection resources and strengthen the management of its collection program. Overall, SEC staff lack some of the tools and support they need to conduct collection activities and track collection data. In particular, the inadequacies that exist within the CATS database, uneven supervision of collection staff, and weak coordination between the two units responsible for tracking collection data collectively reduce the efficiency with which SEC staff carry out their responsibilities. Just as important, SEC management also does not have the appropriate tools to evaluate the effectiveness of the agency’s collection activities. Since expanding its collection staff, SEC has not formally assessed how the additional resources have assisted in the collection process and alleviated staff attorneys’ responsibilities. Without a formal approach, SEC is not able to determine whether its resources are being optimally utilized. SEC also still does not have meaningful performance measures to assess the effectiveness of the agency’s collection activities, inhibiting management’s ability to identify needed changes and make adjustments. Finally, SEC management has started to collect data to centrally monitor distribution activities to assess how well it is returning disgorgement funds to harmed investors, but these actions have not yet been completed. The Fair Fund provision has created the potential for returning more monies to harmed investors from securities law violators. SEC has demonstrated its commitment to using this provision, and its implementation efforts are noteworthy. Nevertheless, to date, the majority of the monies collected under the provision have not been distributed to harmed investors. We recognize that, as with other distribution funds, the complexity and circumstances of a case could contribute to the lapse in time between the collection of the monies and subsequent distribution. However, because of SEC’s traditional focus on deterring fraud and the relatively few distributions that have taken place, we are concerned that SEC may not be able to ensure the timely distribution of the growing sum of money that has been collected as a result of the establishment of Fair Funds. At a minimum, SEC should have reliable and meaningful data available to monitor the timely and complete distribution of Fair Fund monies. SEC has taken actions to strengthen its data tracking and management practices for its penalty and disgorgement collection program. However, the agency could take additional steps to ensure that collection staff members have the necessary tools and support to carry out their responsibilities efficiently and are being used effectively. Therefore, we recommend that the Chairman, SEC, take the following three actions: Develop a method to ensure that case management specialists and collection monitors in Enforcement receive consistent supervision and the necessary monitoring and guidance to carry out their duties and that SEC management can ensure that staff are following the collection guidelines. Establish procedures for staff in OFM to notify Enforcement staff on a timely basis about data entered into CATS.
Determine the effectiveness of new case management specialists, collection monitors, and collection attorneys by using formal approaches such as periodically surveying staff attorneys who interact with collection staff to evaluate the assistance the staff provide. In addition, we recommend that the Chairman, SEC, take the following three actions, including two that we have previously recommended, to continue to ensure that the collection program meets its goal of effectively deterring securities law violations and returning funds to harmed investors: Continue to identify and establish appropriate performance measures to gauge the effectiveness of collection activities and begin collecting and tracking data to implement the timeliness measure presented in SEC’s 2004 annual performance plan, if SEC still considers that measure appropriate. Ensure that management determines, on an aggregate basis, (1) the amount of disgorgement distributed each year to harmed investors, (2) the amount of CMPs sent to Treasury, and (3) the amount of receivers’ fees and other specialists’ fees and that the agency uses this information to more objectively monitor the distribution of monies to harmed investors. Ensure that management establishes a procedure for consistently collecting and aggregating its Fair Funds data to assist in the monitoring and managing of the distribution of monies to harmed investors and establishes measures to evaluate the timeliness and completeness of distribution efforts. We requested comments on a draft of this report from SEC and CFTC. Both agencies provided technical comments, which we have incorporated into the final report, as appropriate. SEC also provided written comments that are reprinted in appendix III. In its comments, SEC acknowledged that the Division of Enforcement’s efforts in data tracking and management practices are still in their early stages, but said that the agency is working diligently to strengthen its collection program. SEC also expressed agreement with our findings and all six of our recommendations and said that it is working to implement each of the recommendations. Specifically, SEC is in the process of (1) developing reports and training programs that will allow for consistent monitoring of the collection program nationwide, (2) developing a system by which OFM can notify Enforcement about data entered into SEC’s case tracking system, (3) determining the effectiveness of new collection processes and staff, (4) revising current performance measures to more effectively determine program performance, (5) collecting information on the amount of penalties and disgorgement distributed to investors and paid to receivers, and (6) developing systems to collect data on Fair Fund cases. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will provide copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs and its Subcommittee on Securities and Investment; the Chairmen, House Committee on Financial Services and its Subcommittee on Capital Markets, Insurance, and Government Sponsored Enterprises; and other interested congressional committees. We will also send copies to the Chairman of SEC, the Chairman of CFTC, and other interested parties. We also will make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To discuss the Securities and Exchange Commission’s (SEC) progress in addressing recommendations made in our 2002 and 2003 reports that were aimed at improving the agency’s tracking of data on civil monetary penalties (CMP) and disgorgement, we interviewed staff in SEC’s Division of Enforcement (Enforcement), Office of Financial Management (OFM), and Office of Information Technology (OIT) to obtain information on efforts they have made to implement the recommendations. To gain further information on SEC’s activities in upgrading its tracking system, we reviewed relevant documents, such as an internal Case Activity Tracking System (CATS) data entry guide (with associated procedures), sample data entry forms completed by Enforcement attorneys, a draft systems definition document for an upgraded case tracking system prepared by an SEC-hired contractor, an assessment of the accuracy and completeness of CATS data conducted by SEC’s Office of Inspector General, and GAO’s audit of SEC’s financial statements for fiscal year 2004. To assess the reliability of penalty and disgorgement data that SEC provided for the calculation of its collection rate, we interviewed staff in Enforcement and OFM about the new policies and procedures for entering data into CATS. We selected a random sample of 45 cases tracked in CATS to test the improved procedures by (1) reviewing case files for valid supporting source documents maintained by Enforcement staff, including final judgments, administrative orders, and court dockets, and (2) verifying data accuracy for penalty and disgorgement amounts ordered by comparing data recorded in source documents with data entered in CATS as of March 2005. We concluded that, for purposes of this report, the data provided by SEC were sufficiently reliable. To assess the steps SEC has taken to address our earlier recommendations on its management of the collection program and related issues, we conducted relevant testing of procedures, including those related to referrals and approvals of compromise offers, interviewed staff from SEC and other agencies involved in SEC’s collection activities, and reviewed pertinent documents. Specifically, to evaluate the effectiveness of SEC’s procedures for referring delinquent cases to the Department of the Treasury’s (Treasury) Financial Management Service (FMS) both before and after the collection guidelines were established, we interviewed SEC staff to discuss the actions they had recently taken to refer delinquent cases to FMS. Using our sample of 45 cases, we identified those that met the criteria for referral and used FMS’s records to verify that the cases had been referred and determine how quickly SEC submitted the referrals. Next, to assess SEC’s efforts to address our 2003 recommendation that the agency work with the securities and futures self-regulatory organizations (SRO) to address weaknesses in controls over fingerprinting procedures, we interviewed SEC staff to discuss actions taken since we made our recommendation and the status of the fingerprinting guidelines.
We also obtained a draft copy of the guidelines and reviewed the additional controls that had been proposed to prevent inappropriate persons from being admitted to the securities industry. To assess SEC’s progress in tracking SROs’ disciplinary actions and in implementing a new database to track them—a recommendation from our 2003 report—we reviewed the results of the analyses that SEC’s Office of Compliance Inspections and Examinations (OCIE) conducted of these actions, as of May 2005, and an internal planning document that OIT had prepared. We also interviewed OCIE and OIT staff about the efforts each office had made to address the recommendation. Further, to address a recommendation related to approval of compromise offers from FMS, we assessed SEC’s efforts to improve its timeliness by obtaining and analyzing data from SEC and FMS on all of the 12 compromise offers presented by FMS between July 15, 2003, and January 6, 2005, to determine whether SEC had met its internal time frame. We also interviewed SEC and FMS staff to discuss the effectiveness of SEC’s policies and procedures and to obtain information on SEC’s efforts to work with FMS to ensure the timely approval of offers. In addition, to determine whether SEC had implemented our 2002 recommendation that it complete an evaluation of options for addressing its competing priorities and increasing workload by assessing the feasibility of contracting out certain collection functions or increasing staff devoted to collections, we reviewed SEC’s study pursuant to a mandate in the Sarbanes-Oxley Act of 2002 (SOX) to obtain the results of the feasibility assessment. We also reviewed and followed up on the status of the Securities Fraud Deterrence and Investor Restitution Act, introduced in the 108th Congress, which included a number of legislative proposals that SEC had recommended in its study, such as contracting with private collection agencies to collect delinquent debt owed to the agency. Further, we interviewed SEC staff to discuss recent measures taken by the agency to increase its collection staff. Moreover, to determine if SEC had established alternative measures to its collection rates, as recommended in our 2002 report, we reviewed the agency’s 2004-2009 Strategic Plan, 2004 Performance Plan, and 2004 Performance and Accountability Report for collection indicators and interviewed staff in Enforcement to obtain their views on using alternative measures. In addition, to determine whether SEC had promptly implemented its collection guidelines and taken action to ensure that staff followed them, we reviewed the collection guidelines and job descriptions for case management specialists. We conducted structured interviews with nine collection staff members, including two attorneys, one regional collection monitor, one paralegal, and five case management specialists, three of whom also perform collection monitors’ duties, to discuss their duties in relation to the collection guidelines and their views on the level of training they have received. SEC management selected these individuals based on our criterion that we speak with one-third of the new collection staff. These staff members worked in headquarters and regional offices in Atlanta, Boston, Denver, and Miami. Finally, we reviewed collection checklists and screen printouts from the databases used by collection staff and interviewed SEC officials who manage the collection program and staff to discuss their role in SEC’s case tracking and collection process.
To evaluate SEC’s implementation of the Fair Fund provision, we reviewed Section 308(a)–(c) of the act and performed a legislative search and legal analyses. To determine how and when SEC applies the provision, we reviewed information from SEC’s Web site, the agency’s CATS database, a sample of distribution plans and rulings on cases to which the Fair Fund provision had been attached, and SEC’s Rules on Fair Fund and Disgorgement Plans and interviewed relevant SEC staff. Further, to determine the number of cases and the amount of CMPs and disgorgement ordered and collected since SOX was implemented in 2002, we reviewed two internal documents that summarized Fair Fund cases and amounts dated June 30, 2004, and April 22, 2005, and interviewed SEC staff on their use of the data. In addition, to gain a better understanding of the distribution process, we interviewed SEC staff on the data and controls they used to ensure that appropriate amounts were being returned to harmed investors. Moreover, Section 308(c) of SOX required that SEC report on (1) enforcement actions that SEC took to obtain CMPs or disgorgement for the 5-year period prior to the act’s implementation and (2) methods SEC used to ensure that injured investors were being fairly compensated. SEC issued this report in January 2003, and we reviewed it to determine whether SEC had met the legislation’s requirement. We also performed a legal analysis to assess whether receiving Fair Funds affected a harmed investor’s ability to sue a violator through private litigation. To describe the actions CFTC has taken to address previous recommendations, we interviewed relevant CFTC staff, reviewed collection documents they provided, and relied on CFTC’s Office of Inspector General’s (OIG) work. Specifically, to determine whether CFTC had complied with the Debt Collection Improvement Act of 1996 by referring delinquent debt to FMS, we relied primarily on CFTC OIG’s findings associated with this recommendation. In particular, we reviewed the OIG’s 2004 and 2001 audit reports and supporting work papers for our assessment of the timeliness of referrals. We also reviewed CFTC’s documents describing its collection workflow and processes and interviewed CFTC’s OIG staff and CFTC staff to discuss CFTC’s procedures on referring debt to FMS. Furthermore, to assess the actions CFTC has taken to address our recommendation on strengthening fingerprinting controls, we conducted our work on CFTC and SEC simultaneously. We obtained a copy of the draft for the new fingerprinting guidelines and reviewed them for additional controls to preclude inappropriate individuals from being admitted to the futures industry. Finally, to calculate SEC’s collection rates for CMPs and disgorgement and CFTC’s collection rates for CMPs and restitution, we requested data from each agency on the amount of these sanctions ordered from September 2002 through August 2004 and collected through December 2004. We chose September 2002 as the beginning of our time period in order to pick up where our 2003 report ended. As with our 2003 report, we limited our review to CMPs, disgorgement, and restitution ordered through August 2004 to allow SEC and CFTC through December 2004 (4 months) to attempt collections. Also consistent with our 2003 report, we calculated SEC’s and CFTC’s collection rates for all cases (open and closed cases) and closed cases only.
For purposes of our calculation, we defined open cases as “cases with a final judgment order that remained open while collection efforts continued” and closed cases as “cases with a final judgment order for which collection actions were completed.” We relied on SEC and CFTC to categorize cases as being open or closed, consistent with the above definition. We did not independently verify either SEC’s or CFTC’s classification of a case as being open or closed. For data provided by both agencies, we performed basic tests of the data’s integrity, such as checks for missing records and obvious errors. We concluded that the data provided by SEC and CFTC, for purposes of this report, were sufficiently reliable. We conducted our work from August 2004 to August 2005 in Washington, D.C., in accordance with generally accepted government auditing standards. We calculated the Commodity Futures Trading Commission’s (CFTC) civil monetary penalties (CMP) and restitution collection rates to provide updated information on CFTC’s activities through December 2004. As in our 2003 report, we calculated CFTC’s collection rate for all cases (open and closed) and closed cases only. As shown in table 5, from September 2002 through December 2004, CFTC’s CMP collection rate for all cases ranged from 38 percent to 100 percent and for closed cases only from 98 percent to 100 percent. Like the Securities and Exchange Commission (SEC), CFTC imposed significantly larger amounts of CMPs from September 2002 through December 2004 compared with previous years. For example, during 2003, CFTC imposed about $137 million in CMPs, up from $15.6 million in 2002. According to CFTC officials, there were three reasons for the increase. First, in 2002, CFTC was reorganized to leverage the Enforcement division’s investigation and litigation resources. This reorganization allowed the division to file more cases and, ultimately, to enter into an increased number of judgments imposing penalties. Second, by 2003, the Enforcement division was engaged in an industrywide investigation of the energy sector concerning attempted manipulation and false reporting conduct, and settlements in these cases resulted in the imposition of approximately $250 million in CMPs. Third, following reauthorization in 2001, CFTC’s jurisdiction over investigations of foreign exchange fraud was clarified; since that time, CFTC has begun to file more actions in this area. In one case, according to CFTC officials, a court entered separate judgments against the named defendants, imposing approximately $75 million in CMPs. However, unlike SEC’s collection activity, CFTC’s collection rate for CMPs did not significantly increase over previous years. For example, from September 2002 through December 2004, CFTC’s CMP collection rate for all cases was 46 percent. From January 1997 through August 2002, the agency’s collection rate was 45 percent. As shown in table 6, CFTC’s collection rate for restitution ranged from 4 percent to 8 percent for all cases and was 100 percent for closed cases only. In addition to the individual named above, Karen Tremba, Assistant Director, Emily Chalmers, Ronald Ito, Grant Mallie, Bettye Massenburg, Marc Molino, David Pittman, Carl Ramirez, Omyra Ramsingh, and Cheri Truett made key contributions to this report.

The Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) impose penalties, disgorgements, and restitution on proven and alleged violators of the securities and futures laws, respectively.
GAO has issued a number of previous reports on agency collection efforts and made numerous recommendations for improvement. This report follows up on open issues from the previous reports and (1) discusses SEC’s progress in improving its tracking of penalty and disgorgement collection data, (2) assesses the steps SEC has taken to improve collection program management, (3) evaluates SEC’s implementation of the Fair Fund provision in the Sarbanes-Oxley Act of 2002, and (4) describes CFTC’s actions to address previous GAO recommendations. In response to GAO’s previous recommendations, SEC has taken positive steps to improve its tracking of collection data, such as discontinuing its use of an unreliable tracking system, modifying its existing Case Activity Tracking System (CATS) to capture financial data, and establishing a policy for improved data entry. GAO’s review of 45 cases tracked in CATS revealed that SEC complied with its policy for improved data entry, a step that contributes to improving the overall reliability of SEC’s collection data. However, GAO identified additional actions that SEC can take to enhance CATS’s usefulness for key users, such as attorneys, collection monitors, and case management specialists in the Division of Enforcement. SEC is currently addressing this issue through a multiyear effort to comprehensively upgrade CATS. Agency officials estimate that the upgrade, which will be completed in phases, will be fully complete in 2008. SEC has also addressed some previous recommendations made to strengthen management of its collection program, such as increasing its collection staff and referring eligible delinquent cases to the Department of the Treasury’s (Treasury) Financial Management Service (FMS) on a timely basis. However, SEC must take further steps to address other recommendations designed to enhance management’s evaluation of program performance. During this review, GAO identified new issues that warrant SEC management attention. For example, although SEC has increased the number of staff devoted to collection efforts, the agency has neither developed a method to ensure that adequate and consistent supervision is provided to them nor formally assessed whether its additional resources are being used effectively. SEC also has not developed a procedure by which to ensure that two key units, both responsible for tracking collection activity, are effectively communicating and coordinating with one another. Since implementing Section 308(a) of the Sarbanes-Oxley Act of 2002 (commonly known as the Fair Fund provision), SEC has instructed its staff to use the provision aggressively and estimates that over $4.8 billion has been designated for return to harmed investors as a result of the provision’s enactment. However, to date, only a small amount of the funds has been distributed. According to SEC, distribution is often a lengthy process that can be further complicated by external factors such as a pending criminal indictment against the violator. GAO also found that SEC lacked a reliable method by which to identify and collect data on Fair Fund cases. SEC took action to address this issue, but efforts were still in their early stages. SEC has yet to analyze the data it has collected in order to fully determine the provision’s effectiveness in returning an increased fund amount to harmed investors.
CFTC implemented both recommendations from previous GAO reports related to controls over fingerprinting procedures and timely referral of eligible delinquent cases to Treasury’s FMS.
Geothermal energy is literally the heat of the earth. This heat is abnormally high where hot and molten rocks exist at shallow depths below the earth’s surface. Water, brines, and steam circulating within these hot rocks are collectively referred to as geothermal resources. Geothermal resources often rise naturally to the surface along fractures to form hot springs, geysers, and fumaroles. For centuries, people have used naturally occurring hot springs as places to bathe, swim, and relax. More recently, some individuals have constructed buildings over these springs, transforming them into elaborate spas and resorts, thereby establishing the first direct use of geothermal resources for business purposes. Businesses have also established other direct uses of geothermal resources by drilling wells into the earth to tap the hot water for heating buildings, drying food, raising fish, and growing plants. Where the earth’s temperature is not high enough to supply businesses with geothermal resources for direct use, people have made use of the ground’s heat by installing geothermal heat pumps. Geothermal heat pumps consist of a heat exchanger and a loop of pipe extending into the ground to draw on the relatively constant temperature there for heat in the winter and air conditioning in the summer. Geothermal resources can also generate electricity, and this is their most economically valuable use today. Only the highest temperature geothermal resources, generally above 200 degrees Fahrenheit, are suitable for electricity generation. When companies are satisfied that sufficient quantities of geothermal resources are present below the surface at a specific location, they will drill wells to bring the geothermal fluids and steam to the surface. Upon reaching the surface, steam separates from the fluids as their pressure drops, and the steam is used to spin the blades of a turbine that generates electricity. The electricity is then sold to utilities in a manner similar to sales of electricity generated by hydroelectric, coal-fired, and gas-fired power plants. In the United States, geothermal resources are concentrated in Alaska, Hawaii, and the western half of the country, primarily on public lands managed by the Bureau of Land Management (BLM). The Congress set forth procedures in the Geothermal Steam Act of 1970 for leasing these public lands, developing the geothermal resources, and collecting federal royalties. Today, BLM leases these lands and sets the royalty rate, and the Minerals Management Service (MMS)—another agency within the Department of the Interior (DOI)—collects the federal geothermal royalties and disburses to the state governments their share of these royalties as required by law. In 2005, MMS collected $12.3 million in geothermal royalties, almost all of which was derived from the production of electricity. Geothermal resources currently account for about 0.3 percent of the annual electricity produced in the United States, or 2,534 megawatts—enough electricity to supply 2.5 million homes. Even though the percentage of electricity generated from geothermal resources is small nationwide, it is locally important. For example, geothermal resources provide about 25 percent of Hawaii’s electricity, 5 percent of California’s electricity, and 9 percent of northern Nevada’s electricity.
As of January 2006, 54 geothermal power plants were producing electricity, and companies were constructing 6 additional geothermal power plants in California, Nevada, and Idaho that collectively will produce another 390 megawatts of electricity. Over half of the nation’s electricity generated from geothermal resources comes from geothermal resources located on federal lands in The Geysers Geothermal Field of northern California; in and near the Sierra Nevada Mountains of eastern California; near the Salton Sea in the southern California desert; in southwestern Utah; and scattered throughout Nevada. Industry and government estimates of the potential for electricity generation from geothermal resources vary widely, due to differences in the date by which forecasters believe the electricity will be generated, the methodology used to make the forecast, assumptions about electricity prices, and the emphasis placed on different factors that can affect electricity generation. Estimates published since 1999 by the Department of Energy, the California Energy Commission, the Geothermal Energy Association, the Western Governors’ Association, and the Geo-Heat Center at the Oregon Institute of Technology indicate that the potential for electrical generation from known geothermal resources over the next 9 to 11 years is from about 3,100 to almost 12,000 megawatts. A more comprehensive and detailed study of electricity generation from all geothermal resources in the United States was published in 1978 by the U.S. Geological Survey (USGS). This assessment estimated that known geothermal resources could generate 23,000 megawatts if all of them were developed. The USGS estimate is greater because it did not consider how much electricity could be economically produced, given competing commercial sources of electricity. In addition, the USGS estimated that undiscovered resources could generate an additional 72,000 to 127,000 megawatts. In short, geothermal resources that could generate electricity are potentially significant but largely untapped. In 2005, over 2,300 businesses and heating districts in 21 states used geothermal resources directly for heat and hot water. Nearly all of these are on private lands. About 85 percent of these users are employing geothermal resources to heat homes, businesses, and government buildings. While most users heat one or several buildings, some users have formally organized heating districts that pipe hot water from geothermal wells to a central facility that then distributes it to heat many buildings. The next most plentiful direct use is by resorts and spas, accounting for over 10 percent of sites. About 244 geothermally heated resorts and spas offer relaxation and therapeutic treatments to customers in 19 states. Two percent of geothermal direct use applications consist of heated greenhouses in which flowers, bedding plants, and trees are grown. Another two percent of geothermal direct use applications are for aquaculture operations that heat water for raising aquarium fishes for pet shops; catfish, tilapia, freshwater shrimp and crayfish for human consumption; and alligators for leather products and food. Other direct use geothermal applications include dehydrating vegetables, like onions and garlic, and melting snow on city streets and sidewalks.
The potential for additional direct use of geothermal resources in the United States is uncertain due to the geographically widespread nature of low-temperature geothermal resources and the many different types of applications. USGS performed the first national study of low-temperature geothermal sites in 1982, but this study was not specific enough to identify individual sites for development. In 2005, the Geo-Heat Center at the Oregon Institute of Technology identified 404 wells and springs that might be commercially developed for direct use applications—sites that have the appropriate temperatures and are within 5 miles of communities. Geothermal heat pumps have become a major growth segment of the geothermal industry. They make use of the earth’s warmer temperature in the winter to heat buildings and use the earth’s cooler temperature in the summer for air conditioning. The Geothermal Heat Pump Consortium estimated that 1 million units were in operation in all 50 states as of January 2006. Because geothermal heat pumps are effective where ground temperatures are between 40 and 70 degrees F, they can be installed in almost any location in the United States and, therefore, constitute the most widespread geothermal application and represent the greatest potential for future development. The development of geothermal resources for electricity production faces major challenges, including high risk and financial uncertainty, insufficient transmission capacity, and inadequate technology. Geothermal groups reported that most attempts to develop geothermal resources for electricity generation are unsuccessful, that costs to develop geothermal power plants can surpass $100 million, and that it can take 3 to 5 years for plants to first produce and sell electricity. Although some geothermal resources are easy to find because they produce tell-tale signs such as hot springs, most resources are buried deep within the earth—at depths sometimes exceeding 10,000 feet—and finding them often requires an in-depth knowledge of the area’s geology, geophysical surveys, remote sensing techniques, and at least one test well. The risks and high initial costs associated with exploring for and developing geothermal resources limit financing. Moreover, few lenders will finance a geothermal project until a contract has been signed by a utility or energy marketer to purchase the anticipated electricity. Geothermal industry officials describe the process of securing a contract to sell electricity as complicated and costly. In addition, lack of available transmission creates a significant impediment to developing geothermal resources for electricity production. In the West, where most geothermal resources are located, many geothermal resources are far from existing transmission lines, making the construction of additional lines economically prohibitive, according to federal, state, and industry officials. Finally, inadequate technology adds to the high costs and risky nature of geothermal development. For example, geothermal resources are hot and corrosive and often located in very hard and fractured rocks that wear out and corrode drilling equipment and production casing. Developing geothermal resources for direct use also faces a variety of business challenges, including obtaining capital, overcoming challenges unique to particular industries, securing a competitive advantage, coping with distant locations, and obtaining water rights.
While the amount of capital to start a direct-use business that relies on geothermal resources is small compared to the amount of capital necessary to build a geothermal power plant, this capital can be substantial relative to the financial assets of the small business owner or individual, and commercial banks are often reluctant to lend them money. Challenges that are unique to certain industries include avoiding diseases in fish farms; combating corrosive waters used in space heating; and controlling temperature, humidity, and light according to the specifications of the various plant species grown in greenhouses. Even after overcoming these unique challenges, operators of direct use businesses may need to secure a competitive advantage, and some developers have done so by entering specialty niches, such as selling alligator meat to restaurants and constructing an “ice museum” in Alaska where guests can spend the night with interior furnishings sculptured from ice. Furthermore, developing direct uses of geothermal resources is also constrained because geothermal waters cannot be economically transported over long distances without a significant loss of heat. Even when these resources need not be moved, obtaining the necessary state water rights to geothermal resources can be problematic. In areas of high groundwater use, the western states generally regulate geothermal water according to some form of the doctrine of prior appropriation, under which specific amounts of water may have already been appropriated to prior users, and additional water may not be available. Developing geothermal power plants on federal lands faces additional challenges. Power plant developers state that the process for approving leases and issuing permits to drill wells and construct power plants has become excessively bureaucratic. BLM and Forest Service officials often have to amend or rewrite resource or forest management plans, which can add up to 3 years to the approval process. Delays in finalizing the resource and forest management plans and in conducting other environmental reviews have resulted in backlogs of lease applications in California and Nevada, particularly when the public has raised more environmental issues. Geothermal applications, permits, and environmental reviews are also delayed by a lack of staff and budgetary resources at the BLM state and field offices that conduct the necessary work and when BLM must coordinate with the Forest Service, which manages land in some project areas. In addition, developers of geothermal resources for both power plants and direct uses faced a challenging federal royalty system prior to the Energy Policy Act. While developers of geothermal power plants generally did not consider the federal royalty system to be a major obstacle in constructing a geothermal power plant, some described paying royalties as burdensome and reported expending considerable time and expense on royalty audits. On the other hand, some developers of geothermal resources for direct use stated that the federal royalty system was a major obstacle that made their operations no longer economically feasible. The Energy Policy Act of 2005 includes a variety of provisions designed to help address the challenges of developing geothermal resources, including the high risk and financial uncertainty of developing renewable energy projects and the lack of sufficient transmission capacity. Provisions within the Act address high risk and financial uncertainty by providing tax credits and other incentives.
For example, starting on January 1, 2005, the Act extends for 10 years a tax credit on the production of electricity from geothermal resources for already existing plants and for any new plants producing by December 31, 2007. The Act also provides a financial incentive for tax-exempt entities, such as municipalities and rural electric cooperatives, by allowing the issuance of clean renewable energy bonds for the construction of certain renewable energy projects, including geothermal electricity plants. Investors can purchase the bonds, which pay back the original principal and also provide a federal tax credit instead of an interest payment. Another provision in the Act may decrease the high risk of geothermal exploration by directing the Secretary of the Interior to update USGS’s 1978 Assessment of Geothermal Resources, which is in need of revision because significant advancements in technology have occurred since its publication. The Act addresses transmission challenges by providing the Federal Energy Regulatory Commission (FERC) with new authorities in permitting transmission facilities and in developing incentive-based rates for electricity transmission in interstate commerce. FERC can now approve new transmission lines in certain instances when a state fails to issue a permit within 1 year of a company’s filing of an application, and companies that acquire FERC permits for transmission facilities can acquire rights of way through eminent domain proceedings. In November 2005, FERC initiated the rulemaking process for establishing these rates. State governments are also addressing the financial uncertainty of developing renewable energy projects by creating additional markets for their electricity through Renewable Portfolio Standards (RPS). An RPS is a state policy directed at electricity retailers, including utilities, that either mandates or encourages them to provide a specific amount of electricity from renewable energy sources, which may include geothermal resources. To date, 22 states plus the District of Columbia have RPSs, and three other states have set RPS targets, although not all states have significant geothermal resources. Additional state programs also provide tax credits and other financial incentives for renewable energy development, including electricity generation from geothermal resources. These incentives include property tax incentives, sales tax incentives, and business tax credits. To address technological challenges, the state of California and the Department of Energy provide financial assistance and grants to the geothermal industry. California’s Geothermal Resources Development Account competitively awards grants to promote research, development, demonstration, and commercialization of geothermal resources. California’s Public Interest Energy Research Program also funds awards for renewable resource projects, including geothermal projects. On the federal side, the Department of Energy’s Geothermal Technologies Program competitively awards cost-sharing grants to industry for research and development. In the past, program funds have been used to pioneer new drill bits, demonstrate the large scale use of low-temperature geothermal resources to generate electricity, produce new seismic interpretation methods, commercialize geothermal heat pumps, develop slimhole (reduced diameter) drilling for exploration, and produce a strategy for reinjection at The Geysers Geothermal Field. The program’s budget was $23 million in fiscal year 2006. 
However, the President’s budget contains no funding for fiscal year 2007, and the House’s proposal for fiscal year 2007 would appropriate a substantially reduced amount of $5 million. In contrast to these funding decisions, the Senate Energy and Water Appropriations Subcommittee recently approved a budget of $22.5 million for geothermal research and development. While the future impacts of reduced or eliminated funding for geothermal technology are uncertain, industry representatives believe that this funding is necessary to address the near-term need to expand domestic energy production and the long-term need to find the breakthroughs in technology that could revolutionize geothermal power production.

The Energy Policy Act also contains provisions aimed at addressing the challenges of developing geothermal resources on federal lands. Specific provisions are aimed at streamlining or simplifying the federal leasing system, combining prospective federal lands into a single lease, and improving coordination between DOI and the Department of Agriculture. The Act also requires the Secretary of the Interior and the Secretary of Agriculture to enter into a memorandum of understanding that establishes an administrative procedure for processing geothermal lease applications and that establishes a 5-year program for leasing of Forest Service lands and reducing its backlog of lease applications, as well as establishing a joint data retrieval system for tracking lease and permit applications. Finally, the Act also contains provisions that simplify and/or reduce federal geothermal royalties on resources that generate electricity and on resources put to direct use. MMS is in the early stages of implementing these provisions, and hence it is too early to assess their overall effectiveness.

A royalty provision of the Energy Policy Act redistributes the federal royalties collected from geothermal resources—cutting in half the overall geothermal royalties previously retained by the federal government. Established by the Geothermal Steam Act of 1970, as amended, the prior distribution provided that 50 percent of geothermal royalties be retained by the federal government and the other 50 percent be disbursed to the states in which the federal leases are located. While the Energy Policy Act continues to provide that 50 percent of federal geothermal royalties be disbursed to the states in which the federal leases are located, an additional 25 percent will now be disbursed to the counties in which the leases are located, leaving only 25 percent to the federal government. The Act also changes how the federal government’s share of geothermal royalties can be used. Prior to passage of the Act, 40 percent of the federal government’s share was deposited into the reclamation fund created by the Reclamation Act of 1902, and 10 percent was deposited into the general fund of the Department of the Treasury. For the first 5 fiscal years after passage of the Act, the federal government’s share is now to be deposited into a separate account within the Department of the Treasury that the Secretary of the Interior can use, without further appropriation and fiscal year limitation, to implement both the Geothermal Steam Act and the Energy Policy Act.
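To make the new disbursement arithmetic concrete, the following minimal Python sketch applies the shares described above to a hypothetical royalty receipt. The function name and the dollar amount are illustrative assumptions, not figures from MMS data.

def disburse(total_royalties):
    # Energy Policy Act split: 50 percent to the state, 25 percent to the
    # county in which the lease is located, and 25 percent to the federal
    # government.
    return {
        "state": 0.50 * total_royalties,
        "county": 0.25 * total_royalties,
        "federal": 0.25 * total_royalties,
    }

# Hypothetical receipt of $1,000,000 in royalties from leases in one county:
print(disburse(1_000_000))
# {'state': 500000.0, 'county': 250000.0, 'federal': 250000.0}
# Under the prior 50/50 distribution, the federal government would have
# retained $500,000; its share is now half that amount.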
While, for most leases, the Energy Policy Act directs that the Secretary of the Interior seek to maintain the same level of royalty revenues as before the Act, our analysis suggests that this will be difficult because changing electricity prices could significantly affect the percentage of future royalty revenues collected. Electricity prices cannot be predicted with certainty, and as discussed below, changing prices could significantly affect royalty revenues because electricity sales account for about 99 percent of total geothermal royalty revenues.

The Act contains provisions for each of three specific types of leases that generate electricity: (1) leases that currently produce electricity, (2) leases that were issued prior to passage of the Act and will first produce electricity within 6 years following the Act’s passage, and (3) leases that have not yet been issued.

For leases that currently produce electricity, future geothermal royalty revenues will depend on electricity prices. The Act specifies that the Secretary of the Interior is to seek to collect the same level of royalties from these leases over the next 10 years as it had before the Act’s passage but under a simpler process. Prior to passage of the Act, lessees of most geothermal electricity projects paid federal royalties according to a provision within MMS’s geothermal valuation regulations referred to as the “netback process.” To arrive at royalties due under this process, lessees first subtract from the electricity’s gross sales revenue their expenses for generation and transmission and then multiply that figure by the royalty rate specified in the geothermal lease, which ranges from 10 to 15 percent. The Act simplifies the process by allowing lessees, within a certain time period, the option to request a modification to their royalty terms if they were producing electricity prior to passage of the Act. This modification allows royalties to be computed as a smaller percentage of the gross rather than the net sales revenues from the electricity, so long as this percentage is expected to yield total royalty payments equal to what would have been received before passage of the Act. Royalty revenues from a geothermal lease currently producing electricity will remain the same if the lessee elects not to convert to the new provision of the Act. On the other hand, if the lessee converts to the new provision, royalty revenues should remain about the same only if DOI negotiates with the lessee a future royalty percentage based on past royalty history and if electricity prices remain relatively constant. If royalties are based on historic percentages of gross sales revenues and electricity prices increase, however, royalty revenues will actually decrease relative to what the federal government would have collected prior to passage of the Act. The federal government will receive less revenue under this situation because expenses for generation and transmission do not increase when electricity prices increase, and the higher royalty rate specified in the lease is not applied to the increase in sales revenues.
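The price sensitivity described above can be illustrated with a simple numeric sketch. All of the figures below, the gross revenue, the deductible expenses, and the 10 percent lease rate, are assumptions chosen for illustration rather than values from any actual lease.

def netback_royalty(gross_revenue, expenses, lease_rate):
    # Netback process: the lease royalty rate applies to gross sales
    # revenue net of generation and transmission expenses.
    return lease_rate * (gross_revenue - expenses)

# Hypothetical lease: $10 million in gross electricity sales,
# $6 million in deductible expenses, and a 10 percent lease rate.
gross, expenses, rate = 10_000_000, 6_000_000, 0.10
base_royalty = netback_royalty(gross, expenses, rate)   # $400,000
gross_equivalent = base_royalty / gross                 # 4 percent of gross

# If electricity prices rise 25 percent while expenses stay flat:
new_gross = gross * 1.25
print(netback_royalty(new_gross, expenses, rate))  # 650000.0 under netback
print(gross_equivalent * new_gross)                # 500000.0 under the fixed gross percentage

Because expenses do not rise with prices, the netback base grows faster than gross revenue, so a gross percentage calibrated to past history collects less than the netback process would have when prices increase.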
For the second type of lease—leases that were issued before the Act and that will first produce electricity within 6 years after the Act’s passage—royalty revenues are likely to drop somewhat because lessees are likely to take advantage of an incentive within the Act. The Act allows for a 50 percent decrease in royalties for the first 4 years of production so long as the lessee continues to use the netback process. Because of the substantial reduction in royalties, it is likely that lessees owning leases issued before passage of the Act will elect to pay only 50 percent of the royalties due on new production for the 4-year period allowed by the Act. This incentive also applies to sales revenues from the expansion of a geothermal electricity plant, so long as the expansion exceeds 10 percent of the plant’s original production capacity. Owners of geothermal electricity plants currently paying royalties under the netback process may elect to take the production incentive for new plant expansions if they perceive that the royalty reduction is worth the additional effort and expense of calculating payments under the netback process and worth the possibility of being audited.

It is difficult to predict exactly how royalty revenue from the third type of lease—leases that have not yet been issued—will change, but it appears that revenue impacts are likely to be minor, based on our review of historic royalty data. The Act specifies that the Secretary of the Interior should seek to collect the same level of royalty revenues over a 10-year period as before passage of the Act. The Act also simplifies the calculation of royalty payments by providing that, for future leases, royalties on electricity produced from federal geothermal resources should be not less than 1 percent and not greater than 2.5 percent of the sales revenue from the electricity generated in the first 10 years of production. After 10 years, royalties should be not less than 2 percent and not greater than 5 percent of the sales revenue from the electricity (a brief illustrative sketch of these rate bands appears after this discussion). Our analysis of data for seven geothermal projects showed that lessees were paying a wide range of percentages after 10 years of production—from 0.2 to 6.3 percent. Three of the seven projects paid less than the minimum 2 percent royalty rate prescribed in the Act, suggesting that some projects in the future could pay more under the Act’s new provisions than they would otherwise have paid. On the other hand, one project paid more than the maximum 5 percent prescribed in the Act, suggesting that it is possible for a plant to pay less in the future than it would otherwise have paid. However, neither the amount by which the one plant would have overpaid nor the amounts by which the three plants would have underpaid are significant.

Even though provisions of the Energy Policy Act may decrease royalties on direct use applications, the impact of these provisions is likely to be small because total royalty collections from direct use applications are minimal. In fiscal years 2000 through 2004, MMS reported collecting annually about $79,000 from two direct use projects, or less than 1 percent of total geothermal royalties. While a provision of the Act may encourage the use of federal geothermal resources for direct use by lowering the federal royalty rate, we believe, based on the challenges facing developers, that it is unlikely that this royalty incentive alone will stimulate substantial new revenues to compensate for the loss in revenue due to the lower royalty rate. We believe that in order to substantially increase the development of federal direct use applications, developers must overcome the relatively high capital costs for investors, unique business challenges, and water rights issues.
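The rate bands for future leases described above lend themselves to a short sketch. The statutory bounds (1 to 2.5 percent of gross sales in the first 10 years of production, 2 to 5 percent thereafter) come from the discussion above; the function names and the sample revenue and rate are illustrative assumptions.

def allowed_band(years_in_production):
    # Statutory bounds on the royalty as a share of gross electricity
    # sales for leases issued after the Energy Policy Act.
    if years_in_production <= 10:
        return (0.010, 0.025)
    return (0.020, 0.050)

def royalty_due(gross_revenue, years_in_production, chosen_rate):
    low, high = allowed_band(years_in_production)
    assert low <= chosen_rate <= high, "rate falls outside the Act's band"
    return chosen_rate * gross_revenue

# Hypothetical project in its 12th year of production with $10 million in
# sales, paying the 2 percent post-10-year minimum (a rate higher than what
# three of the seven projects GAO reviewed had historically paid):
print(royalty_due(10_000_000, 12, 0.020))  # 200000.0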
Finally, MMS does not routinely collect the electricity sales data necessary to demonstrate that MMS is seeking to maintain the same level of royalty collections from geothermal resources, as directed by the Energy Policy Act. For most geothermal leases, MMS will need to calculate the percentage of gross sales revenues that lessees will pay in future royalties from electricity sales and compare this to what lessees would have paid prior to the Act. However, MMS does not routinely collect these data. Accordingly, we are recommending that the Secretary of the Interior instruct the appropriate managers within MMS to collect from royalty payors the gross sales revenues from the electricity they sell. MMS has agreed to do so.

The Energy Policy Act of 2005 addresses a wide variety of challenges facing developers of geothermal resources. The Act incorporates many of the lessons learned by state governments and federal agencies in an attempt to provide financial incentives for further development and make federal processes more efficient. However, the Act was only recently adopted, and insufficient time has passed to assess its effectiveness. Several of the Act’s major provisions will be left to the federal agencies within DOI for implementation, and the drafting and public comment periods for the regulations that implement these provisions will take time. Agencies will also need to spend considerable time and effort in working out the details of implementation and securing the necessary budgets. Hence, the fate of a significant portion of our nation’s geothermal resources depends on the actions of these federal agencies.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact me, Jim Wells, at 202-512-3841 or [email protected]. Contributors to this testimony include Ron Belak, John Delicath, Dan Haas, Randy Jones, Frank Rusco, Anne Stevens, and Barbara Timmerman.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Energy Policy Act of 2005 (Act) contains provisions that address challenges to developing geothermal resources, including the high risk and uncertainty of developing geothermal power plants, lack of sufficient transmission capacity, and delays in federal leasing. Among the provisions are means to simplify federal royalties on geothermal resources while collecting the same overall level of royalty revenues. This testimony summarizes the results of a recent GAO report, GAO-06-629. In this testimony, GAO describes: (1) the current extent of and potential for geothermal development, (2) challenges faced by developers of geothermal resources, (3) federal, state, and local government actions to address these challenges, and (4) how provisions of the Act are likely to affect federal geothermal royalty disbursement and collections.
Geothermal resources currently produce about 0.3 percent of our nation's total electricity and heating needs and supply heat and hot water to about 2,300 direct-use applications, such as heating systems, fish farms, greenhouses, food-drying plants, spas, and resorts. Recent assessments conclude that future electricity production from geothermal resources could increase by 25 to 367 percent by 2017. The potential for additional direct-use businesses is largely unknown because the lower temperature geothermal resources that they exploit are abundant and commercial applications are diverse. One study identified at least 400 undeveloped wells and hot springs that have the potential for development. In addition, sales of geothermal heat pumps are increasing.

The challenges to developing geothermal electricity plants include a capital-intensive and risky business environment, technological shortcomings, insufficient transmission capacity, lengthy federal review processes for approving permits and applications, and a complex federal royalty system. Direct-use businesses face numerous challenges, including challenges that are unique to their industry, remote locations, water rights issues, and high federal royalties.

The Act addresses many of these challenges through tax credits for geothermal production, new authorities for the Federal Energy Regulatory Commission, and measures to streamline federal leasing and simplify federal royalties, collections of which totaled $12.3 million in 2005. In addition, the Department of Energy and the state of California provide grants for addressing technology challenges. Furthermore, some state governments offer financial incentives, including investment tax credits, property tax exclusions, sales tax exemptions, and mandates that certain percentages of electricity within the state be generated from renewable resources.

Under the Act, federal royalty disbursement will significantly change because half of the federal government's share will now go to the counties where leases are located. Although the Act directs the Secretary of the Interior to seek to maintain the same level of royalty collections, GAO's analysis suggests this will be difficult because changing electricity prices could significantly affect royalty revenues. Finally, MMS does not collect the sales data necessary to monitor these royalty collections.
FAA’s mission is to provide a safe and efficient airspace system. As part of this mission, the agency uses airport system planning to better understand the interrelationship of airports at the national, state, and regional levels. FAA guidance states that the overall goals of airport system planning are to ensure that the air transportation needs of a state or metropolitan area are adequately served by its airports, and that planning results in products that can be used by the planning organization, airports, and FAA to determine future airport development needs.

There are several types and levels of planning involving individual airports or airport systems, including the National Plan of Integrated Airport Systems (NPIAS), state and regional system plans, and airport-level plans. The NPIAS identifies over 3,400 airports as being nationally significant to the national airspace system, including all of the nation’s commercial service and reliever airports and some general aviation airports. Most states periodically develop state airport system plans to inventory airports using a set of criteria developed by FAA. While not required, some regions choose to carry out regional airport planning—which may include the development of regional airport system plans (RASP) or other regional airport plans—to identify critical regional airport issues and to integrate aviation with other modes in a region’s transportation system. At the airport level, two types of plans support airport improvements at individual airports: the airport layout plan (ALP), which is required for federal funding, and the airport master plan. Figure 1 provides additional information about these plans and illustrates the role of each in the FAA funding process for airport improvement projects under the AIP.

Airports in the NPIAS become eligible to apply for FAA’s AIP grants, which provided almost $3.5 billion for capital projects in fiscal year 2008. AIP funding is available for eligible projects, which include airfield construction or equipment purchases, terminal or terminal access improvements, land acquisition, noise compatibility projects, and regional airport planning. AIP grants generally consist of two types—entitlement funds that are apportioned to airports or states by formula each year based upon statutory criteria, and discretionary funds that FAA approves based on a project's priority. To ensure that the highest priority projects nationally are funded, discretionary funds are awarded using a national priority rating system that awards points on a variety of factors, including airport size; the purpose of the project (e.g., capacity related, planning, environmental, and safety); and the type of project (e.g., terminal improvement and equipment purchase). Airports apply directly to FAA through FAA regional offices for AIP discretionary funding, and projects are scored using the national priority rating system. Furthermore, the Airport and Airway Improvement Act of 1982 (AAIA)—which established the current AIP—provided FAA with the authority to give priority to airport improvement projects that are consistent with integrated airport system plans, such as RASPs. In the guidance provided by FAA for airport system planning, airport sponsors are also encouraged to use findings and recommendations from regional airport planning when they develop plans to serve as a guideline for the allocation of funding.
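As a purely illustrative aid, a points-based ranking of discretionary grant candidates might be sketched as follows. FAA’s actual national priority rating formula, factors, and point values are not described in this statement, so every weight and category name below is a hypothetical assumption rather than the agency’s real scheme.

# Hypothetical illustration only; FAA's real national priority rating
# system uses its own factors and point values, which are not shown here.
def priority_score(project):
    purpose_points = {"safety": 40, "capacity": 30, "environmental": 20, "planning": 10}
    size_points = {"large": 15, "medium": 10, "small": 5}
    type_points = {"runway": 20, "terminal": 10, "equipment": 5}
    return (purpose_points[project["purpose"]]
            + size_points[project["airport_size"]]
            + type_points[project["project_type"]])

candidates = [
    {"name": "Runway extension", "purpose": "capacity",
     "airport_size": "large", "project_type": "runway"},
    {"name": "Noise monitoring equipment", "purpose": "environmental",
     "airport_size": "medium", "project_type": "equipment"},
]

# Rank candidates so the highest priority projects nationally come first.
for project in sorted(candidates, key=priority_score, reverse=True):
    print(project["name"], priority_score(project))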
While no specific amount is currently set aside for system planning in the AIP program, approximately 2 percent of funds made available annually for AIP grants since 1970 have been used for these purposes. Most of this funding is used for planning at the state or airport level, but some regions have also applied for and received AIP funding for regional airport planning. This funding has been used for a variety of planning efforts by states, airport sponsors, and regional planning bodies—primarily MPOs—and includes the development of RASPs. Other regional airport planning funded with AIP grants includes special studies to analyze or address new or unique issues, such as compatible land uses around airports, zoning implementation, or airport ground access.

There are a number of stakeholders with interests in the airport planning process. They include FAA, states, and airports and may also include MPOs, airlines, and local communities. The FAA’s Office of Airport Planning and Programming provides guidance about airport system planning, while FAA regional offices administer grants and provide technical support to airports and others developing airport plans at the airport, regional, and state levels. The range of involvement by a particular stakeholder group varies by the type of plan under development, among other things. Thus, FAA, airports, and sometimes airlines are typically most involved in the development of ALPs and airport master plans and the resulting capital plans. States work with airports—notably, general aviation or reliever airports, not typically major commercial airports—to identify airports and improvements for inclusion in state airport system plans. MPOs may work with airport sponsors, local jurisdictions, state authorities, and FAA when developing RASPs or carrying out other regional airport planning. FAA accepts plans developed by states or MPOs and reviews and approves ALPs.

In addition to federal and state aviation officials, other stakeholders in the process include the following:

Airport sponsors: Airport sponsors can be any one of a number of different types of public entities, such as cities, counties, airport authorities, ports, intermodal agencies, or private owners.

MPOs: MPOs may lead or participate in regional airport planning, but their primary role is carrying out regional surface transportation planning in urbanized areas, including the development of long-range and short-range transportation plans. To receive federal surface transportation funding, any project in an urbanized area must emerge from the relevant MPO and state department of transportation planning process.

Airlines: Airlines play a key role in the functioning of airport systems, since they make decisions about which airports to serve and how frequently to provide service. Airlines may consider a number of factors in making these decisions, such as the location of regional business, economic indicators, the travel patterns of area residents, the cost of establishing service at particular airports, the effects on their service network, and the service provided by competing carriers.

FAA guidance on airport system planning identifies eight key elements of the planning process, including inventorying the airport system, identifying air transportation needs, considering alternative airport systems, and preparing an implementation plan (see table 1).
The guidance states that the end result should be “the establishment of a viable, balanced, and integrated system of airports to meet current and future demand.” FAA does not approve airport master plans, state airport system plans, or RASPs. For those plans developed with FAA funding, however, FAA is involved in developing the scope of work covered under the grant, reviewing draft documents, approving aviation forecasts, and then accepting the final plan. When considering alternative airport systems (the fifth of the eight elements), regional planners may identify alternate, underutilized airports in a region as having the potential to relieve pressure on congested airports. FAA’s airport system planning guidance states that the development of such alternate airports should only be undertaken when a full assessment has been done of various market factors. The guidance states that it is important to understand the nature of demand within a region, including factors that would divert demand to other airports, and any potential political, economic, or institutional barriers to developing an airport system. It also recommends that planners assess the ability of the airport to offer adequate service—in terms of convenience, schedules, and fares—and the effect on airlines, noting that the development of alternate airports should enhance airline profitability and be compatible with their route systems. In addition to the development of RASPs, other types of regional airport planning, including special studies whose scope of work does not fully correspond with the elements described in the airport system planning guidance, may be undertaken with AIP grants, according to FAA’s airport system planning guidance. Special studies can include but are not limited to work in such areas as air service, air cargo operations, environmental or drainage inventories, surface access, economic impact, obstruction analysis or photogrammetry, general aviation security, and pavement management. FAA’s airport system planning guidance states that MPOs can receive FAA support to conduct regional airport planning in areas that include large- or medium-hub airports (1) when such agencies have the interest in and capabilities to conduct such planning and (2) when regional FAA, state aviation, and local airport officials determine that MPOs should have a role. The guidance continues that the regional airport planning carried out by MPOs should complement—rather than guide—the planning done by FAA, states, and individual airports. According to the guidance, MPO-led regional airport planning may enhance the integration of the entire regional transportation system by promoting aviation enhancement and preservation, identifying critical regional aviation issues, and acting as the contact point for regional surface access, air quality, and land-use planning studies. MPOs can also act as a catalyst in implementing system planning recommendations—which may involve several stakeholders—by resolving local conflicts, promoting airport development funding priorities, and proposing the distribution of grants among eligible projects. The guidance states that an MPO’s ability to implement regional airport planning recommendations is limited to the extent that it can influence airport development through persuasion; leadership; or nonaviation incentives, such as surface transportation improvements that may improve airport access. 
This stands in contrast to state aviation agencies, which can implement system planning recommendations using legislative and funding mechanisms, including AIP funds; MPOs do not receive AIP funds, other than for planning purposes.

FAA’s FACT 2 report forecast that 14 airports will be significantly capacity constrained—and thus potentially congested—by 2025, even if currently planned improvements are carried out. According to FAA, some airports are already significantly capacity constrained, and increased demand is expected to increase delays going forward. Six of these 14 airports will be significantly capacity constrained as early as 2015, according to the report. (See fig. 2.) The FACT 2 study was designed to produce a conservative list of congested airports, according to FAA officials, and identified those airports that will have the greatest need for future additional capacity. FAA officials noted that airports not designated as capacity constrained by the study may also have capacity issues in the future and may need capacity-enhancing projects. (See app. II for a discussion of the FACT 2 report and implications of its design.) The demand forecasts included in FACT 2, however, were conducted before 2007 and do not take into account the reduction in demand resulting from the recent economic downturn. As a result, potential capacity constraints may occur on a different timeline than previously forecast. The improvements considered in the 2025 and 2015 forecasts include those in FAA’s OEP, such as new or extended runways, changes or improvements in air-traffic control procedures and technology, and airspace redesign. Some NextGen improvements, such as reduced separation requirements for arrivals and departures, were included in the 2025 analysis for the 35 airports included in the OEP program and Oakland International Airport. If planned improvements do not occur, the FACT 2 report predicted that the number of airports that will be significantly capacity constrained will increase to 27 by 2025. Likewise, 18 airports were predicted to need additional capacity by 2015, if planned improvements do not occur. Figure 3 shows the airports predicted by FACT 2 to face significant capacity challenges in 2015 and 2025, if planned improvements do not occur.

The NextGen program is intended to transform the nation’s navigation system into a satellite-based system, but faces challenges to implementation for both airlines and FAA. Benefits from the program are expected to include increased safety with a reduction in the number of runway incursions; greater design flexibility with the reduction of separation requirements between runways, which may allow for new runways or improved airport layouts; better use of existing capacity with reduced separation standards for aircraft and improved access to airports with mountainous terrain or other obstacles; and reduced environmental impacts since aircraft will be able to descend using the shortest routes at minimum power settings. As we have previously reported, FAA has made some progress in implementing the NextGen program, but still faces some challenges. For example, aircraft operators must purchase equipment to implement NextGen capabilities, but some airlines have been reluctant to do so until FAA specifies requirements, addresses funding concerns, and demonstrates benefits.
FAA must also determine that new technologies will operate in a real-life environment with a desired level of confidence and approve their use, as well as issue rules for the use of procedures, before midterm implementation can occur. Finally, the transformation to NextGen will also depend on the ability of airports to handle greater capacity. Since runways and airspace issues are not the only causes of congestion, improved efficiency in these areas—which may result from implementation of NextGen improvements—may exacerbate capacity constraints involving taxiways, terminal gates, or parking areas.

There are 4 airports that were already considered capacity constrained under the FACT 2 methodology, including 2 in the New York/New Jersey region—Newark Liberty International (Newark) and LaGuardia (LaGuardia)—as well as Chicago’s O’Hare International (O’Hare) and Fort Lauderdale/Hollywood International in Southern Florida. In the New York region, FAA has set limitations on the number of takeoffs and landings during peak operating hours at Newark, John F. Kennedy International Airport (JFK), and LaGuardia, to minimize congestion and reduce flight delays. However, these airports are still routinely found to be among the most congested in the country and are on FAA’s list of airports needing additional capacity by both 2015 and 2025. Improvements at O’Hare and Fort Lauderdale/Hollywood International will take them off the list of significantly congested airports by 2015, according to the FACT 2 report.

All 14 of the airports forecast by FAA as needing additional capacity by 2025 or 2015 are located in major metropolitan areas with at least 1 large-hub airport. Nine of the airports forecast to be congested are in regions with more than 1 large- or medium-hub airport. Each of the airports identified as potentially capacity constrained in 2015 is also included on the list for 2025. For the purposes of our review, we focused on the 10 metropolitan regions that include the 14 airports forecast by the FACT 2 report to be significantly capacity constrained by 2025, assuming planned improvements occur. (See table 2.)

Developing new airport capacity can be costly, complex, and time-consuming. Historically, airports, metropolitan regions, and FAA have looked to airport expansion and facility improvements—such as the construction of new runways—to provide new capacity, but increasingly airport expansion faces obstacles, especially in congested regions. Through the cooperative efforts of the aviation industry, airports, and FAA, 20 airfield projects have opened since 2000 at 18 OEP airports, including new runways at O’Hare, Seattle-Tacoma International, and Washington Dulles International in 2008. However, projects involving new runways often take a decade or more to complete because of legal and other obstacles. In addition, the last major new commercial service airport in the United States was opened in Denver in 1995 and is 1 of only 2 new major airports built in over 40 years. That said, proposals for a new airport in Peotone, Illinois, in the Chicago region and for a new airport to supplement Las Vegas McCarran International Airport are currently in the early stages of FAA environmental review. Going forward, the development of new infrastructure—including the construction or extension of runways as well as new airports—faces many challenges. FACT 2 points out that expanding airport capacity is unlikely in some locations.
According to ongoing research being developed for the Airport Cooperative Research Program (ACRP), adverse community reaction to aircraft noise and pollutant emissions at and near major airports continues to impede the development of new airport infrastructure, and this resistance is unlikely to decrease. Another study noted that lawsuits are filed in opposition to virtually every expansion of a major airport, generally challenging the right of airport officials to override local zoning rules or increase noise or air pollution. According to this study, while such legal challenges are usually unsuccessful, projects often take longer than originally anticipated. We have also previously reported that new runway construction from initial planning to completion takes a median of 10 years, but delays can add an additional 4 years to the median time. While we found that the level of challenges that airports faced varied, in part depending on the proximity of the airport to a major city and the amount of community opposition to the runway, some common themes emerged in our 2002 survey of airports that had built or planned to build runways between 1991 and 2010. Challenges identified by those airports included reaching stakeholder agreement on the purpose and need for the new runway, completing required environmental reviews, reaching agreement on how to mitigate the impact of noise and other issues, and designing and constructing the runways in light of weather and site preparation issues. The conversion of former or joint-use military airfields for civilian use is an alternate approach to providing new or additional capacity, but this approach has also faced obstacles similar to those posed with the construction of new facilities. Voters recently rejected the proposed conversion of military airfields at Miramar and El Toro, current and former Marine Corps air stations, respectively. In our discussions with regional and airport officials, we found that environmental constraints, including land-use issues or community concerns about airport noise or the redesign of airspace around congested airports; physical constraints; and local legal constraints are also obstacles to the development of new capacity through airport or runway expansion. Environmental issues have been a constraint on development in the San Francisco region at San Francisco International Airport (SFO) and at Oakland International Airport, for example, where the construction of new runways would involve extensive filling in the San Francisco Bay. A proposal to build a new runway at SFO was dropped due to environmental issues and cost constraints. As conceived, the project would have been the largest construction project in the bay for over 50 years and would have involved dredging and filling up to 2 square miles of the bay. (Fig. 4 shows the 2000 proposal for construction at SFO.) More recent planning has not included runway construction, focusing instead on a terminal development program and other alternatives. Noise concerns have also been a limiting factor for many airports. Proposals for runway expansion in Philadelphia led to a lawsuit filed by surrounding communities seeking to block the development, for example. Likewise, officials at SFO pointed to encroaching neighborhoods as state land-use policies encourage the development of previously industrial areas. 
Efforts to redesign the airspace around the New York/New Jersey/Philadelphia region also led to community opposition, with several surrounding communities filing lawsuits that, thus far, have been resolved in favor of FAA. Physical constraints on expansion or new construction can also be obstacles. For example, San Diego International has one runway, sits on only 661 acres, and the surrounding terrain limits the slope for departing aircraft, particularly heavier aircraft. The San Diego County Regional Airport Authority is developing a proposal to reconfigure the airport’s terminals, given the lack of room for a new runway. Finally, legal agreements or requirements hamper the use of existing capacity at some airports, including those in the Los Angeles region—in Orange County and Long Beach. Westchester County Airport in White Plains, New York, also has legal limits on airport operations, according to an air service demand study. Other airports have community agreements limiting capacity or growth. For example, Los Angeles International Airport (LAX) has imposed a cap of 78.9 million annual passengers on its operations as part of a settlement agreement with surrounding communities, according to regional officials. Likewise, according to an airport official, Bob Hope Airport is prevented from expanding the footprint of its existing terminal until 2012 by an agreement with the City of Burbank. The airport also recently sought FAA approval to make a voluntary nighttime curfew permanent. This application was denied by FAA, however, based in part on concerns that the curfew would result in congestion and delay in the region and potentially have ripple effects throughout the national airspace system. Regional airport planning can identify solutions for airports and regions seeking to determine how best to manage available capacity and address the challenges posed by congestion. A 2003 study for the Office of the Assistant Secretary for Transportation Policy at the Department of Transportation looked at the potential for alternative airports to meet regional capacity needs and found that the use of these airports can make more efficient use of existing resources and better use of limited funds for airport development. According to the report, to make better use of alternate airports, regional airport planning should focus on both airport development and access issues. The study concluded that as metropolitan areas grow and become more congested and complex, FAA needs to promote regional airport planning. Likewise, according to ongoing research being developed for the ACRP, there are important opportunities to improve aviation system capacity and airport operations by embracing more collaborative and cooperative regional airport planning. The research has found that proactively seeking ways to use commercial airport capacity more efficiently will be important to maintaining the viability of air travel while accommodating forecast growth in demand for air travel. According to the research, airport managers and governing bodies will need to embrace the concept of capacity sharing with other airports in their market areas to maintain this viability and accommodate demand and will also need to look at other potential approaches. Such approaches may include the expansion of high-speed rail in some corridors or the use of demand-management strategies, such as peak pricing or restrictions on the use of congested airports by smaller aircraft. 
FAA’s FACT 2 report and its 2009–2013 FAA Flight Plan also noted the potential for regional airport planning to identify options to relieve congestion. The FACT 2 report identified regional options that could help meet the future capacity needs of the nation’s airports, among them, continuing to study regional traffic and development alternatives and planning for high-density corridors and multiple modes, including high-speed rail. Likewise, one of the initiatives in the Flight Plan is the use of AIP funding to reduce capacity constraints and provide greater access to alternate airports in the metropolitan areas and corridors where congestion at primary airports creates delays throughout the national airspace system. Finally, FAA’s NextGen program identifies regional airports as having potential to provide additional capacity in 15 metropolitan areas: Atlanta, Charlotte, Chicago, Houston, Las Vegas, Los Angeles, Minneapolis, New York, Philadelphia, Phoenix, San Diego, San Francisco, Seattle, South Florida, and Washington/Baltimore.

Nine of the 10 regions forecast by FAA to have one or more significantly congested airports in 2025 received FAA funding from 1999 through 2008 in support of regional airport planning (see table 3). In all, FAA provided $34 million in AIP grants for metropolitan system planning during this period, and the 9 aforementioned regions received $20 million of the total. According to FAA’s AIP Handbook, metropolitan areas are eligible for funding under FAA’s AIP program if airport problems in the region require a higher level of effort to address them than would be provided as part of a statewide analysis. Such regional problems typically arise in association with large- or medium-hub airports, according to the handbook. Each of the 10 regions forecast to be significantly capacity constrained by 2025 had at least one airport categorized as a large hub in 2008.

Since 1999, 6 of the 10 regions with airports that are forecast to be congested by 2025 have developed or are developing RASPs: Los Angeles, Philadelphia, Phoenix, San Diego, San Francisco, and South Florida. Each of these regions has received one or more FAA grants for regional planning since 1999. The majority of these plans were developed or are being developed under the leadership of the local MPO, although in San Diego and Florida the airport sponsor and the state department of transportation, respectively, assumed leadership roles. Five regions have completed RASPs since 2000, and 2 are in development. Table 4 provides information about the RASPs developed or being developed in the 6 regions. Based on our review, the completed RASPs largely reflect the elements laid out for system planning by FAA and generally contain information about the airport system, forecast information, and a discussion of transportation needs, among other elements. In addition, most of the completed RASPs contained recommendations or strategies regarding the role of regional airports and potential airport improvements. Each of the regions that have completed or are completing RASPs also considered alternative modes of transportation as a means of alleviating airport congestion. FAA guidance for airport system planning discusses alternative modes of transportation, but does so only in the context of improving airport access. The MPO in the Los Angeles region has modeled the potential impacts of high-speed rail.
According to ongoing research being developed for the ACRP, this modeling work demonstrated that development of a high-speed rail system would likely result both in the increased use of alternate regional airports—which would be linked to metropolitan centers by the new rail lines—for passenger service and cargo and in air-rail substitution by some passengers as they chose to take the train in lieu of flying. Likewise, San Diego has used its regional airport planning process to identify intermodal solutions. The airport sponsor worked with the region’s MPO to develop a new plan for San Diego International Airport, which includes consideration of an intermodal facility at the airport. The new RASP is also being developed in concert with an air-rail study being undertaken by the MPO, which aims to explore improved access to alternative regional airports and the potential diversion of passengers to high-speed rail.

We found that the extent of regional airport planning undertaken in the four regions forecast to have significantly congested airports that have not developed RASPs—Atlanta, Chicago, Las Vegas, and New York—varied and was focused on individual airports. The regional airport planning that was undertaken in these regions was typically not led by regional planners in MPOs. Airport sponsors (in the Atlanta, Las Vegas, and New York regions) or state authorities (in Chicago) led efforts, with planning limited to the airports under their direct authority. All of these regions except Chicago have received funding from FAA for regional airport planning, with amounts ranging from nearly $3 million for JFK in the New York region—where the Port Authority of New York and New Jersey (Port Authority) carries out planning for its 5 airports—to $200,000 each in Atlanta and Las Vegas. Table 5 provides information about the range of regional airport planning in regions with airports forecast to be significantly congested that have not prepared RASPs, the leadership of these activities, and funding received from FAA.

While regional airport planning has been undertaken in each of the regions forecast to have significantly congested airports, FAA has used the results of this planning selectively when working with airports or making funding decisions. In each of the five potentially congested regions we visited, FAA regional officials stated that they may look at RASPs or other regional airport plans when reviewing projects at individual airports. FAA regions, however, do not carry out a systematic review of RASPs to ensure that they meet the guidance for airport system planning, and none of the FAA regions we spoke with regularly used them in decision making when funding airport improvements, despite the potential identified by FAA and others for RASPs to identify options to alleviate congestion. For example, FAA officials in the Western-Pacific Region stated that capital investment decisions are made on the basis of airport master plans or airport layout plans. The officials noted that RASPs can serve as a tiebreaker among projects, but that funding decisions are made using national-level priorities. FAA officials in the Eastern Region also stated that they did not refer to RASPs when selecting projects for AIP funding, although they would assume that regional forecasts and airport roles would be reflected in airport master plans. As in the Western-Pacific Region, we were told that RASPs might be used to break ties between competing projects.
Airport officials in the regions we selected told us that no RASP to date had been adopted into the airport-level capital improvement plans—airport layout or airport master plans—that guide decision making. For example, airport officials in Philadelphia stated that regional airport planning, including the RASP, has little influence on decisions made by the City of Philadelphia or by Philadelphia International Airport. Officials at other airports, however, said that these plans may be considered during airport-level planning. In the Los Angeles region, airport officials at John Wayne Airport in Orange County, for example, stated that while they may consider the RASP when making decisions about airport improvements, it is not the primary driver for these decisions because, in their view, regional and airport priorities necessarily differ. By contrast, the airport sponsor of LAX has pursued suggestions or strategies from RASPs when making decisions regarding airport improvements or capacity. Los Angeles World Airports, which operates LAX, as well as airports in Ontario and Van Nuys, based internal strategic planning for LA/Palmdale Regional Airport on the distribution of passenger traffic among regional airports developed by the region’s MPO. Los Angeles World Airports also for a time pursued a decentralization strategy similar to that suggested in the RASP—attempting to develop LA/Palmdale Regional Airport—although the airport sponsor focused on serving local passengers, rather than passengers who might travel to the airport from elsewhere in the region. Finally, Los Angeles World Airports is supporting the development of a high-speed rail line that would divert passenger traffic by either improving access to alternate regional airports or carrying passengers on busy regional corridors, which was also included in the RASP. Airport officials at San Diego International Airport and SFO—both in significantly congested regions that are currently developing RASPs—anticipate using the RASPs for their airport-level planning. The San Diego RASP is being developed by the airport sponsor itself, and future airport plans at San Diego International are expected to reflect findings from the RASP, according to airport officials, although there is no assurance that the RASP would be considered by other airports in the region. Likewise, in San Francisco, SFO airport officials are supporting ongoing regional airport planning and stated that they expected to consider findings included in the RASP when developing airport plans.

While not included in our in-depth analysis of selected regions, state department of transportation officials in Florida explained that RASPs in the state are closely tied to airport decision making, given the link between these plans—which are developed as part of the state’s airport planning process—and the state’s airport improvement program. Airport capital plans must reflect state priorities to be eligible for these state funds. RASPs are developed by committees made up of airport sponsors and MPOs. The state department of transportation facilitates and supports these committees, and the resulting regional plans are incorporated into the state’s aviation system plan, thus becoming state priorities. The priorities reflected in the RASPs, however, are not linked to the decision making done by FAA for AIP funding, according to a state official.

In those areas that have not developed RASPs, regional airport planning has contributed to some decision making.
In the New York region, for example, FAA led efforts to carry out a regional demand study looking at current traffic at regional airports—both the primary and smaller regional airports—as well as surveying passengers to determine where they came from in the region and whether alternate airports might be closer than the three congested primary airports. The study also identified the development needs for regional airports. Based in part on the study’s forecasts, the Port Authority acquired Stewart International Airport north of the city in 2007. The newly acquired Stewart International Airport is seen by the Port Authority to have the potential to ease some congestion pressure on other Port Authority airports—without removing passengers from the Port Authority system—if airlines can be attracted to serve the local population. By contrast, the Port Authority has not included the other potential alternate airports identified in the demand study—Westchester County and Long Island MacArthur Airport—in regional airport planning currently being undertaken by the Regional Plan Association, which is a nonprofit, civic group that has received funding from the Port Authority to develop an airport system plan. These alternate airports are outside the Port Authority system, and Regional Plan Association officials stated that non-Port Authority airports would be invited to participate in finalizing the regional plan if draft recommendations included them. Figure 5 illustrates, as of 2005, the service areas for the main airports in the New York-New Jersey region and shows the location of six other airports in the region, including Stewart International.

[Figure 5: New York-New Jersey region airport service areas; airports shown include Bradley International (BDL), Stewart International (SWF), Westchester County (HPN), Newark Liberty (EWR), LaGuardia (LGA), Long Island MacArthur (ISP), John F. Kennedy (JFK), Trenton Mercer (TTN), Philadelphia International (PHL), and Atlantic City International (ACY).]

FAA officials and others pointed to the regional airport planning in the Boston region as a model effort. Officials with Massport, the sponsor of Logan International Airport (Logan) in Boston, and planning officials began to seek regional solutions in the 1990s after it was determined that Logan, the region’s primary commercial facility, would be unable to fully accommodate growing regional demand and that there were no options to construct a new primary airport. Regional airport planning has included a series of demand studies and a RASP that concentrated on finding and implementing a mix of solutions. The resulting plans recommend improvements at Logan; the increased use of underutilized airports in the region and improvements at these airports; as well as the expanded use of other modes of travel, notably high-speed rail in the Northeast Corridor. FAA played an important role in the Boston region by supporting regional airport planning and incorporating the regional approach into its decision making for airport capital improvement projects. The regional airport planning in the Boston region was led by local airports and facilitated by the FAA regional office, which provided funding for studies as well as taking a leading role in the most recent demand study and the development of the 2006 RASP.
FAA’s involvement in the regional airport planning was credited to the interest of the agency’s regional staff. Massport officials explained that regional airports would have been reluctant to participate in a project headed by Massport, and the involvement of the Massachusetts Aeronautics Division and FAA helped convene stakeholders and get people to participate in the process. FAA also worked with regional airports to develop capital plans to identify needed airport improvements that were consistent with the RASP, according to regional FAA and Massport officials. The Boston region does not have an airport among those forecast to be significantly congested in FAA’s FACT 2 report, assuming planned improvements occur, and FAA and Massport officials give some credit to the implementation of regional airport planning in reducing congestion. Officials at Massport point to improvements at Logan—which included a new runway, new taxiways, reductions in minimum spacing between aircraft, and issuance of peak period pricing mechanisms—as well as to the regional airport planning as being important to addressing the capacity challenges that faced the airport. Furthermore, the region was significantly less congested following the September 11, 2001 (9/11), terrorist attacks, with passenger levels at Logan dropping 18 percent from 2000 to 2002, although this traffic has largely returned. Following the 9/11 attacks, there was an increase in passengers using Amtrak to travel to New York City, demonstrating the potential for high-speed rail to complement air service and potentially reduce airport congestion. The realization of the goals of regional airport planning in the Boston region was greatly aided by the decision of Southwest Airlines to initiate service at T.F. Green Airport near Providence, Rhode Island, in 1996, and at Manchester-Boston Regional Airport in Manchester, New Hampshire, in 1998, and airline officials pointed to regional airport planning as a factor facilitating these decisions. Southwest officials stated that the regional demand study pointed to potential demand near these airports and helped to pique their interest, in addition to their own analysis, in exploring expanded service in the New England region. Furthermore, airport improvements at T.F. Green Airport and Manchester-Boston Regional Airport allowed for the expansion. The airline debuted service at one gate at T.F. Green. Due to the strong demand, the airline requested that the airport construct a terminal expansion, which allowed Southwest to expand to four gates over the next couple of years. According to airline officials, both of these alternate regional airports met the airline’s expectations. The MPOs that conduct regional airport planning have no authority over which airport improvement projects are priorities in their regions and, as a result, the RASPs they produce have little direct influence over airport capital investment and other decisions. Because MPOs do have authority over surface transportation projects—only projects prioritized by MPOs are eligible to receive federal funding from the Federal Transportation Administration (FTA) and the Federal Highway Administration (FHWA)— MPOs can directly influence surface projects that affect airport access, but cannot directly affect the capacity of these airports. None of the airports we met with during the course of our review are required to consider or incorporate the recommendations of RASPs into their ALPs or airport master plans. 
In most of the six regions that have developed or are developing RASPs, airport officials—such as those at LAX and SFO—stated that they would consider the region's perspective in an informal fashion, even though recommendations included in RASPs are not binding. Other airports we interviewed were more guarded about their consideration of regional airport planning conducted by MPOs. Airport officials at John Wayne Airport in the Los Angeles region stated that the region's RASP is not a primary driver of airport decision making, in part because regional planning priorities are likely to differ from those of the airport, particularly regarding mitigation strategies for surrounding communities. Airport officials at Philadelphia International stated that the airport does its own planning without input from regional planners, although the airport is active in the development of regional airport plans. As a result, regional priorities may not be reflected in the decision-making documents that guide capital improvements at airports. Ongoing research being developed for the ACRP similarly notes that while regional airport planning could fill the gap between airport- and national-level planning, most regional airport planning conducted to date has not been influential, in part because airport sponsors retain authority over planning and development decisions. According to FAA, it is also not required to consider MPO-developed RASPs, even when these plans are funded with FAA grants. FAA officials stated that the inclusion or absence of a project in a RASP had little influence on whether the agency approved AIP grants for an individual airport project, serving in some cases as a tiebreaker but not guiding project prioritization. FAA considers AIP grants for capital improvements on an airport-by-airport basis, based on national criteria. Airports justify improvement projects individually using forecasts from their own service areas, and the national criteria that FAA uses do not consider how improvements fit into a regional context, except during the environmental review process. As we have previously discussed, FAA regional offices have some latitude in determining which projects to fund, and FAA's funding and support of regional airport planning itself may vary within the agency and by project. Thus, while FAA guidance and headquarters staff encourage regional airport planning, two MPOs in regions with significantly congested airports have had difficulty obtaining funding for regional airport planning in recent years. For example, in the Philadelphia region, MPO officials told us the MPO sought funds to assess capacity and demand across the airports in its region with a demand study similar to the ones completed with FAA funding in Boston and New York. FAA officials told us that they rejected the study for Philadelphia because it would have included a significant marketing component—which is ineligible for AIP funding—and because, in their view, it would be poor timing for the MPO to conduct a capacity analysis while the environmental impact statement for proposed improvements at Philadelphia International was under way. An MPO official told us that regional planners hoped to use the results of the study to develop recommendations and prioritize improvement projects in their region—as had been done with the FAA-supported demand study and related RASP in the Boston region.
Additionally, FAA officials told us that AIP funding to the MPO had declined in recent years, but that FAA did not view other recent MPO proposals as useful. FAA has not provided funds for regional airport planning in Los Angeles since 2005, although the MPO has developed a RASP in the meantime without FAA funding. According to FAA regional officials, the regional airport planning carried out by the MPO offered impractical solutions—notably, a proposal to construct magnetic levitation (maglev) train lines to regional airports—that were not financially feasible. MPO officials in Los Angeles pointed to other aspects of the RASPs developed by the MPO every 4 years, such as the forecasting and consideration of alternate regional airports, as evidence of their value, and expressed frustration that technical support from FAA was difficult to obtain. For MPOs that want to carry out continuous planning, the lack of consistent funding may limit their ability to maintain staff and conduct planning on an ongoing basis. FAA's guidance on airport system planning points to the importance of continuous planning, but FAA's AIP funding process is not structured to prioritize it. Rather, projects are evaluated on a case-by-case basis for AIP funding, which favors projects with discrete products, although the AIP handbook notes that funding is available for continuous planning, which may include continuing surveillance and coordination of the airport system, periodic plan reevaluation, special studies, and the updating of RASPs. This is in contrast to the MPO-led surface transportation planning process, which, according to FTA and FHWA guidance, was developed to ensure continuous planning, among other things. The MPOs in two of the regions with potentially significantly congested airports maintain aviation planning staff to carry out regional airport planning on an ongoing basis. In each of these regions, the MPOs received AIP grant funding from FAA for regional airport planning for a number of years, but this funding has been curtailed in recent years. In Los Angeles, the MPO has received no AIP funding since 2005 and has continued to carry out regional airport planning using its own resources. While it received AIP funding in recent years, the MPO in Philadelphia limited the scope of its regional airport planning to special studies—rather than continuous system planning—according to regional planning officials, as the result of reduced FAA support for continuous system planning. According to ongoing research being developed for the ACRP, these two regions are among a handful of MPOs nationwide that employ aviation specialists—staff that could be involved in the type of monitoring involved in continuous planning. The advisory nature of regional airport planning and its lack of a connection to capital investment decisions are not the only hindrances to regional airport planning and implementation. We also found that a number of competing interests can derail a plan and prevent implementation. When the individual interests of airports, communities, and airlines are not aligned, for example, they can hinder regional airport planning and implementation. Furthermore, the lack of funding for planning can also be a hindrance. Additional hindrances include the following: Airport interests. A major hindrance to regional airport planning and implementation is the differing interests of airports in a region, which may conflict with an integrated regional approach.
Airport interests may include maximizing revenue generation and protecting markets—including high-value business or long-haul markets. As a consequence, regional airport planning may be more difficult to undertake and implement in locations where airports see themselves as being in direct competition with other airports in their region, particularly if they perceive that such planning may divert traffic or resources to competing airports. For example, airport officials in Philadelphia told us that they do not want to support federal efforts, including regional airport planning, that could lead to losing or diverting flights from their airport to other airports in the region, because the City of Philadelphia—which owns Philadelphia International—does not want to lose revenue generated at its airport to other airports. In other regions, we found that distrust between some airports has limited the range of solutions considered in RASPs. For example, MPO officials and Los Angeles World Airports officials told us that other airport sponsors in the region—including those for airports in Long Beach, Burbank, and Orange County—have viewed regional airport planning with suspicion, notably the planning undertaken by the now-defunct Southern California Regional Airport Authority. This authority theoretically had the ability to force airports to accept more traffic. Regional airport planning carried out by the MPO, however, does not include such authority, and since 2001 RASPs have been developed that respect the physical constraints and legal restrictions at individual airports in the region. Community interests. Some local community interests, such as those focused on noise or environmental concerns, may impede or limit regional airport planning and implementation. As the result of community pressure, for example, two airports in the Los Angeles region—John Wayne Airport in Orange County and Long Beach Airport—have legal agreements and requirements, respectively, that allow them to limit the capacity of their facilities. MPO officials in the region told us that airport sponsors at these airports primarily participated in regional airport planning to ensure that existing limits on capacity or expansion were respected. These airports are forecast to need additional capacity by 2025 because they are not expected to be able to meet passenger demand. Other airports in the region are also working to respond to community pressure to limit growth or operations, and such agreements may further restrict the available airport capacity under certain conditions in the region. For example, the airport sponsor of LAX has agreed to limit the number of operations at the airport in response to community concerns about noise, air quality, and the quality of life in surrounding communities. In addition, the airport sponsor at Bob Hope Airport in Burbank applied to FAA to make a voluntary nighttime curfew permanent, which had the potential to put pressure on nearby airports, such as LAX or the airports in Ontario and Van Nuys. While FAA denied the application, even voluntary agreements of this type reduce the regional options for meeting passenger demand for air travel. Airline interests. Airlines act independently of both airports and communities, and their independence may complicate efforts to plan regionally. Airlines make decisions about which airports to serve and the level of services they will offer according to their business and network plans, and such decisions may not align with airport and MPO plans.
Most notably, in a congested region, planning officials might suggest that traffic migrate to lesser-used alternate airports, as they have in Los Angeles. However, this suggestion may conflict with the business plans of airlines that already serve primary airports in a region. Such airlines generally want to focus their traffic in a city at one major airport, for both cost and revenue reasons. In addition, while MPOs may want to develop capacity in the system, this development may not align with the objectives of airlines. Individual airlines may prefer to sell limited capacity at a premium price or limit the ability of other airlines to provide competing service. FAA guidance on airport system planning points to the importance of understanding airline business models when suggesting the use of alternate regional airports. Regional planning and airport officials in several of the regions we visited noted that they concentrated on attracting new entrants to the market or airlines whose business plans included serving alternate airports—primarily low-cost carriers—for service at these airports. The use of demand management strategies that provide incentives for airlines to serve alternate regional airports—or a disincentive to serving congested primary airports—could also serve to align the interests of airlines and airports or regional planners, according to some airport officials. Airport sponsors and MPOs in our selected regions indicated that they had little influence over airline service levels and locations, which made it difficult to align divergent and sometimes competing interests. Regional planners with whom we met also indicated that they found it difficult to engage airlines in their regional airport planning. For example, MPO officials in Philadelphia reported that airline representatives had attended only one planning meeting. Likewise, in San Diego, an airline representative was included on the advisory committee, but airlines were not participating in regional planning. According to airline representatives, airlines are typically not involved in regional planning, although they may participate in airport-level planning, given their interest in controlling costs. An additional complicating factor is the difference between airport or regional planning horizons and airline planning horizons. Whereas airports use 5- to 10-year forecasts to develop master plans for capacity investments, and RASPs may be updated every 2 to 5 years, airlines' assets are largely mobile and can be moved from one market to another with relative ease. Legal restrictions. Current airport revenue rules generally do not allow airports to price their services regionally; therefore, using pricing to balance supply and demand among various airports is not possible. Airfield revenues may not exceed the aggregate costs to the airport sponsor of providing airfield services and airfield assets currently in use, with certain exceptions. The fees that airports typically charge airlines to operate at individual airports—including rental charges and landing fees—are based, according to FAA, on the historical costs of operating the facility. Improving alternate airports can make them more expensive, since the costs for such improvements become part of the rate base charged to airlines. For example, in the Los Angeles region, fees for airlines at the more-congested LAX are lower than at less-congested airports in the region, such as Ontario International, in part due to previous improvements at the smaller airport.
Furthermore, airport-airline lease agreements—which, according to officials, can prohibit some airport sponsors from transferring funds from one airport to another, even when the airports have the same sponsor—can also limit the options available for regional airport planning. As a result, it may be challenging to adjust these fees in a regional context to provide financial incentives to airlines to serve less-congested airports if these airports have higher operating costs. From our in-depth analysis, we identified a number of factors that aided regions in the development and implementation of regional airport planning. In general, we found that when stakeholders were supportive of regional airport planning, the plans resulting from these efforts were more likely to be used. More specifically, the factors that helped align these various stakeholders include the following: Legal considerations. Legal considerations served to facilitate planning in two of our selected regions. After residents of San Diego County rejected a proposal to develop a second airport, a law was passed that required the county's airport authority to develop a RASP by June 30, 2011. The law requires the airport authority—which operates San Diego International—to prepare and adopt a plan that identifies workable strategies to improve the performance of the regional airport system. In the San Francisco region, a state agency, the Bay Conservation and Development Commission, controls the permitting process for development within 100 feet of the shoreline of San Francisco Bay. Both SFO and Oakland International airports sit on land adjacent to the bay and therefore are subject to the commission's review and permitting process, depending upon the type of development projects these airports propose. The commission has stated that it would deny projects—including the construction of new runways—that would affect the bay unless the airports exhaust all reasonable alternatives to providing capacity. In practice, the region's RASP development process has become the venue to explore such alternatives. Constraints on infrastructure. A number of constraints on airport construction—geographic, environmental, and political—spur regional airport planning. In Boston, for example, Logan is largely locked into its existing footprint, given its waterfront location and surrounding community. Officials in several of our selected regions mentioned similar constraints as reasons for participating in regional airport planning. In San Francisco, filling the bay to build capacity would be extremely costly and may be unlikely, given environmental concerns. Likewise, the terrain surrounding San Diego International and the airport's small footprint limit expansion opportunities. Each of these regions is using regional airport planning to help identify additional options for providing transportation capacity. MPO and FAA interest and involvement. Regional airport planning was more likely to occur when an MPO or FAA took an active interest in advancing regional airport planning. In several of the regions we visited, for example, MPOs had aviation planners who carried out system planning. MPO planners in Philadelphia have engaged in a variety of regional airport planning activities, including developing a RASP and prioritizing airport projects for state funding. MPO officials are also active in Los Angeles at the Southern California Association of Governments.
Over the course of many years, this MPO has developed several RASPs, and FAA has provided funding for some of this planning. The MPO has also created and maintained a sophisticated modeling tool, allowing it to do airport choice modeling for the entire region. Ongoing efforts to create and update RASPs under way in San Diego and San Francisco are being undertaken jointly by MPO and airport officials. While some FAA and airport officials questioned the regional airport planning expertise of MPOs, MPOs regularly prepare surface transportation plans, and this experience may aid them in developing RASPs. MPOs are required to develop long-range (20-year) transportation plans and short-range (4-year) Transportation Improvement Programs (TIP) that identify strategies for operating, managing, enhancing, maintaining, and financing a metropolitan area's transportation system, among other things, and the elements suggested for RASPs are similar to those included in these plans. For example, the surface transportation plans prepared by MPOs monitor existing conditions, carry out forecasting, and identify current and future transportation needs and potential improvement strategies. FAA guidance for airport system planning also includes an inventory of the current aviation system, forecasting, an identification of air transportation needs, and the consideration of alternative airport systems. In a survey conducted of MPOs nationwide for a prior GAO report, nearly 19 percent of MPOs reported that they engaged in regional airport planning—sometimes as a result of state requirements. We found that 17 (41 percent) of the 41 largest MPOs that responded to the survey—those serving populations of over 1 million people—indicated that they engaged in regional airport planning. Of these 41 MPOs, 39 have a large- or medium-hub airport within their jurisdictions. Airports noted that outside groups, such as MPOs, nonprofit groups, or FAA, can be useful in establishing regional airport planning because they can mitigate some of the suspicion that might be present if airports, particularly dominant ones, lead the planning. According to ongoing research being developed for the ACRP, MPOs can offer airport managers truly regional perspectives on planning; data and analyses on travel behavior and demand in a geographically broad area; and a neutral "table" at which airport managers and other key stakeholders can sit to work through coordination options and opportunities. Establishing a neutral table was especially helpful in the Boston region, where FAA took an active role in helping to formulate a RASP and then to implement the recommendations. FAA regional officials helped develop the region's 2006 RASP by facilitating meetings among potentially reluctant stakeholders and leading an assessment of regional demand, among other tasks. The FAA regional office then worked actively with airports in the region to integrate RASP recommendations into their capital plans and reviewed these plans against the RASP when making grant decisions. Political benefit. In several of the regions we visited, airports supported regional airport planning to obtain political acceptance for airport improvement projects. Given sensitive environmental considerations, SFO and Regional Airport Planning Committee officials told us that they worked together on the RASP because any significant capital improvements would need the support of the regional body.
Even when regional airport planning is undertaken without the leadership of an MPO, there can be political benefits. In the New York region, the Port Authority is funding a project by the Regional Plan Association to look at ways to build capacity within the Port Authority system. As part of this effort, Regional Plan Association officials told us they planned to poll the region's residents before and after their planning process regarding delay and the public's support for potential solutions. They anticipate that polling demonstrating greater public awareness of the problems posed by delays will build support for potential solutions, including less-popular options such as runway construction or other improvements at the three major airports in the region. Airport benefit. When airport objectives complement each other—whether to increase, decrease, or maintain current flight levels—regional airport planning recommendations may be reflected in airport improvement decisions. In regions where a capacity-constrained primary airport wants to specialize in particular types of flights or service, for example, other airports in the region may benefit if they are interested in expanding other types of flights or services. Furthermore, we found that if a region's primary airport or airports are engaged in regional airport planning, their involvement may engender momentum for planning and result in additional financial resources or other support. In Boston, a region generally seen as successful at regional airport planning, FAA officials told us that their efforts to shift traffic away from Logan were aided by Massport's interest in reducing the number of smaller feeder flights that were consuming an increasing amount of the airport's runway capacity. Massport's interest in making capacity available for international and long-haul flights rather than short-haul flights coincided with the interests of regional airports in New Hampshire and Rhode Island that wanted to expand service. Officials at SFO also expressed enthusiasm for renewed regional airport planning in their region. An airport official told us that such an effort might allow SFO to focus on a more-targeted segment of the aviation market, notably long-haul and international flights, while allowing alternate airports to expand shorter-haul domestic flights. SFO, together with the region's other primary airports, has provided financial support to the regional planning process. In each of these cases, the region's primary airport or airports took an active role in regional airport planning, both by acting as participants and by contributing financial resources to sustain the efforts. The national airspace system is plagued by congestion and delay, with nearly one in four arriving flights delayed at major airports, even though a majority of the nation's airports still have adequate capacity. FAA and others forecast that more airports and regions will be congested in the future, even if planned infrastructure and technological improvements occur. However, many regions that contain congested airports also have alternate airports that may be able to provide some congestion relief, as well as other options, including using other modes of transportation such as high-speed rail. Regional airport planning can identify solutions to help relieve aviation congestion that airport-level planning cannot. RASPs should include the range of elements identified by FAA for airport system planning to help establish a viable system of airports.
While FAA reviews RASPs and other regional system plans to determine whether they are eligible for FAA funding, in those cases where RASPs have been completed, FAA does not necessarily review the plans for conformance with FAA guidance or standards. Without a review process, FAA may not have confidence that RASPs are of a sufficient quality to guide decision making or to ensure that they are integrated with local airport-level plans, state airport system plans, and the NPIAS. Nor is there an incentive for FAA to work with regions to help ensure that RASPs meet certain standards, in terms of both content and quality. Except in the Boston region, the recommendations made in the RASPs that we reviewed have not been systematically integrated into the airport capital plans that currently guide airport decision making and FAA funding. Rather, both airport sponsors and FAA can choose to ignore RASPs, or to use them selectively, even though the federal government has contributed millions of dollars for their development. Congress, however, in creating the current AIP in 1982, indicated that FAA may give priority to projects that are consistent with integrated airport system plans, such as RASPs. If RASPs are ignored, the time, effort, and resources that MPOs, airports, and other regional bodies expend on these efforts—as well as FAA's grant support—are not filling the gap between airport- and national-level planning efforts or ensuring that funding is used most efficiently to manage capacity within regions with large- or medium-hub airports. To ensure that federal AIP funds are employed to their maximum benefit and to improve the level of regional- and airport-level coordination, we recommend that the Secretary of Transportation direct the Administrator of FAA to take the following two actions: 1. Develop an FAA review process for regional airport system plans to ensure that they meet FAA standards and airport system planning guidance, as well as to provide technical support for regional planners undertaking such planning. 2. Use its existing statutory authority to give priority to funding airport projects that are consistent with RASPs. We provided a draft of this report to DOT for its review and comment. DOT provided technical comments in an e-mail message on December 11, 2009, which we incorporated into this report as appropriate. In reviewing the draft's second recommendation to require that the RASPs be integrated with airport-level plans so that they are consistent and tied to FAA funding decisions, DOT officials indicated that they did not believe they had the authority to require airports to incorporate RASP recommendations unless airports concurred. As a result, to create incentives for airports to work with MPOs and other regional organizations, we modified the second recommendation for FAA to use its existing statutory authority to give airport projects that are consistent with RASPs greater priority for AIP funding. DOT generally agreed to consider our recommendations. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Transportation, and the Acting Administrator of the Federal Aviation Administration. The report is also available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staffs have any questions concerning this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix IV. To identify regions with potentially congested airports, we used the Federal Aviation Administration's (FAA) 2007 report entitled Capacity Needs in the National Airspace System, 2007–2025 (FACT 2). Using both demand and capacity forecasts, this report identifies airports that it predicts will face significant capacity constraints by 2015 and 2025. To obtain clarification on the methodology employed, we met with officials at both FAA and The MITRE Corporation to discuss the study's design and findings and reviewed both published reports and unpublished work—including the scores received by airports in the four assessments used to measure demand and capacity—supporting the FACT 2 study. Appendix II provides more information about the methodology used in the FACT 2 report and its implications. To evaluate the challenges facing regions with potentially congested airports, the extent of regional airport planning being undertaken, and the factors that have aided or hindered planning and the implementation of regional airport plans, we carried out an in-depth analysis of selected regions. We identified regions for this analysis using the following four criteria: (1) existing and predicted aviation congestion based on FAA's FACT 2 study, (2) whether regions had sought funding from FAA to carry out regional airport planning and the extent of the funding provided by FAA, (3) whether regional airport planning had occurred, and (4) whether regions were served by a single airport or multiple airports and the extent to which multiple airports in a region were governed by the same sponsor. Our assessment of regions with congested airports included Los Angeles, New York, Philadelphia, San Diego, and San Francisco. We also assessed regional airport planning activities in Boston, although this region is not among those with airports that FACT 2 forecast to be significantly capacity constrained. FAA officials and experts pointed to the Boston region as having undertaken successful regional airport planning. Each of the regions we selected received funding from FAA for regional airport planning from 1999 to 2008, and regional airport planning has been undertaken in each region. Three of the regions are served by multiple airports—sometimes under the same sponsor—while Philadelphia and San Diego are in regions with one major airport. For each of the regions we selected, we reviewed regional airport planning documents and interviewed officials at FAA airport district offices, airport officials or sponsors, state aviation departments, and metropolitan planning organizations (MPO). These interviews addressed the following topics: The nature of the regional airport system, including challenges involving capacity constraints or congestion and local constraints. Participants or stakeholders in the regional airport planning process. The extent to which regional airport plans are used by airports, MPOs, states, and others to guide airport decision making and FAA airport funding decisions. The inclusion of intermodal access and other ground transportation in regional airport plans. Factors that aid or hinder regional airport planning or the implementation of regional airport plans.
We interviewed FAA officials in the Office of Airport Planning and Programming to collect information about the types of plans involved in aviation planning; the nature and extent of regional airport planning in congested regions; the history of such regional planning; the roles of various stakeholders, including FAA; and the outcomes associated with regional airport planning to date. We also reviewed FAA's advisory circular on the airport system planning process and related documents from FAA to summarize the guidance that FAA provides to airport system planners, including those in metropolitan areas. To analyze FAA funding for regional airport planning, we obtained grant data from FAA for metropolitan system planning in the agency's airport improvement program (AIP) for fiscal years 1999 through 2008. These grants were awarded primarily to MPOs, but one state and several airport sponsors also received grants. To assess the reliability of these data, we reviewed the quality control procedures applied to the data by the Department of Transportation and subsequently determined that the data were sufficiently reliable for our purposes. To gain an understanding of the congested aviation regions and the potential impact of regional airport planning, we spoke with industry experts, including those in academia, airline industry representatives, and regional planners. We interviewed academics at the Massachusetts Institute of Technology and the University of California at Berkeley regarding work that they had undertaken on regional airport systems. We discussed airport system planning and congestion with the Air Transport Association, the National Association of State Aviation Officials, the Eno Transportation Foundation, and Airports Council International. To discuss the results of regional airport planning in the Boston region, we interviewed officials with Southwest Airlines. We met with government officials and industry experts at a Transportation Research Board conference on aviation system planning. We also reviewed various reports and studies, including research on airport systems, congested regions, intermodal issues, and planning, as well as on the use of alternative airports, published by authors at the Massachusetts Institute of Technology, the University of California at Berkeley, GRA Incorporated, and the Airport Cooperative Research Program (ACRP) of the Transportation Research Board, among others. Finally, we reviewed previous GAO reports, including our prior work on aviation infrastructure, the Next Generation Air Transportation System (NextGen) program, MPOs, and high-speed rail. We conducted this performance audit from September 2008 to December 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The purpose of the FAA FACT 2 study is to analyze the extent to which airports and metropolitan areas in the United States will face aviation capacity constraints in the future. The study developed forecasts of expected operations (takeoffs and landings), demand, and the capacity to handle traffic at 56 airports and certain associated metropolitan areas.
By comparing, for each of three time frames (2007, 2015, and 2025), an airport's expected demand with its projected capacity, the study then measured, in four different ways, the extent to which each airport may experience congestion and delay. The study used specific thresholds to designate whether an airport would be capacity constrained according to each of the four capacity assessments. To be so designated, an airport must be found to be capacity constrained across all four assessments for a given time frame. According to FAA and MITRE officials with whom we spoke, the study was designed to identify which airports would be the most capacity constrained. Because of the focus of the study, some airports that are also likely to face some degree of capacity problems are not among those identified as capacity challenged in the study. Using demand and capacity forecasts—each of which is evaluated in two different ways—the FACT 2 study produced four assessments of the extent of capacity challenges at each airport in 2015 and 2025. The FACT 2 study used two different forecasts of future demand, both of which use economic, demographic, and airline industry information (such as expected fares and the degree of competition) to assess the expected level of future aviation operations at each airport. Both forecasts are also generally "unconstrained," meaning they predict the extent to which demand will grow at an airport regardless of whether that airport would actually be able to handle all of the traffic. Key aspects of the forecasts are summarized as follows: Terminal Area Forecasts (TAF): Produced by FAA each year, TAF forecasts project expected operations demand on an airport-by-airport basis, with separate forecasts for air carrier, commuter and air taxi, military, and general aviation operations. Future Air Traffic Estimator (FATE) forecasts: Produced by MITRE, FATE forecasts project origin-to-destination traffic between metropolitan areas within the United States. This model then analyzes how airlines are likely to schedule flights to meet that demand, based on projections about which airports within a city, which flight routes, and which types of aircraft will be used for each flight segment. The results are then restated on an airport-by-airport operations basis and supplemented by the number of projected international and general aviation operations at each airport. FACT 2 used two methods to evaluate airport capacity, which then fed into the following two models of capacity constraint: annual service volume (ASV) modeling and national airspace system (NAS) modeling. Both models assessed existing capacity and, for the 2015 and 2025 forecasts, took into account planned additions or improvements to runways, technologies, and air traffic procedures. For the 35 Operational Evolution Partnership (OEP) airports and for Oakland International Airport, the 2025 analysis also took into account some elements of the expected improvements offered by NextGen implementation. ASV: The ASV is the level of capacity—expressed in the number of operations during a year—at each airport that, if fully utilized, would be expected to be associated with a given level of average delay. An FAA model established the ASV level by examining existing data on the relationship between the level of operations and extent of delay across a set of runway configurations in varied weather conditions at each airport.
The model took into account the expected capacity-enhancing improvements and simulated, based on past experience, an ASV level that would be associated with a 7-minute average queuing delay at each airport. NAS-Wide Modeling: While the ASV method establishes the level of demand that would be associated with an average level of delay, NAS modeling estimates the extent of delay that will result from a specific level of traffic, given an amount of capacity. The NAS modeling begins with "benchmark" airport capacity measures, which were established for most of the FACT 2 airports in an earlier study based on the most commonly used airfield configuration in three weather conditions, information on the weight classes of the fleet at the airport, and other operational factors. Future capacities were then estimated based on any planned improvements at the airport, planned changes in ATC procedures, and NextGen improvements. The key findings of the FACT 2 study are that, assuming all capacity improvements—including those associated with NextGen for 2025—are taken into account, 6 airports will be capacity constrained in 2015 and 14 (an additional 8) will be capacity constrained in 2025. For an airport to be designated as capacity constrained in one of the study's forecast years, the airport had to be designated as capacity constrained in each of the following four assessments: ASV with TAF forecasts: The ASV was compared with the TAF demand forecasts to obtain a ratio of forecasted demand to ASV. A threshold of 80 percent was used in designating airports as capacity constrained, meaning that forecasted demand was at least 80 percent of the ASV. ASV with FATE forecasts: The ASV was also compared with the FATE demand forecasts to obtain a ratio of forecasted demand to ASV. The same 80 percent threshold was used in designating airports as capacity constrained. For example, for Dallas–Fort Worth International Airport (DFW), the 2007 ASV ratio was 0.78 with the TAF demand forecast and 0.81 with the FATE forecast, indicating that the airport was just edging toward having a capacity problem at that time, according to the ASV assessments. For the 2025 forecasts at DFW, the ratios are 1.09 and 1.15 under TAF and FATE, respectively, indicating that, according to the ASV assessments, DFW will become substantially more delayed by 2025.
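To make the threshold arithmetic concrete, the ASV assessments reduce to a simple ratio test. The following sketch is purely illustrative—the function name is ours, but the 0.80 threshold and the DFW ratios are those reported above:

    # Illustrative only: the ASV assessments flag an airport when its
    # ratio of forecasted annual demand to ASV reaches 80 percent.
    def exceeds_asv_threshold(demand_to_asv_ratio, threshold=0.80):
        return demand_to_asv_ratio >= threshold

    exceeds_asv_threshold(0.78)  # DFW, 2007, TAF: False (just under the threshold)
    exceeds_asv_threshold(0.81)  # DFW, 2007, FATE: True
    exceeds_asv_threshold(1.09)  # DFW, 2025, TAF: True (demand exceeds the ASV)
    exceeds_asv_threshold(1.15)  # DFW, 2025, FATE: True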
NAS with TAF forecasts: This NAS assessment uses a "network queuing" model that simulates how traffic flows across the NAS, given the level of demand on routes and the extent of capacity at airports. This analysis measures the following for each airport: (1) average scheduled arrival delay, (2) arrival queue delay, (3) the percentage of scheduled arrival delay caused by local conditions, and (4) departure queue delay. An advantage of the NAS method is that, by analyzing the relationship between operations and capacity across the network rather than on an airport-by-airport basis, the model can take into account how circumstances at one airport influence delay experienced at other airports. Moreover, this analysis enables the contributory causes of measured delay at any given airport to be identified; that is, it distinguishes among delay caused by conditions at the given airport, at other airports, and in the airspace. Using this model, two different triggers can cause an airport to be designated as capacity constrained. First, the capacity-constrained designation is triggered if the airport's scheduled arrival delay is at least 12 minutes and if, in either weather condition examined, either (1) the arrival queue delay exceeds 12 minutes or (2) local conditions cause more than 50 percent of scheduled arrival delay. Using these secondary factors to supplement the scheduled arrival delay criterion limits capacity-constrained designations to airports that experience delay caused by local factors. Second, an airport can also be designated as capacity constrained if the airport's departure queue delay—which is considered to be fully caused by local factors—is at least 12 minutes. NAS with FATE forecasts: The second NAS assessment uses the NAS-wide modeling approach with the FATE demand forecasts. Instead of rerunning the NAS model with the FATE forecasts, outputs from the NAS/TAF runs are used, and the differences between the FATE and TAF demand forecasts are examined to calibrate how model outputs would likely have differed under the FATE forecasts. This assessment measures only average scheduled arrival delay. Under this model, an airport is designated as capacity constrained if the airport's average scheduled arrival delay is at least 12 minutes. In addition to identifying airports that would be capacity constrained in the future, the FACT 2 study also identified metropolitan areas that are likely to have significant aviation capacity shortfalls. The study looked at Metropolitan Statistical Areas—geographic areas defined by the Office of Management and Budget—or combinations of such areas in the case of some larger metropolitan areas, and analyzed the expected aviation demand and capacity at the relevant airport or airports within those areas. In determining which metropolitan areas should be designated as capacity constrained, FACT 2 examined only those metropolitan areas that either contained a large- or medium-hub airport or contained at least two small-hub airports that the FACT 2 airport analysis had identified as capacity constrained. A metropolitan area could be designated in FACT 2 as capacity constrained for any of the following three reasons: the metropolitan area contained a large-hub airport that the study deemed capacity constrained and no other secondary airports served the metropolitan area; the metropolitan area contained at least two large hubs, both of which were identified as capacity constrained; or the ratio of areawide demand to areawide capacity exceeded 0.8. For the third test, the study conducted an analysis of demand and capacity across the airports in each area. It used projected airport benchmark capacities and, using historical weather conditions, converted these hourly capacities into an annualized average expected capacity level for each airport in each forecast year. For each of the demand forecasts (TAF and FATE), capacity and demand across the relevant airports were summed for each forecast year. If the resulting ratio of metropolitan area demand (for either TAF or FATE) to metropolitan area capacity exceeded 0.8, the metropolitan area was considered to be capacity constrained in that year.
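Taken together, the airport-level designation amounts to a conjunction of the four assessments, and the metropolitan-area test is a simple ratio of sums. The sketch below is our own illustrative restatement—the function and parameter names are hypothetical, the thresholds are those reported in FACT 2, and the weather-condition detail of the NAS/TAF triggers is simplified to a single set of values:

    # Illustrative restatement of the FACT 2 designation rules; all names
    # are ours. Delays are in minutes; ratios are demand divided by capacity.
    ASV_THRESHOLD = 0.80
    DELAY_THRESHOLD = 12
    LOCAL_SHARE_THRESHOLD = 0.50

    def nas_taf_constrained(sched_arrival_delay, arrival_queue_delay,
                            local_share_of_delay, departure_queue_delay):
        # Trigger 1: high scheduled arrival delay attributable to local factors.
        trigger1 = (sched_arrival_delay >= DELAY_THRESHOLD and
                    (arrival_queue_delay > DELAY_THRESHOLD or
                     local_share_of_delay > LOCAL_SHARE_THRESHOLD))
        # Trigger 2: high departure queue delay, treated as fully local.
        trigger2 = departure_queue_delay >= DELAY_THRESHOLD
        return trigger1 or trigger2

    def airport_constrained(taf_asv_ratio, fate_asv_ratio,
                            nas_taf_result, fate_sched_arrival_delay):
        # An airport is designated only if all four assessments agree.
        return (taf_asv_ratio >= ASV_THRESHOLD and
                fate_asv_ratio >= ASV_THRESHOLD and
                nas_taf_result and
                fate_sched_arrival_delay >= DELAY_THRESHOLD)

    def metro_constrained(airport_demands, airport_capacities):
        # Metropolitan test: summed demand over summed capacity above 0.8.
        return sum(airport_demands) / sum(airport_capacities) > 0.80

The conjunction across all four assessments is what makes the published list conservative: an airport that falls just short on any single assessment drops off the list entirely, a design choice discussed further below.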
Long-term forecasts of airport demand and capacity, such as those undertaken in FACT 2, naturally face uncertainties. FACT 2 looked almost 20 years into the future. A number of conditions could change over the course of those years and affect the accuracy of the forecasts, including unexpected changes in regional economic growth patterns, demographic movements, new airline industry business models, and the macroeconomy. New industries may also unexpectedly influence business and societal patterns. Since the time that FACT 2 was conducted, macroeconomic conditions have already changed considerably. In particular, because the TAF and FATE demand forecasts were conducted prior to the current economic downturn, they are likely considerably higher than demand forecasts would be if they were conducted today. The results of the FACT 2 study are affected not only by forecasting uncertainties but also by the study's purpose and design. According to officials from FAA and MITRE with whom we spoke, the FACT 2 study was intended to identify airports that will be highly capacity constrained—not just airports that may have some congestion and delay problems. In fact, the published study findings present only a list of airports that were found to be highly capacity constrained and do not report the underlying scores on the four assessments. For our work, we examined not only the published FACT 2 study but also airports' scores on the four assessments, and we met several times with FAA and MITRE officials to gain a further understanding of the model design. We found that the objective of identifying "the worst of the worst" capacity-constrained airports was critical in structuring several elements of the FACT 2 study. These model elements are discussed more fully in the following text: Meeting all four congestion thresholds: The FACT 2 study identified airports as either congested or not, rather than presenting airports' degree of capacity constraint along a continuum. Furthermore, it required that an airport be designated as congested on all four assessments to be designated as capacity constrained. These model design elements have two implications. First, there is not a full presentation of the range of capacity constraint—the published report states only whether an airport was determined to be capacity constrained. But the underlying scores are of a continuous nature, and some airports were close to the trigger level on some criteria. Moreover, if an airport did not meet the threshold for a designation of a capacity problem on both of the NAS assessments, the ASV assessment may not have been completed, since ASV levels were reestablished for later years only if they were needed for the analysis. In short, the study's capacity-constrained designation criteria obscure the more continuous nature of the data when designating which airports are on or off the list, and a complete assessment across all four criteria was not completed in all cases. Second, because underlying scores for the assessments are not provided in the final study, the results also do not show how much greater capacity problems are likely to be at some airports than at others that receive a capacity-constrained designation. For example, the findings for the Newark and Philadelphia airports indicate that congestion and delay will be substantially more problematic in those locations, even when compared with many of the other designated capacity-constrained airports. Seven-minute average delay threshold: The ASV assessments used a 7-minute average delay threshold for determining available airport capacity, rather than the 4-minute delay that, according to FAA and MITRE officials, is more commonly used to measure delay-prone airports within ASV studies. A lower average delay threshold would have resulted in more airports meeting the capacity-constrained threshold according to the two ASV criteria.
Planned improvements: The FACT 2 findings, which are predicated on the assumption that planned improvements will be completed in a timely manner, may understate future capacity problems if improvements fall behind schedule. The two sets of 2025 findings (i.e., with and without improvements) show that the planned improvements are critical for addressing capacity problems at airports. In particular, many more airports would be predicted to have significant capacity challenges under the FACT 2 analysis were it not for the greater capacity offered by the planned improvements. We have previously reported that some airport improvement projects have faced or may face delays in either funding or implementation. If the planned improvements underlying the FACT 2 study face similar delays, then the study may understate future capacity problems. Similarly, we have reported that NextGen improvements face challenges that may affect timely implementation, including some airlines' reluctance to invest in the necessary equipment and the need for FAA to validate and certify new technologies and issue certain rules before midterm implementation can occur. In addition, airport officials with whom we spoke expressed concerns that benefits from NextGen technological gains might not be fully realized if FAA does not change air traffic management standards (such as lowering ceiling requirements for certain types of approaches) to match the new technology. FACT 2 acknowledged that more research on these types of air traffic management improvements is required. Unaccounted-for constraints: Certain constraints or local considerations that may limit either the growth at individual airports or the distribution of traffic among airports within a region were not accounted for in the FACT 2 analyses. For example, the study's unconstrained demand estimates did not take into account legal restrictions at two airports in the Los Angeles area on the number of flights that can operate or the number of passengers that can be accommodated. Thus, FACT 2 may overestimate the operations at these airports and underestimate traffic growth at other airports in the region. FAA officials told us that they did not take these constraints into consideration since FACT 2 was measuring unconstrained demand. Furthermore, they expressed the opinion that the constraints could be changed if there were local interest in doing so. Regional officials noted that the current settlement at John Wayne Airport in Orange County expires in 2015. At that point, the county and community may negotiate changes to the current agreement, according to airport officials. This could mean that the FACT 2 demand forecasts for other airports in the region—most notably Los Angeles International Airport (LAX), which came close to being designated as a capacity-constrained airport in 2025—may underestimate future growth. Unaccounted-for capacity constraints: The FACT 2 study also did not consider some potential capacity limitations. As noted in the study, when given an opportunity to comment on the FACT 2 methodology, some airport sponsors noted that an airport's taxiways and terminal gates, as well as airspace—rather than runways—can sometimes limit the number of operations that can be handled at an airport. The FACT 2 study, however, focused only on runways as the limiting capacity factor. MITRE officials told us that further analysis of these elements of capacity limitation is currently under way.
Assumed aircraft upgauging: Both demand forecasts used in FACT 2, but particularly the FATE forecast, assumed some level of upgauging in aircraft size, meaning that the average number of seats per aircraft is assumed to rise over the projection time frame. Some aviation experts with whom we spoke, however, do not believe much upgauging will occur in the coming years. If the upgauge assumptions overstate the extent to which seats per aircraft actually rise, the level of congestion in FACT 2 could be understated because more operations than indicated in the demand forecasts would be needed to accommodate the projected passenger base. Nevertheless, FAA officials discussed the analysis that underlies the upgauge modeling for FATE and noted that the FATE forecasted upgauge is driven by past experience in how airlines have chosen to serve routes as demand has risen. Moreover, they pointed out that certain fleet types that are likely to be phased out in the next decade are likely to be replaced with somewhat larger aircraft. According to the FACT 2 report, the analysis includes planned improvements affecting runway capacity for two future planning periods, 2015 and 2025. The planned improvements include the following: New or extended runways: New or extended runways were included as planned improvements. The OEP v8.0 and airport-specific planning documents were used to incorporate the runway improvements in either the 2015 or 2025 planning period. New or revised air traffic control procedures: If a new or revised air traffic control procedure was listed in the OEP v8.0 or defined by the FACT 2 analysis as consistent with a NextGen concept, it was modeled as an improvement in this study. NextGen concepts were applied only to the 35 OEP airports and Oakland International, and then only in the 2025 planning scenario, given that NextGen is still in the early planning stages. NextGen concepts for en route or oceanic operations or changes to operations on the airport surface were not included. Airspace redesign: Improvements derived from the redesign of the airspace surrounding an airport were included in the 2015 or 2025 scenario on the basis of the best information available. The redesign itself was not performed as part of this analysis. Other assumptions: The FACT 2 analysis assumed that existing environmental restrictions that affect runway capacity, such as noise abatement procedures, would continue through the FACT planning periods. Planned taxiway, terminal, or ground access improvements were not included in this analysis because they were outside the scope of the models used. FAA has provided over $34 million in funding to metropolitan regions or others carrying out metropolitan system planning in fiscal years 1999 to 2008. (See table 6.) These grant funds went to a range of efforts, including developing or updating regional airport system plans (RASP). The majority of these projects were sponsored by local MPOs or other regional planning bodies, although the state of Virginia also received a grant. Funding was also provided to several airport sponsors, including the Port Authority of New York and New Jersey; Clark County in Las Vegas; the Palm Beach County Board of Commissioners in South Florida; the Louisiana Airport Authority in the New Orleans region; and the San Diego County Regional Airport Authority, which operates San Diego International Airport.
In a survey of 381 MPOs across the country conducted for a prior report, we found that fewer than 20 percent of the 324 responding MPOs indicated that they had responsibility for conducting all or a portion of a region’s aviation planning. Among the larger MPOs responding to a question about their involvement in aviation planning—41 of the 42 planning organizations serving areas with populations greater than 1 million—17 engaged in aviation planning activities, accounting for 41 percent of these MPOs. Ten MPOs indicated that they were required by state law to engage in regional aviation planning, 2 of which served areas with populations over 1 million. (See table 7.) There are three commercial service airports operated by separate sponsors in the Boston region. Boston Logan International is a large-hub airport, and in 2008, 73 percent of flights to this facility arrived on time. A medium-hub airport, T.F. Green, near Providence, Rhode Island, and a small-hub airport, Manchester-Boston Regional in Manchester, New Hampshire, also provide commercial service to the region’s residents. FAA’s FACT 2 report did not forecast that any of the airports in the Boston region would become significantly capacity constrained by 2025, assuming planned improvements occur at Boston Logan and T.F. Green. FAA officials in New England have taken an active role in trying to assist the region’s airports in planning for future capacity needs. Officials at Massport, which operates Boston Logan, told us that they realized that the airport would not be able to meet the region’s capacity needs. After an attempt to develop a second major airport in Massachusetts failed, they worked with FAA and other airports in the region to decentralize the region’s air traffic. This gave Boston Logan the opportunity to specialize in international and long-haul routes rather than short-haul trips. Prior to the arrival of Southwest Airlines, regional demand studies demonstrated that there were markets that could be served from Boston’s alternate airports. Southwest Airlines officials told us that the demand forecasts piqued their interest in the alternate airports in the region, and that the airline has been pleased with how customers responded to its entry into Boston’s alternate airports. Prior to the emergence of T.F. Green and Manchester-Boston Regional, many residents drove from areas near these airports to travel from Boston Logan. Expanded service options have allowed some residents of the region to be served closer to where their trips originate. Los Angeles World Airports operates two commercial-service airports in the Los Angeles region: LAX is a large-hub airport, and Ontario International is a medium-hub airport. In 2008, 77 percent of flights to LAX arrived on time. There are two other medium-hub airports in the region operated by separate sponsors—John Wayne Airport in Orange County and Bob Hope Airport in Burbank. There is also a small-hub airport in Long Beach and a nonhub airport in Van Nuys, which is owned and operated by Los Angeles World Airports. FACT 2 predicted that both John Wayne and Long Beach airports will become significantly capacity constrained by 2015. The capacity challenges faced by the Los Angeles region are compounded by flight and operations restrictions at several airports in the region. The airports in Orange County and Long Beach have legal agreements or requirements that limit their ability to increase traffic levels and thereby relieve regional congestion.
Likewise, the sponsor of Bob Hope Airport has entered into a voluntary agreement that prevents the development of new gates or the expansion of the footprint of the terminal until 2012, according to airport officials. LAX, for its part, has also agreed to a limit on the number of annual passengers at its facility under a settlement agreement with the surrounding community, according to regional planners. Los Angeles World Airports officials told us that while they previously attempted to promote the development of alternate facilities, such as LA/Palmdale Regional, the focus of their agency has shifted back to LAX, given the recent downturn and the backlog of maintenance at this facility. Several of the airports in the region are also proposed to serve as high-speed rail stops, including Ontario International and LA/Palmdale Regional. Such ground access improvements may help these airports play a greater role in delivering capacity for the region in the future. The Port Authority of New York and New Jersey (Port Authority) operates Newark Liberty International (Newark), John F. Kennedy International (JFK), and LaGuardia. These large-hub airports are consistently among the most delayed in the nation. In 2008, 62 to 68 percent of the flights to these facilities arrived on time (i.e., within 15 minutes of their scheduled arrival time). Stewart International, an airport about 1 1/2 hours from the city by car, was recently acquired by the Port Authority and is a small-hub airport. Long Island MacArthur Airport in Ronkonkoma is a small-hub airport that operates outside of the Port Authority system. FAA’s FACT 2 report found that LaGuardia and Newark were already significantly capacity constrained in 2007, and that JFK would become so in 2025. The Port Authority is an intermodal organization that is exempt from some of the revenue-sharing prohibitions affecting other regions. Airports in the Port Authority system are part of a larger portfolio of transportation assets operated by the Port Authority, such as major bridges and tunnels. According to the Port Authority, because it was grandfathered under federal law prohibiting the use of airport revenues off airport property, the Port Authority is able to cross-subsidize transportation modes. The airports in the Port Authority’s system provide some of the revenue for other modes that operate at a loss, according to Port Authority officials. The region recently completed a regional air service demand study, and Port Authority officials told us that the forecasts developed for the study were essential for demonstrating the benefits of acquiring the lease for Stewart International. Port Authority officials told us that while they expected the facility to generate revenue eventually, it is now operating at a loss. At the request of FAA, the Port Authority is presently preparing updates to the airport layout plans for airports in its system. FAA officials told us that the last airport master plans the Port Authority prepared date back to 1970. According to Port Authority officials, planning for the airports happens in an ad hoc fashion, given intermodal competition within the agency. The local MPO, the New York Metropolitan Transportation Commission, does not play a role in regional airport planning beyond surface access. A nonprofit, the Regional Plan Association, has recently begun regional airport planning with Port Authority financing, which will focus on the airports under Port Authority sponsorship.
Ground access is a significant consideration for the future development of Stewart International, and the Port Authority is cosponsoring a rail study with the New York Metropolitan Transportation Authority to evaluate access improvements to the airport. There is one large-hub airport in the Philadelphia region—Philadelphia International—and one small-hub airport—Atlantic City International—to the southeast in New Jersey. In 2008, 73 percent of flights to Philadelphia International arrived on time. Philadelphia International is owned by the City of Philadelphia, while Atlantic City International is jointly owned by the South Jersey Transportation Authority and FAA. FACT 2 forecast that Philadelphia International would become significantly capacity constrained by 2015. Philadelphia International is presently pursuing a capital enhancement project to add an additional runway and expand another. The project is contentious, particularly with residents of Tinicum Township and Delaware County, where environmental impacts, including emissions and noise, might increase. Atlantic City International provides some residents of the region with an alternative to the more congested Philadelphia International. The local MPO, the Delaware Valley Regional Planning Commission, is active in regional airport planning, focusing in recent years on planning for general aviation airports. MPO officials expressed an interest in continuing regional airport planning as well as undertaking a regional demand study similar to the ones completed in the Boston and New York regions. The San Diego region has one large-hub airport, San Diego International. In 2008, 78 percent of flights to this airport arrived on time. FACT 2 forecast that San Diego International would be significantly capacity constrained by 2025. The primary airport in San Diego is run by the San Diego County Regional Airport Authority, which was previously involved in a major site-selection effort to build a new airport for the region. This effort was rejected by voters in 2006, however, and airport officials are now planning under the assumption that San Diego International will be the only major airport in the region. With this in mind, the airport sponsor is considering how it could maximize San Diego International’s capacity within its existing footprint. In addition, a state law passed in 2007 mandates that the airport authority prepare a RASP for the region by June 30, 2011. While the airport authority is working on the airside components of the study, the MPO is working on a multimodal transportation plan. The San Francisco Bay Area has three major airports with different sponsors. San Francisco International (SFO) is a large-hub airport, and in 2008, 69 percent of flights arrived on time. Both Oakland International and Norman Y. Mineta in San Jose are medium-hub airports. FACT 2 forecast that both SFO and Oakland International will be significantly capacity constrained by 2025. SFO and Oakland International are located on land adjacent to San Francisco Bay and face significant obstacles to the construction of new runways as a result. The Regional Airport Planning Committee, which includes the Metropolitan Transportation Commission—the region’s MPO—will play a significant role in identifying potential alternate solutions for the region, and is currently leading efforts to develop a new RASP. This effort is being funded by FAA, the MPO, and airports in the region.
SFO officials told us that they have committed themselves to studying nonconstruction ways to relieve congestion, and that they are not averse to having domestic, short-haul traffic shift to Oakland International or Norman Y. Mineta in San Jose or to instituting demand management strategies such as peak pricing to relieve congestion. SFO officials also stated that they are considering improvements that may come from NextGen and other technological improvements. In addition to the contact named above, Paul Aussendorf (Assistant Director), Amy Abramowitz, Lauren Calhoun, Delwen Jones, Paul Kazemersky, Molly Laster, Monica McCallum, Sara Ann Moessbauer, and Josh Ormond made key contributions to this report.

The Federal Aviation Administration (FAA) predicts that the national airspace system will become increasingly congested over time, imposing costs of delay on passengers and regions. While transforming the current air-traffic control system to the Next Generation Air Transportation System (NextGen) may provide additional en route capacity, many airports will still face constraints at their runways and terminals. In light of these forecasts, the Government Accountability Office (GAO) was asked to evaluate regional airport planning in metropolitan regions with congested airports. GAO (1) identified which airports are currently or will be significantly congested and the potential benefits of regional airport planning, (2) assessed how regions with congested airports use regional airport planning in decision making, and (3) identified factors that hinder or aid in the development and implementation of regional airport plans. GAO reviewed studies; interviewed FAA, airport, and other aviation and transportation officials; and conducted case studies in selected regions. A number of airports are or will be significantly capacity constrained and thus congested within the next 16 years. However, many of them face environmental and other obstacles to developing additional airport capacity. In 2007, FAA identified 14 airports (in 10 metropolitan regions) that will be significantly capacity constrained by 2025, even assuming all currently planned improvements occur (see figure). Planned improvements include airport construction projects and implementation of NextGen technologies. Without these improvements, FAA predicts that 27 airports will be congested. According to the FAA assessment and other studies, regional airport planning may identify additional solutions, such as the increased use of alternate airports or other modes of travel, to help relieve airport congestion. From 1999 through 2008, 9 of the 10 metropolitan regions with airports forecast to be significantly capacity constrained by 2025 have received a total of $20 million in FAA funding for regional airport planning. Of those regions, 6 have developed or will develop regional airport system plans (RASP), which we found largely followed FAA's guidance for airport system planning. The remaining 4 regions have engaged in less comprehensive planning. FAA does not formally review RASPs, and they have been used selectively by FAA and airports in decision making for the planning and funding of individual airport projects. A few airport sponsors have pursued select strategies outlined in plans, while one airport sponsor rejected the RASP for its decision making. Because regional airport planning is advisory, competing interests can derail development and implementation.
Metropolitan planning organizations generally develop RASPs but have no authority over airport development. That authority rests with airports, which are not required to incorporate planning recommendations into their capital plans, and with FAA, which makes funding decisions on the basis of national priorities. In addition, airport, community, and airline interests may conflict in a region. For example, Philadelphia International does not support planning efforts that may divert traffic from its airport to alternate regional airports. By contrast, aligned interests and FAA involvement may aid regional planning and implementation, as has occurred in the Boston region. |
In passing the Federal Mine Safety and Health Act of 1977 (the “Mine Act”), Congress gave much of the responsibility for ensuring the safety and health of mine workers to MSHA. Under the stringent requirements of the Mine Act, MSHA must protect the health and safety of miners by thoroughly inspecting each underground coal mine at least four times a year, citing mine operators for violations of the Mine Act, ensuring that hazards are quickly corrected, restricting operations or closing mines for more serious violations, and investigating serious mine accidents. In addition, MSHA must approve the initial plans that mine operators prepare for essential systems that protect mine workers—such as ventilation and roof support systems—and revisions to the plans. To carry out these responsibilities, in 2003, MSHA had approximately 350 inspectors and 210 specialists in 11 district offices. At the end of 2002, the United States had approximately 2,050 coal mines—about 700 underground coal mines and 1,350 surface mines. From 1993 to 2002, the number of underground and surface coal mines in the United States declined and the number of mine workers decreased. Despite this decrease in the number of mines and miners, production remained constant because of the increased use of mechanized mining equipment and more efficient mining techniques. In addition, over the past several decades, coal production has shifted from primarily underground mines to large surface mines, including mines in Wyoming and other areas west of the Mississippi that produce millions of tons of coal annually. Underground coal mines are more dangerous than surface mines for several reasons. One critical factor that contributes to the hazardous working conditions is highly explosive methane gas, which is often produced in large quantities when coal is extracted from underground mines. Additional factors are the geological conditions in many areas of the country that make the roofs of mines unstable, the danger posed by fire in an underground mine, coal and silica dust that can cause silicosis and pneumoconiosis (black lung disease), and the close proximity of unknown areas of abandoned mines, which can lead to flooding of the mine. As shown in figure 1, for the 10-year period from 1993 to 2002, fatality rates for underground coal mines were much higher than those for surface mines. MSHA had extensive procedures and highly qualified staff for approving two of the three types of plans we reviewed—ventilation and roof support plans—and most of these plans were reviewed and approved on a timely basis. However, MSHA headquarters did not adequately monitor completion of required inspections of the ventilation and roof support plans; data maintained by the district offices indicate that some districts were not completing these inspections as required. In addition, MSHA headquarters had not provided clear guidance to the districts on coordinating inspections related to mine plans with quarterly inspections of underground coal mines in order to avoid duplication of effort by district staff. Finally, staffing shortages prevented MSHA from reviewing and approving, on a timely basis, plans for containing debris produced by the mines. MSHA had extensive procedures for approving ventilation and roof support plans.
Mine operators were required to submit their initial ventilation and roof support plans to the MSHA district in which the mine was located for approval prior to operating a mine and were required to submit revised plans to the district for approval at least every 6 months. The district managers were ultimately responsible for approving ventilation and roof support plans submitted to their districts. Generally, districts were required to approve ventilation and roof support plans within 45 days of receipt unless problems were found that had to be resolved. In some of the districts we visited, state mine agencies were also required to approve the mine plans. We reviewed this information for a 5-year period, 1998 to 2002, and found that most districts approved these plans on a timely basis. However, MSHA headquarters did not adequately monitor completion of required inspections of ventilation and roof support plans by the district offices. Districts were required to conduct inspections at least once every 6 months of the ventilation and roof support plans in order to ensure that mine operators were following the requirements of the plans and that they were updating the plans to reflect changes in the ventilation and roof support systems. The specialists who reviewed the mine plans during the approval process also conducted many of these inspections. Our analysis of the information submitted by the district offices to MSHA headquarters on the completion of these inspections for the 5-year period from 1998 to 2002 indicated that several districts had not completed the inspections as required. As a result of districts not completing these inspections, some mines may have been operating without adequate ventilation or roof support plans. Inspections of the mines’ ventilation and roof support plans are essential in ensuring adequate airflow and controlling the accumulation of dust particles in underground coal mines as well as ensuring that the roofs are adequately supported. Inadequate ventilation systems or roof support systems can directly affect the safety and health of mine workers. For example, our review of MSHA’s data on fatalities at underground coal mines from 1998 to 2002 showed that problems related to ventilation and roof support systems accounted for high proportions of fatalities in underground coal mines. For this 5-year period, roof falls accounted for the largest percentage of all fatalities—34 percent—while ignitions or explosions from excessive gas or coal dust accounted for the third largest percentage—14 percent. In addition, MSHA did not always effectively coordinate its inspections of mine plans with the comprehensive quarterly inspections of underground coal mines in order to avoid duplication of effort by district staff. In two of the five districts we visited, we found that, in some instances, the specialists who conduct the inspections of mine plans and the inspectors who conduct quarterly inspections were duplicating each other’s work, resulting in an inefficient use of MSHA’s resources. MSHA is also responsible for approving plans for containing mine debris, called impoundment plans. As of 2003, MSHA had responsibility for approximately 600 coal impoundments. Many of these plans are extremely complex and require highly qualified engineers who are familiar with technical areas such as dam-building techniques, hydrology, and soil conditions.
Failure of an impoundment can be devastating to nearby communities, which may be flooded with water and sludge, and to the environment, affecting streams and water supplies for years afterwards. Because of the potential for failure, such as the impoundment dam failure in 1972 in Buffalo Creek, West Virginia, in which 125 people were killed and 500 homes were destroyed, MSHA is extremely careful about approving impoundment plans. At the time of our 2003 report, MSHA had conducted two reviews of its procedures for approving impoundment plans and had begun to take steps to improve the process. One review identified several weaknesses in the procedures, including the need for the agency to develop guidance for determining which impoundment plans should receive expedited review as well as to evaluate the staffing levels needed to ensure timely and complete review of the plans. MSHA officials acknowledged that the delays in the review and approval of impoundment plans had been a problem for a number of years. They also told us that they had taken a number of steps to alleviate these delays, such as hiring additional engineers to review impoundment plans and provide assistance to staff in its district offices. MSHA’s procedures for conducting inspections of underground coal mines were comprehensive; its inspectors were highly qualified; and it conducted almost all quarterly inspections as required, but the agency’s inspection process could be improved in a number of ways. Although MSHA had extensive inspection procedures, some of them were unclear, while others were difficult to locate because they were contained in so many different sources. In addition, MSHA conducted over 96 percent of required quarterly inspections each year over the 10-year period from 1993 to 2002, but MSHA headquarters did not provide adequate oversight to ensure that its district offices followed through to make sure that unsafe conditions identified during inspections were corrected by the deadlines set by inspectors. And, although MSHA had highly qualified inspectors, as of 2003, it had no plan for addressing the fact that a large percentage of them (44 percent) would be eligible to retire within 5 years. Finally, MSHA did not collect all of the information it needed to assess the effectiveness of its enforcement efforts because it did not collect data on contractor staff who work at each mine. Although MSHA had extensive inspection procedures, we found that some of them were unclear and were located in so many different sources that they could be difficult to find. Some procedures did not clearly specify the criteria inspectors should use in citing violations. For example, several district officials in two of the districts we visited told us that the lack of specific criteria for floating coal dust made it difficult to determine what was an allowable level. As a result, mine inspectors had to rely on their own experience and personal opinion to determine if the accumulation of floating coal dust was a safety hazard that constituted a violation. In some instances, according to the inspectors and district managers, this led to inconsistencies in inspectors’ interpretations of the procedures; some inspectors cited violations for levels of floating coal dust that other inspectors did not cite.
In addition, the inspection procedures were located in so many different handbooks, manuals, policy bulletins, policy letters, and memorandums that it could be difficult for inspectors to make sure that they were using the most recent guidance and procedures. MSHA headquarters officials told us that they were working to clarify the agency’s procedures and consolidate the number of sources in which they were located. MSHA’s data on its quarterly inspection completion rates indicated that, from fiscal year 1993 to 2002, its district offices completed over 96 percent of these inspections as required. However, MSHA headquarters did not monitor district office performance to ensure that inspectors followed up with mine operators to determine that unsafe conditions identified during these inspections were corrected. The deadlines that inspectors set for mine operators to correct safety and health hazards varied based on a number of factors, including the degree of danger to miners affected by the violation. They ranged from 15 minutes from the time the inspector wrote the citation to 27 days afterward. MSHA’s procedures required inspectors to follow up with mine operators within the deadline set or to extend the deadline. Inspectors could extend the deadlines under certain circumstances, such as when a mine had temporarily shut down its operations or when a mine operator was unable to obtain a part needed to correct a violation cited for a piece of equipment. Our analysis of MSHA’s data for the 10-year period from 1993 to 2002 showed that, for almost half of the 536,966 citations for which a deadline was established, inspectors did not follow up in a timely manner to make sure mine operators had corrected the hazards. However, as shown in figure 2, of the citations for which the inspectors did not follow up on a timely basis, they followed up on most within 4 days of the deadline and, for all but 11 percent of the citations, they followed up within 14 days. The more serious type of violations—“significant and substantial” violations—accounted for a significant proportion of the citations for which inspectors did not follow up by the deadlines. Of the 235,447 significant and substantial violations from 1993 to 2002 for which a deadline was specified, inspectors did not follow up on more than 48 percent of the citations by the deadline. However, inspectors followed up on all but about 10 percent of the citations for significant and substantial violations within 14 days of the deadline. MSHA headquarters and district officials told us that there were many different reasons why inspectors may not have followed up by the deadlines specified in their citations. One reason, according to several district officials, was scheduling conflicts that prevented inspectors from visiting the mine within the specified deadline. In addition, there were circumstances in which inspectors were not able to follow up, such as when a mine operator suspended a mine’s operations. However, in these instances, the inspector should have updated the database to show that the deadline was extended. In addition, although we found that, as of 2003, about 44 percent of MSHA’s highly trained and experienced underground coal mine inspectors would be eligible to retire within 5 years—and the agency’s historical attrition rates indicated that many of them would actually retire—the agency had not developed a plan for replacing these inspectors.
MSHA also had fewer inspector trainees on board than vacancies that would need to be filled when inspectors retired. MSHA headquarters officials told us that it would be difficult for them to quickly hire and train replacements for the inspectors who retired. In addition to the fact that at least 18 months were needed to train each new inspector, it took the agency several months from the date an individual retired to advertise and fill each vacant position. As a result of losing these inspectors, MSHA may find it difficult to complete all quarterly inspections of underground coal mines. MSHA also did not collect all of the information on contractor staff who work in underground coal mines needed to assess the effectiveness of its enforcement activities. Because MSHA does not collect information on injuries to or hours worked by contractor staff who mine coal in each underground coal mine, it cannot calculate accurate fatality or nonfatal injury rates for mines that use contractor staff to mine coal—rates used to evaluate the effectiveness of its enforcement efforts. In addition, MSHA could not track trends in fatal or nonfatal injury rates at specific mines to use to target its enforcement resources. The fact that MSHA did not track the number of contractor staff who worked in each mine was important because the proportion of miners who work for contractors had grown significantly since 1981, when they represented only 5 percent of all mine workers. Our analysis showed that the percentage of underground coal miners who work for contractors increased from 13 percent in 1993 to 18 percent in 2002, and the percentage who incurred nonfatal injuries also increased over this period. MSHA had extensive guidance and thorough procedures for conducting accident investigations, but it did not use these investigations to the fullest extent to improve the future safety of mine workers. Although MSHA had detailed policies and rigorous requirements for how investigations must be conducted and reported, weaknesses in its databases made it difficult for MSHA to track key data on mine hazards and potentially useful indicators of its own performance. We made several recommendations in our report designed to improve MSHA’s operations.
We recommended that the Secretary of Labor direct the Assistant Secretary for Mine Safety and Health to monitor the timeliness of inspections of ventilation and roof control plans to ensure that all inspections are completed by district offices as required; monitor follow-up actions taken by its district offices to ensure that mine operators are correcting hazards identified during inspections on a timely basis; update and consolidate guidance provided to its district offices on plan approval and inspections to eliminate inconsistencies and outdated instructions, including clarifying guidance on coordinating regular quarterly inspections of mines with other inspections; develop a plan for addressing anticipated shortages in the number of qualified inspectors due to upcoming retirements, including considering options such as streamlining the agency’s hiring process and offering retention allowances; amend the guidance provided to independent contractors engaged in high-hazard activities to require them to report information on the number of hours worked by their staff at specific mines so that MSHA can use this information to compute the injury and fatality rates used to measure the effectiveness of its enforcement efforts; and revise the systems MSHA uses to collect information on accidents and investigations to provide better data on accidents and make it easier to link injuries, accidents, and investigations. MSHA did not comment on the recommendations in its written response to the report and disagreed with some of our findings. However, MSHA later agreed to implement all of the recommendations and provided us with information on how it had implemented or was in the process of implementing them. We are pleased that MSHA has taken action to implement these recommendations but note that we have not examined the effectiveness of the agency’s actions or the extent to which these actions have addressed the issues we reported in 2003. For further information, please contact Robert E. Robertson at (202) 512-7215. Individuals making key contributions to this testimony include Revae Moran and Karen Brown. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Chairman, Subcommittee on Labor, HHS and Education, Senate Committee on Appropriations, asked GAO to submit a statement for the record highlighting findings from our 2003 report on how well the Department of Labor's Mine Safety and Health Administration (MSHA) oversees its process for reviewing and approving critical types of mine plans and the extent to which MSHA's inspections and accident investigations processes help ensure the safety and health of underground coal miners. As of 2003, to help ensure the safety and health of underground coal miners, MSHA staff reviewed and approved mine plans, conducted inspections, and investigated serious accidents. In these three areas, MSHA had extensive procedures and qualified staff. However, we concluded that MSHA could improve its oversight, guidance, and human-capital-planning efforts. We found that MSHA was not effectively monitoring a few key areas.
MSHA headquarters did not ensure that 6-month inspections of ventilation and roof support plans were being completed on a timely basis. This failure could have led to mines operating without up-to-date plans or mine operators not following all requirements of the plans. Additionally, MSHA officials did not always ensure that hazards found during inspections were corrected promptly. Gaps were found in the information that MSHA used to monitor fatal and nonfatal injuries, limiting trend analysis and agency oversight. Specifically, the agency did not collect the information on hours worked by independent contractor staff needed to compute fatality and nonfatal injury rates for specific mines, and it was difficult to link information on accidents at underground coal mines with MSHA's investigations. We also concluded that the guidance provided by MSHA management to agency employees could be strengthened. Some inspection procedures were unclear and were contained in many sources, leading to differing interpretations by mine inspectors. The guidance on coordinating inspections conducted by specialists and regular inspectors was also unclear, resulting in some duplication of effort. Finally, as of 2003, although about 44 percent of MSHA's underground coal mine inspectors were going to be eligible to retire within 5 years, the agency had no plan for replacing them or using other human capital flexibilities available to retain its highly qualified and trained inspectors.
CBP has divided geographic responsibility for the southwest border among nine Border Patrol sectors, as shown in figure 1 (see app. II for general information about Border Patrol sectors). Each sector has a varying number of stations, with agents responsible for patrolling within defined geographic areas. Within these areas, Border Patrol has reported that its primary mission is to prevent terrorists and weapons of terrorism from entering the United States and also to detect, interdict, and apprehend those who attempt to illegally enter or smuggle any person or contraband across the nation’s borders. Each Border Patrol sector is further divided into stations. For example, the Tucson sector has divided geographic responsibility across eight stations, seven of which have responsibility for miles of land directly on the U.S.-Mexico border. Within the station areas, Border Patrol refers to “border zones”—those having international border miles—and “interior zones”—those without international border miles. According to Border Patrol officials, zones allow sectors to more effectively analyze border conditions, including terrain, when planning how to deploy agents. Zone dimensions are largely determined by geography and topographical features, and zone size can vary significantly. See figure 2 for Tucson sector station and zone boundaries (see app. III for general information about the Tucson sector stations). Border Patrol collects and analyzes various data on the number and types of entrants who illegally cross the southwest border between the land border POEs, including collecting estimates on the total number of identified—or “known”—illegal entries. Border Patrol collects these data—composed of the total number of apprehensions, turn backs, and got aways—as an indicator of the potential border threat across locations. Border Patrol reported that for nearly two-thirds of the remaining 1,120 southwest border miles, resources were in place to achieve a high probability of detecting illegal activity, but the ability to respond may be compromised by insufficient resources or inaccessible terrain; while for the remaining border miles, insufficient resources or infrastructure inhibited detection or apprehension of illegal activity. As reported in its Fiscal Year 2010-2012 Annual Performance Report, DHS established an interim performance measure until a new border control goal and measure could be developed. Border Patrol issued its new 2012-2016 Strategic Plan in May 2012, stating that the buildup of its resource base and the operations conducted over the past two decades would enable the Border Patrol to focus on mitigating risk rather than increasing resources to secure the border. In contrast to the 2004 Strategy, which also recognized the importance of rapid mobility, the leveraging of partnerships, and accurate and useful intelligence, the new strategic plan places a greater emphasis on the integration of partner resources into operational planning and enforcement efforts, particularly partners external to DHS. (See app. IV for strategic goals and objectives presented in Border Patrol’s 2004 Strategy and 2012-2016 Strategic Plan.) Border Patrol apprehensions have decreased in the Tucson sector and across the southwest border, and DHS has reported data meeting its goal to secure the land border with a decrease in apprehensions.
The decrease in apprehensions mirrored the decrease in estimated known illegal entries within each southwest border sector. Border Patrol officials attributed the decrease in apprehensions and estimated known illegal entries within southwest border sectors to multiple factors, including changes in the U.S. economy. While changes in apprehension levels provide useful insight on activity levels, other types of data may also inform changes in the status of border security, including changes in the percentage of estimated known illegal entrants who are apprehended, changes in the percentage of apprehended entrants who repeatedly cross the border illegally (recidivist rate), increases in seizures of drugs and other contraband, and increases in apprehensions of aliens from special interest countries (ASIC) that have been determined to be at a potential increased risk of sponsoring terrorism. Since fiscal year 2011, DHS has used changes in the number of apprehensions on the southwest border between POEs as an interim measure for border security as reported in its Annual Performance Report. In fiscal year 2011, DHS reported data meeting its goal to secure the land border with a decrease in apprehensions. These data show that Border Patrol apprehensions within each southwest Border Patrol sector decreased from fiscal years 2006 to 2011, generally mirroring the decrease in estimated known illegal entries within each sector. In the Tucson sector, our analysis of Border Patrol data showed that apprehensions decreased by 68 percent from fiscal years 2006 to 2011, compared with a 69 percent decrease in estimated known illegal entries, as shown in figure 3. (See app. V for additional information.) Border Patrol officials attributed the decrease in apprehensions and estimated known illegal entries within southwest border sectors to multiple factors, including changes in the U.S. economy and successful achievement of its strategic objectives. Border Patrol’s ability to address objectives laid out in the 2004 Strategy was strengthened by increases in personnel and by technology and infrastructure enhancements, according to Border Patrol officials. For example, Tucson sector Border Patrol officials said that the sector increased manpower over the past 5 years through an increase in Border Patrol agents that was augmented by National Guard personnel, and that CBP’s Secure Border Initiative (SBI) provided border fencing and other infrastructure, as well as technology enhancements. Border Patrol officials also attributed decreases in estimated known illegal entries and apprehensions to the deterrence effect of CBP consequence programs—programs intended to deter repeated illegal border crossings by ensuring the most efficient consequence or penalty for individuals who illegally enter the United States. One such multiagency initiative, Streamline, is a criminal prosecution program targeting aliens who illegally enter the United States through designated geographic locations. Border Patrol collects other types of data that are used by sector management to help inform assessment of its efforts to secure the border against the threats of illegal migration, smuggling of drugs and other contraband, and terrorism.
These data show changes in the (1) percentage of estimated known illegal entrants who are apprehended, (2) percentage of estimated known illegal entrants who are apprehended more than once (repeat offenders), (3) number of seizures of drugs and other contraband, and (4) number of apprehensions of persons from countries at an increased risk of sponsoring terrorism. In addition, apprehension and seizure data can be analyzed in terms of where they occurred relative to distance from the border as an indicator of progress in Border Patrol enforcement efforts. Border Patrol officials at sectors we visited, and our review of fiscal years 2010 and 2012 sector operational assessments, indicated that sectors have historically used these types of data to inform tactical deployment of personnel and technology to address cross-border threats; however, the agency has not analyzed these data at the national level to inform strategic decision making, according to Border Patrol headquarters officials. These officials stated that greater use of these data in assessing border security at the national level may occur as the agency transitions to the new strategic plan. The 2004 Strategy recognized that factors in addition to apprehensions can be used to assess changes in Border Patrol’s enforcement efforts to secure the border, including changes in the percentage of estimated known illegal entrants who are apprehended (apprehensions as a percentage of estimated known illegal entrants), and changes in the number and percentage of apprehensions made closer to the border. Border Patrol headquarters officials said that the percentage of estimated known illegal entrants who are apprehended is primarily used to determine the effectiveness of border security operations at the tactical— or zone—level but can also affect strategic decision making. The data are also used to inform overall situational awareness at the border, which directly supports field planning and redeployment of resources. Our analysis of Border Patrol data for the Tucson sector showed little change in the percentage of estimated known illegal entrants who were apprehended by the Border Patrol over the past 5 fiscal years. Specifically, our analysis showed that of the total number of estimated known aliens who illegally crossed the Tucson sector border from Mexico each year, Border Patrol apprehended 62 percent in fiscal year 2006 compared with 64 percent in fiscal year 2011, an increase of about 2 percentage points. Results varied across other southwest border sectors, as shown in appendix V. Over the last fiscal year, however, Border Patrol apprehensions across the southwest border and in the Tucson sector have occurred closer to the border. In the Tucson sector, for example, the percentage of apprehensions occurring more than 20 miles from the border was smaller in fiscal year 2011 than in fiscal year 2010, while a greater percentage of apprehensions in fiscal year 2011 occurred more than 5 to 20 miles from the border, as shown in figure 4. There was little change in the percentage of apprehensions within 1 mile of the border. Similarly, apprehensions across the southwest border have also moved closer to the border over time, with the greatest percentage of apprehensions occurring more than 5 to 20 miles from the border in fiscal year 2011. (See app. VI for additional information.) 
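The distance analysis above sorts apprehensions into bands by miles from the border. The following is a minimal sketch of that binning in Python; the band edges mirror the report's categories (within 1 mile, more than 1 to 5, more than 5 to 20, and more than 20 miles), while the sample distances are hypothetical.

```python
from bisect import bisect_left
from collections import Counter

BANDS = ["within 1 mile", ">1 to 5 miles", ">5 to 20 miles", ">20 miles"]
EDGES = [1.0, 5.0, 20.0]  # inclusive upper edges of the first three bands

def band_shares(distances_in_miles):
    """Share of apprehensions in each distance-from-border band."""
    counts = Counter(bisect_left(EDGES, d) for d in distances_in_miles)
    total = len(distances_in_miles)
    return {BANDS[i]: counts.get(i, 0) / total for i in range(len(BANDS))}

# Hypothetical distances (miles from the border) for a handful of apprehensions.
sample = [0.4, 0.9, 3.2, 7.5, 12.0, 18.6, 26.0, 2.2]
for band, share in band_shares(sample).items():
    print(f"{band}: {share:.0%}")
```

Tracking how these shares shift year over year is how the report's observation that apprehensions are "moving closer to the border" can be expressed quantitatively.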
Of the 13 ranchers we spoke or corresponded with in the Tucson sector, 6 said they would like to see Border Patrol enforce closer to the border to prevent illegal entry and trespass on their properties. Generally, these ranchers indicated that the level of illegal migrants coming across their properties had declined, but said the level of drug smuggling had remained constant. They were most concerned about safety, but cited considerable property damage and concerns that illegal trafficking had affected land values and driven up costs in the ranching industry. Border Patrol officials in the Tucson sector said that some factors precluding greater border presence included terrain that was inaccessible or created a tactical disadvantage, the distance from Border Patrol stations to the border, and access to ranches and lands that were federally protected and environmentally sensitive. Border Patrol officials also said they have taken steps to address factors that prevent closer access to the border, such as establishing forward operating bases—permanent facilities in remote locations near the border—and substations closer to the border, and working with ranchers and the federal government to ensure access to protected lands. The 2004 Strategy stated that the change in the percentage of persons apprehended who have repeatedly crossed the border illegally (referred to as the recidivism rate) is a factor that Border Patrol considers in assessing its ability to deter individuals from attempting to illegally cross the border. Our analysis of Border Patrol apprehension data showed that the recidivism rate declined across the southwest border by about 6 percentage points from fiscal years 2008 to 2011, measured as the percentage of apprehended aliens who had crossed the border illegally at least once in the prior 3 years. Specifically, our analysis showed that the recidivism rate across the overall southwest border was about 42 percent in fiscal year 2008 compared with about 36 percent in fiscal year 2011. The Tucson sector had the third highest recidivism rate across the southwest border in fiscal year 2011, while the highest rate of recidivism occurred in the El Centro sector, as shown in figure 5. According to Border Patrol headquarters officials, the agency has implemented various initiatives designed to address recidivism through increased prosecution of individuals apprehended for crossing the border illegally. The 2004 Strategy identifies the detection, apprehension, and deterrence of smugglers of drugs, humans, and other contraband as a primary objective. Border Patrol headquarters officials said that data regarding seizures of drugs and other contraband are good indicators of the effectiveness of targeted enforcement operations, and are used to identify trends in the smuggling threat and as indicators of overall cross-border illegal activity, in addition to potential gaps in border coverage, risk, and enforcement operations. However, these officials stated that these data are not used as a performance measure for overall border security because, while the agency has a mission to secure the border against the smuggling threat, most smuggling is related to illegal drugs, and drug smuggling is the primary responsibility of other federal agencies, such as the Drug Enforcement Administration and U.S. Immigration and Customs Enforcement, Homeland Security Investigations.
Our analysis of Border Patrol data indicated that across southwest border sectors, seizures of drugs and other contraband increased 83 percent over the past 5 fiscal years, with drug seizures accounting for the vast majority of all contraband seizures. Specifically, the number of drug and contraband seizures increased from 10,321 in fiscal year 2006 to 18,898 in fiscal year 2011. Most seizures of drugs and other contraband occurred in the Tucson sector, with about 28 percent, or 5,299, of the 18,898 southwest border seizures occurring in the sector in fiscal year 2011, as shown in figure 6. Further analysis of these data in the Tucson sector showed that the percentage of drugs and other contraband seized closer to the border—5 miles or less—decreased slightly from fiscal year 2010 to fiscal year 2011. Specifically, the Tucson sector made 42 percent of drug and other contraband seizures within 5 miles of the border in fiscal year 2010, and 38 percent in fiscal year 2011. Across other southwest border sectors, the distance from the border where seizures occurred varied, as shown in figure 7. For example, about 49 percent of the seizures in the El Centro sector occurred within 1 mile of the border in fiscal year 2011 compared with less than 7 percent in the El Paso sector. Border Patrol headquarters officials stated that variances in data across sectors reflect geographical and structural differences among Border Patrol sectors—each sector is characterized by varying topography, unique ingress and egress routes, land access issues, and differing technology and infrastructure deployments, all of which affect how a sector operates and therefore the ability to make seizures at or near the border. The 2004 Strategy identified detecting and preventing terrorists and their weapons from entering the United States between the ports of entry as a primary objective. ASICs are considered to pose a greater potential risk for terrorism than other aliens, and Border Patrol headquarters officials said that they collect data on the number of ASIC apprehensions in accordance with the reporting and documentation procedures outlined in policy and guidance. However, Border Patrol headquarters officials stated that they did not consider changes in the number of ASICs apprehended in their assessment of border security because, until recently, they had been primarily focused on reducing the overall number of illegal entries, and terrorism was addressed by multiple agencies besides the Border Patrol, including the Federal Bureau of Investigation within the Department of Justice. Our analysis of Border Patrol data showed that apprehensions of ASICs across the southwest border increased each fiscal year from 239 in fiscal year 2006 to 399 in fiscal year 2010, but dropped to 253 in fiscal year 2011. The Rio Grande Valley sector had more than half of all ASIC apprehensions across the southwest border in both fiscal years 2010 and 2011, as shown in figure 8. Further analysis of these data showed differences in progress to apprehend ASICs closer to the border in support of Border Patrol’s overall intention to prevent potential terrorist threats from crossing U.S. borders. For example, the Rio Grande Valley sector nearly doubled the percentage of ASICs apprehended within 1 mile of the border from the preceding fiscal year, from 26 percent in fiscal year 2010 to 48 percent in fiscal year 2011.
In contrast, ASIC apprehensions within 1 mile of the border in the Tucson sector decreased from 26 percent in fiscal year 2010 to 8 percent in fiscal year 2011. Across the southwest border, the greatest percentage of ASICs was apprehended more than 20 miles from the border in fiscal year 2011, as shown in figure 9. Border Patrol headquarters officials said they are transitioning to a new methodology to identify the potential terrorist risk in fiscal year 2013. This new methodology will replace the use of a country-specific list with a range of other factors to identify persons posing an increased risk for terrorism when processing deportable aliens. The Tucson sector scheduled a higher percentage of agent workdays to enforcement activities related to patrolling the border than other southwest border sectors in fiscal year 2011. However, until recently sectors have differed in how they collect and report data that Border Patrol used to assess its overall effectiveness in using resources to secure the border, precluding comparison across sectors. In September 2012, Border Patrol issued new guidance on standardizing data collection and reporting practices that could increase data reliability and allow comparison across locations. Border Patrol’s 2004 Strategy called for increasing resources and deploying them using an approach that provided for several layers of Border Patrol agents at the immediate border and in other areas 100 miles or more away from the border (referred to as defense in depth). According to CBP officials, as resources increased, Border Patrol sought to move enforcement closer to the border over time to better position the agency to ensure the arrest of those trying to enter the country illegally. Headquarters and field officials said station supervisors determine (1) whether to deploy agents in border zones or interior zones, and (2) the types of enforcement or nonenforcement activities agents are to perform. Border Patrol officials from the five sectors we visited stated that they used similar factors in making deployment decisions, such as intelligence showing the presence of threat across locations, the nature of the threat, and environmental factors including terrain and weather. Our analysis of Border Patrol data showed differences across sectors in the percentage of agent workdays scheduled for border zones and interior zones in fiscal year 2011. Specifically, our analysis showed that while the Tucson sector scheduled 43 percent of agent workdays to border zones in fiscal year 2011, agent workdays scheduled for border zones by other southwest border sectors ranged from 26 percent in the Yuma sector to 53 percent in the El Centro sector, as shown in figure 10. Border Patrol officials attributed the variation in border zone deployment to differences in geographical factors among the southwest border sectors—such as varying topography, ingress and egress routes, and land access issues—and structural factors, such as technology and infrastructure deployments, and stated that these factors affect how sectors operate and may preclude closer deployment to the border. Additionally, many southwest border sectors have interior stations that are responsible for operations at some distance from the border, such as at interior checkpoints generally located 25 miles or more from the border, which could also affect their percentage of agent workdays scheduled for border zones.
Southwest border sectors scheduled most agent workdays for enforcement activities during fiscal years 2006 to 2011, and the activity related to patrolling the border accounted for a greater proportion of enforcement activity workdays than any of the other activities. Sectors schedule agent workdays across various activities categorized as enforcement or nonenforcement. Across enforcement activities, our analysis of Border Patrol data showed that all sectors scheduled more agent workdays for “patrolling the border”—activities defined to occur within 25 miles of the border—than any other enforcement activity, as shown in figure 11. Border Patrol duties under this activity include patrolling by vehicle, horse, and bike; patrolling with canines; performing sign cutting; and performing special activities such as mobile search and rescue. Other enforcement activities to which Border Patrol scheduled agent workdays included conducting checkpoint duties, developing intelligence, and performing aircraft operations. (See app. VII for a listing of nonenforcement activities.) Border Patrol sectors and stations track changes in their overall effectiveness as a tool to determine if the appropriate mix and placement of personnel and assets are being deployed and used effectively and efficiently, according to officials from Border Patrol headquarters. Border Patrol calculates an overall effectiveness rate using a formula in which it adds the number of apprehensions and turn backs in a specific sector and divides this total by the total estimated known illegal entries—determined by adding the number of apprehensions, turn backs, and got aways for the sector. Border Patrol sectors and stations report this overall effectiveness rate to headquarters. Border Patrol views its border security efforts as increasing in effectiveness if the number of turn backs as a percentage of estimated known illegal entries has increased and the number of got aways as a percentage of estimated known illegal entries has decreased. Our analysis of Tucson sector apprehension, turn back, and got away data from fiscal years 2006 through 2011 showed that while Tucson sector apprehensions remained fairly constant at about 60 percent of estimated known illegal entries, the percentage of reported turn backs increased from about 5 percent to about 23 percent, while the percentage of reported got aways decreased from about 33 percent to about 13 percent, as shown in figure 12. As a result of these changes in the mix of turn backs and got aways, Border Patrol data showed that the overall effectiveness rate for the Tucson sector improved by 20 percentage points from fiscal year 2006 to fiscal year 2011, from 67 percent to 87 percent. (See app. VIII for additional information.) Border Patrol data showed that the effectiveness rate for eight of the nine sectors on the southwest border improved from fiscal years 2006 through 2011. The exception was the Big Bend sector, which showed a decrease in the overall effectiveness rate, from 86 percent to 68 percent, during this time period. Border Patrol headquarters officials said that differences in how sectors define, collect, and report turn back and got away data used to calculate the overall effectiveness rate preclude comparing performance results across sectors.
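The overall effectiveness rate described above reduces to a simple formula: (apprehensions + turn backs) divided by estimated known illegal entries, where the denominator is the sum of apprehensions, turn backs, and got aways. A minimal sketch follows; the absolute counts are hypothetical, chosen only to match the approximate Tucson sector mix reported for fiscal year 2011.

```python
def overall_effectiveness(apprehensions: int, turn_backs: int, got_aways: int) -> float:
    """Border Patrol's overall effectiveness rate:
    (apprehensions + turn backs) / estimated known illegal entries,
    where estimated known illegal entries =
    apprehensions + turn backs + got aways."""
    known_entries = apprehensions + turn_backs + got_aways
    return (apprehensions + turn_backs) / known_entries

# Hypothetical counts roughly matching the reported fiscal year 2011
# Tucson sector mix: ~64% apprehensions, ~23% turn backs, ~13% got aways
# of estimated known illegal entries.
rate = overall_effectiveness(apprehensions=64_000, turn_backs=23_000, got_aways=13_000)
print(f"Overall effectiveness rate: {rate:.0%}")  # -> 87%
```

Note that under this formula the rate improves when got aways shift into turn backs even if apprehensions are unchanged, which is why the changing mix of turn backs and got aways accounts for the reported 20-percentage-point improvement.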
Border Patrol headquarters officials stated that until recently, each Border Patrol sector decided how it would collect and report turn back and got away data, and as a result, practices for collecting and reporting the data varied across sectors and stations based on differences in agent experience and judgment, resources, and terrain. In terms of defining and reporting turn back data, for example, Border Patrol headquarters officials said that a turn back was to be recorded only if it was perceived to be an “intended entry”—that is, the reporting agent believed the entrant intended to stay in the United States, but Border Patrol activities caused the individual to return to Mexico. According to Border Patrol officials, it can be difficult to tell if an illegal crossing should be recorded as a turn back, and sectors have different procedures for reporting and classifying incidents. In terms of collecting data, Border Patrol officials reported that sectors rely on a different mix of cameras, sign cutting, credible sources, and visual observation to identify and report the number of turn backs and got aways. (See app. IX for additional information.) According to Border Patrol officials, the ability to obtain accurate or consistent data using these identification sources depends on various factors, such as terrain and weather. For example, data on turn backs and got aways may be understated in areas with rugged mountains and steep canyons that can hinder detection of illegal entries. In other cases, data may be overstated—for example, in cases where the same turn back identified by a camera is also identified by sign cutting. Double counting may also occur when agents in one zone record as a got away an individual who is apprehended and then reported as an apprehension in another zone. As a result of these data limitations, Border Patrol headquarters officials said that while they consider turn back and got away data sufficiently reliable to assess each sector’s progress toward border security and to inform sector decisions regarding resource deployment, they do not consider the data sufficiently reliable to compare—or externally report—results across sectors. Border Patrol headquarters officials issued guidance in September 2012 to provide a more consistent, standardized approach for the collection and reporting of turn back and got away data by Border Patrol sectors. Each sector is to be individually responsible for monitoring adherence to the guidance. According to Border Patrol officials, it is expected that once the guidance is implemented, data reliability will improve. This new guidance may allow for comparison of sector performance and inform decisions regarding resource deployment for securing the southwest border. Border Patrol does not yet have the performance goals and measures in place necessary to define border security and determine the resources necessary to achieve it. Border Patrol officials said that they had planned to establish such goals and measures by fiscal year 2012, but these efforts have been delayed and are contingent on developing and implementing key elements of its strategic plan.
Further, Border Patrol is in the process of developing a plan for implementing key elements of the 2012-2016 Strategic Plan that may be used to inform resource needs across locations, and expects to begin developing a process for assessing resource needs and informing deployment decisions across the southwest border once key elements of its strategic plan have been implemented in fiscal years 2013 and 2014. Border Patrol officials stated that the agency is in the process of developing performance goals and measures for assessing the progress of its efforts to secure the border between POEs and for informing the identification and allocation of resources needed to secure the border, but has not identified milestones and time frames for developing and implementing them. Since fiscal year 2011, DHS has used the number of apprehensions on the southwest border between POEs as an interim performance goal and measure for border security as reported in its Annual Performance Report. In February 2011, we testified that DHS intended to use this indicator as an interim performance goal and measure until it completed development of new border control performance goals and measures, which DHS officials expected to be in place by fiscal year 2012. However, as of September 2012, DHS had not yet issued new performance goals and measures for assessing border security or identified revised milestones and time frames for developing and implementing them. We previously testified that the interim goal and measure of number of apprehensions on the southwest border between POEs provides information on activity levels, but it does not inform program results or resource identification and allocation decisions, and therefore until new goals and measures are developed, DHS and Congress could experience reduced oversight and DHS accountability. Further, studies commissioned by CBP have documented that the number of apprehensions bears little relationship to effectiveness because agency officials do not compare these numbers with the amount of cross-border illegal activity. Border Patrol officials stated that DHS and Border Patrol have established a performance goal—linked to relevant measures—addressing border security that, as of October 2012, was being used as an internal management indicator. However, a DHS official said it has not been decided whether this goal and the associated measures will be publicly reported or used as an overall performance goal and measures for border security. Standard practices in program management call for milestones and time frames to be established to ensure results are achieved. These practices call for project planning—such as identifying time frames—to be performed in the early phases of a program and recognize that plans may need to be adjusted along the way in response to unexpected circumstances. Time frames for implementing key elements of the 2012-2016 Strategic Plan can change; however, milestones and time frames for the development of performance goals and measures could help ensure that goals and measures are completed in a timely manner. Moreover, milestones and time frames could better position CBP to monitor progress in developing and implementing goals and measures, which would provide DHS and Congress with information on the results of CBP efforts to secure the border between POEs and the extent to which existing resources and capabilities are appropriate and sufficient.
Border Patrol headquarters officials stated that they were in the process of developing a plan for implementing key elements of the 2012-2016 Strategic Plan that may be used to inform resource needs across locations, and expect to begin developing a process for assessing resource needs and informing deployment decisions across the southwest border once those key elements have been implemented. According to Border Patrol officials, the 2012-2016 Strategic Plan identifies several key elements that are to inform agency resource needs and deployment decisions, and officials reported in September 2012 that they were developing an implementation plan that is to lay out how those key elements are to be implemented, in general during fiscal years 2013 and 2014. According to agency officials, key strategic plan elements to be addressed by the implementation plan that are to inform agency resource needs and deployment decisions include (1) a process for identifying risk that is to inform resource decisions, (2) the enhancement of mobile response capabilities to redeploy resources to address shifts in threat, and (3) an approach to integrate partner resources and contributions to enhance Border Patrol capabilities (a “whole-of-government” approach). Border Patrol officials told us that these elements are interdependent and must be developed, refined, and disseminated to the field to strengthen the effectiveness of the new strategic plan. According to these officials, delays in the development of one element would likely affect the development of others. For example, delays in implementing the new risk assessment tools could affect sectors’ ability to identify appropriate responses to changing levels of risk. Risk assessment tools. In September 2012, Border Patrol officials said they were in the process of developing two tools that are to be used in the field to identify and manage risk under the agency’s new risk management approach. The first tool for assessing risk is the Operational Implementation Plan (OIP), a qualitative process that prioritizes sector evaluations of border security threats and identifies potential responses. Border Patrol is developing a second tool—a quantitative model called the Integrated Mission Analysis Tool (IMAT)—that is to, among other things, assess risk and capability by predicting and identifying the need for various courses of action, such as the rapid response of resources to the highest risks. Actions are to be assessed based on a comparison of agency capability with risk. In contrast to the OIP, the IMAT is to be completed at the zone level by stations; consolidated station outputs may then be used by sectors to inform the OIP process. The IMAT is to use data from various sources to develop a “Border Assessment of Threat” of known or potential threats by zone and compare that assessment with a point-in-time operational assessment of each sector’s capability to determine to what extent current capability—including resources—matches the perceived risk.
On the basis of the outcome, the station can then choose from various predetermined courses of action to address the perceived level of risk, such as reallocating resources or leveraging external—law enforcement partner—resources. Once the IMAT is fully implemented, Border Patrol plans for the resulting outputs to be used to reassess and inform OIP decision making; information from both systems is to be used to inform resource needs and deployment decisions after the 2012-2016 Strategic Plan has been implemented. According to Border Patrol officials, both the OIP and the IMAT are to identify risk and potential responses at the sector level. However, these tools will not allow Border Patrol to assess and prioritize risks and response options across sectors. Moreover, agency officials said that when the IMAT is fully deployed, in fiscal year 2014, it will not have the capacity to differentiate among threats related to terrorists and their weapons, drugs and other illegal contraband, and illegal migration (such as recidivism, in which individuals repeatedly cross the border illegally). Border Patrol officials said the agency plans to explore mechanisms for developing these capabilities—assessing risk across sectors and differentiating threat—once the OIP and the IMAT have been developed and implemented in fiscal year 2014. According to Border Patrol headquarters officials, as of August 2012, the agency was in the process of pilot testing the OIP and the IMAT in the field and expected to begin initial implementation of the OIP and to populate the IMAT through a web-based program that will record baseline data on threat and operational conditions throughout fiscal year 2013. Rapid deployment of resources. A second key element of the 2012-2016 Strategic Plan is to increase mobility and rapid deployment of personnel and resources to quickly counter and interdict threats based on shifts in smuggling routes and tactical intelligence. As we testified in May 2012, CBP reported expanding the training and response capabilities of the Border Patrol’s specialized response teams to support domestic and international intelligence-driven and antiterrorism efforts as well as other special operations. Additionally, Border Patrol officials stated that in fiscal year 2011, Border Patrol allocated 500 agent positions to provide a national group of organized, trained, and equipped Border Patrol agents who are capable of rapid movement to regional and national incidents in support of high-priority CBP missions. However, we testified in May 2012 that Border Patrol officials had not fully assessed to what extent the redeployment of existing resources would be sufficient to meet security needs, or when additional resources would need to be requested. In September 2012, Border Patrol officials said they had not yet developed a process for assessing the need for, or implementation of, rapid deployment of existing resources to mitigate changing risk levels along the border, but expected to do so after programs and processes—key elements—identified in the strategic plan have been more fully developed. In the interim, deployment decisions—such as the redeployment of agents and mobile technology to border areas identified as having greater, or unacceptable, levels of risk—are to be made at the sector level. Integrated partner resources.
A third key element of the 2012-2016 Strategic Plan is the capability of Border Patrol and federal, state, local, and international partners working together to quickly and appropriately respond to changing threats through the timely and effective use of personnel and other resources. According to the new strategic plan, this “whole-of-government” approach will be achieved through various efforts, including the expansion of operational integration (the combining of best practices, capabilities, and strategies among partners) and jointly planned targeted operations (the leveraging of combined partner assets to address risks), the development and fusion of intelligence, and the creation of integrated partnerships (the sharing of resources, plans, and operations among partners). In December 2010, we recommended that CBP develop policy and guidance necessary to identify, assess, and integrate available partner resources in its operational assessments and resource planning documents. CBP concurred with this recommendation, but as of June 2012, Border Patrol had not yet required partner resources to be incorporated into operational assessments or into documents that inform the resource planning process. Border Patrol headquarters officials said that the agency has yet to finalize interim milestones for integrating partner resources into Border Patrol operational assessments and resource planning documents because it is still in the process of determining how partner resources are to be integrated; however, Border Patrol plans to have a process in place for that purpose in fiscal year 2014. According to Border Patrol officials, since the beginning of fiscal year 2011, as the agency began transitioning from the 2004 resource-based strategy to the 2012-2016 risk-based strategic plan, the Border Patrol has been using an interim process for assessing the need for additional personnel, infrastructure, and technology in agency sectors. Border Patrol officials said that resource needs identified using this interim process are intended to maintain the current status of border security, and the process will be used until key elements of the strategic plan—such as the OIP and the IMAT—that are necessary to develop a new process have been implemented in fiscal years 2013 and 2014. Under this interim process, Border Patrol has maintained, with some exceptions, personnel and resource levels established in fiscal year 2010, the last year in which operational control was used as a performance goal and measure for border security. According to Border Patrol officials, under the new risk management approach, the need for additional resources will be determined in terms of unacceptable levels of risk caused by illegal activity across border locations. Moreover, in considering ways to mitigate elevated risk levels, Border Patrol will look to mechanisms other than resource enhancement for expanding capacity, such as the rapid redeployment of resources from locations with lower risk levels and the leveraging of partner resources (i.e., the “whole-of-government” approach). Border Patrol officials said that decisions on resource requests informed by the new risk assessment tools—the OIP and the IMAT—will be made at the sector level.
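To make concrete the kind of risk-to-capability comparison that the OIP and IMAT are described as supporting, the following Python sketch matches a zone-level threat score against a capability score and selects a predetermined course of action. This is purely illustrative: the report does not describe IMAT's actual inputs, scales, thresholds, or courses of action, so every name and number below is our assumption.

```python
from dataclasses import dataclass

@dataclass
class ZoneAssessment:
    zone: str
    threat: float      # assessed threat level on a hypothetical 0-10 scale
    capability: float  # point-in-time operational capability, same scale

def course_of_action(z: ZoneAssessment) -> str:
    """Select a predetermined response where perceived risk exceeds capability.
    Thresholds and responses are illustrative assumptions, not agency doctrine."""
    gap = z.threat - z.capability
    if gap <= 0:
        return "maintain current deployment"
    if gap <= 2:
        return "redeploy resources from lower-risk zones"
    return "redeploy and leverage partner (whole-of-government) resources"

zones = [
    ZoneAssessment("Zone A", threat=8.5, capability=5.0),
    ZoneAssessment("Zone B", threat=4.0, capability=6.5),
    ZoneAssessment("Zone C", threat=6.0, capability=5.5),
]
for z in zones:
    print(f"{z.zone}: {course_of_action(z)}")
```

In this toy version, only the size of the gap between threat and capability drives the response, mirroring the report's description of comparing a "Border Assessment of Threat" with a point-in-time capability assessment at the zone level.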
Until a new process for identifying resource needs has been developed, sectors will continue to use annual operational assessments to reflect specific objectives and measures for accomplishing annual sector priorities, as well as identifying minimum budgetary requirements necessary to maintain the current status of border security in each sector. Border Patrol headquarters officials said that the resource levels established at the end of fiscal year 2012 are to serve as a baseline against which future needs are assessed, and that the personnel and infrastructure in place across the southwest border by the end of fiscal year 2012 should be sufficient to support the agency’s transition to a risk-based strategy for securing the border. Key elements—such as the OIP and the IMAT—of the strategic plan are necessary to evaluate the need for resources; until these elements are in place, Border Patrol sectors are to continue to request resources they have identified as necessary to maintain the current status of border security. However, our review of Border Patrol’s fiscal year 2012 operational assessments showed that sectors have continued to raise concerns about resource availability. For example, all nine southwest border sectors reported a need for new or replacement technology to detect and track illegal activity, six southwest border sectors reported a need for additional infrastructure (such as all-weather roads), and eight southwest border sectors reported a need for additional agents to maintain or attain an acceptable level of border security. Border Patrol officials stated that at the time these operational assessments were developed—in fiscal year 2011—the agency had yet to transition to the new risk management approach under the 2012-2016 Strategic Plan and sectors were continuing to assess resource needs according to the 2004 resource-based model. According to these officials, Border Patrol has determined that because of budget constraints, fiscal year 2013 resource levels for most of the southwest border will remain constant, with the exception of the Tucson and Rio Grande Valley sectors. Border Patrol officials stated that the agency recognizes the need to develop a new process for assessing resource needs under the new risk management focus of the 2012-2016 Strategic Plan and that this process will be different from the prior system, which focused on increasing resources and activities at the border rather than using existing resources to manage risk. As Border Patrol is in the initial stages of developing and implementing the key elements of its 2012-2016 Strategic Plan, it is too early to assess how Border Patrol will identify the level of resources needed to secure the border under the new plan. Securing the nation’s borders against the evolving threat of terrorism and transnational crime is essential to the protection of the nation. Recognizing the importance of establishing secure national borders, DHS has dramatically increased resources and activities at the southwest border over the past several years to deter illegal border crossings and secure the border. With increased levels of resources and activities now in place, Border Patrol intends to transition from a resource-based approach to securing the nation’s borders to a risk management approach that seeks to leverage existing resources to manage risk.
Given the nation’s ongoing need to identify and balance competing demands for limited resources, linking necessary resource levels to desired outcomes is critical to informed decision making. Accordingly, milestones and time frames—established as soon as possible—for the development of performance goals that define the levels of security—or risk—to be achieved at the border could help ensure that goals are developed in a timely manner. The establishment of such goals could help guide future border investment and resource decisions. Similarly, milestones and time frames for developing and implementing performance measures under the new strategic plan that are linked to the Border Patrol’s goal for securing the border could better ensure accountability and oversight of the agency’s programs by better positioning it to show progress in completing its efforts. Once established, border security performance goals and measures would also support Border Patrol’s efforts to assess whether the key elements—programs and processes—of its new strategic plan have brought the agency closer to its strategic goal of securing the border. To support the implementation of Border Patrol’s 2012-2016 Strategic Plan and identify the resources needed to achieve the nation’s strategic goal for securing the border, we recommend that the Commissioner of Customs and Border Protection ensure that the Chief of the Office of Border Patrol establish milestones and time frames for developing a performance goal, or goals, for border security between the POEs that defines how border security is to be measured and a performance measure, or measures—linked to a performance goal or goals—for assessing progress made in securing the border between POEs and informing resource identification and allocation efforts. We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reproduced in full in appendix X, and technical comments, which we incorporated as appropriate. DHS concurred with our recommendations for the agency to establish milestones and time frames for developing performance goals and measures for border security between the POEs, and stated that it plans to establish such milestones and time frames by November 30, 2013. Establishing these milestones and time frames would meet the intent of our recommendations, but doing so as soon as possible, as we reported, would better position CBP to monitor progress in developing and implementing goals and measures, which would provide DHS and Congress with information on the results of CBP efforts to secure the border between POEs and the extent to which existing resources and capabilities are appropriate and sufficient. Further, DHS indicated that Border Patrol cannot unilaterally develop a performance goal for border security and define how it is to be measured, but can develop performance goals that will likely become key components of an overarching goal for border security. Since our recommendations were directed at Border Patrol establishing milestones and time frames for developing such goals and measures focused on border security between the POEs, we believe that DHS’s proposed actions for Border Patrol in this area would meet the intent of our recommendations, as Border Patrol has primary responsibility for securing the border between POEs.
Such actions would help provide oversight and accountability for border security between the POEs, support the implementation of Border Patrol’s 2012-2016 Strategic Plan, and help identify the resources needed to achieve the goal for securing the border. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security and interested congressional committees, as appropriate. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XI. The report addresses the following three questions: What do data show about apprehensions across the southwest border, and in the Tucson sector in particular, and what other types of data, if any, does Border Patrol collect that inform changes in the status of border security? How does the Tucson sector schedule agent deployment compared with deployment in other southwest border sectors and to what extent do the data show these deployments have been effective in securing the border? To what extent has Border Patrol developed mechanisms to identify resources needed to secure the border under its new strategic plan? In conducting our work, we gathered information and interviewed officials from the Department of Homeland Security’s (DHS) U.S. Customs and Border Protection (CBP) and the Office of Border Patrol. Specifically, we analyzed agency data related to Border Patrol performance and cross-border threats; policy, planning, and budget documents; sector operational assessments; border security reports; operations manuals; and strategic plans provided by Border Patrol. We interviewed Border Patrol headquarters officials regarding data collection and analysis procedures, strategic planning, operational assessments, and border security programs and activities. We obtained relevant data from DHS and Border Patrol databases for fiscal years 2006 through 2011. We chose this time period because fiscal year 2006 was the first full year for which data were available following Border Patrol’s implementation of its 2004 National Border Patrol Strategy (2004 Strategy). To assess the reliability of these data, we spoke with Border Patrol headquarters officials who oversee the maintenance and analyses of the data and with select sector and station officials regarding guidance and processes for collecting and reporting data in regard to apprehensions of illegal entrants, seizures of drugs and other contraband, and scheduling the deployment of agents tracked in a Border Patrol database. We determined that these data were sufficiently reliable for the purposes of this report. We conducted visits to five of the nine Border Patrol sectors on the southwest border—San Diego sector, California; Yuma sector, Arizona; Tucson sector, Arizona; El Paso sector, Texas; and Rio Grande Valley sector, Texas.
We selected these sectors based on differences in (1) the level of threat as defined by Border Patrol data, (2) agency priorities for resource deployment, (3) the level of operational control achieved in fiscal year 2010, (4) the use of enforcement strategies deemed successful by the Border Patrol in reducing cross-border illegal activity, and (5) varied terrain. Within these sectors we selected 21 Border Patrol stations to visit based on factors such as the level of cross-border illegal activity as defined by Border Patrol data and unique characteristics such as terrain and topography. We visited both “border stations”—those having international border miles—and “interior stations”—those without international border miles. Because Border Patrol officials identified the Tucson sector as the highest-priority sector for resource deployment in fiscal year 2011 and it had the highest level of cross-border illegal activity, we conducted site visits to each of the eight stations. (See table 1 for the Border Patrol sectors and stations we visited and the location of each station relative to the border.) While we cannot generalize the conditions we found at these Border Patrol sectors and stations to all southwest border locations, they provided us with an overall understanding of the range of operating conditions across the southwest border, as well as differences in how sectors and stations assess border security and deploy resources. In each location we observed conditions, including the use of personnel, technology, and infrastructure, and conducted semistructured interviews with Border Patrol sector and station officials. To assess trends in apprehensions, seizures, and other types of data Border Patrol uses to inform changes in the status of border security across the southwest border and in the Tucson sector, we obtained Border Patrol data for fiscal years 2006 through 2011 from DHS and Border Patrol databases—apprehensions and seizure data from the Enforcement Integrated Database (EID) and estimated cross-border illegal activity data from the Border Patrol Enforcement Tracking System (BPETS). Because of the complexity and amount of the data sets we requested, Border Patrol queried apprehension and seizure data in two groups, with different run dates. We analyzed Border Patrol apprehension and seizure data by sector for each fiscal year to obtain an overall view of cross-border illegal activity over time and the types of threats in each sector. In addition, we analyzed apprehension data to identify the number of repeat offenders (recidivism rate) and aliens from special interest countries (ASIC) apprehended across years by sector, as indicators of the extent to which deportable aliens with increased levels of associated risk were apprehended. For fiscal years 2010 and 2011, we also analyzed data showing the location of apprehensions, seizures, and apprehensions of ASICs relative to their distance from the border. We also analyzed data Border Patrol uses to assess estimated known illegal entries (cross-border illegal activity) within each sector. Although estimated known illegal entry data can be compared within a sector over time, these data cannot be compared or combined across sectors as discussed in this report. Because of the complexity and amount of data we requested, Border Patrol provided these data in two queries, with different run dates.
We also interviewed relevant Border Patrol headquarters and field officials regarding the maintenance of these data, and how the agency analyzes the data to inform the status of border security. In addition, we spoke or corresponded with 13 ranchers who operated in the Tucson sector at the time of our review to discuss border security issues. We selected these ranchers based on input from various entities, including Border Patrol and select organizations that are knowledgeable about border security issues. Because this selection of ranchers was a nonprobability sample, the results from our discussions cannot be generalized to other ranchers; however, what we learned from the ranchers we contacted provided a useful perspective on the issues addressed in this report. To determine how the Tucson sector scheduled agent deployment compared with other southwest border sectors and to what extent the data showed these deployments had been effective in securing the border, we analyzed Border Patrol BPETS data regarding the scheduled deployment of agents, by sector, from fiscal years 2006 through 2011. We also analyzed to what extent agents were scheduled for deployment in “border zones”—those having international border miles—and “interior zones”—those without international border miles. Because of the complexity and amount of the data sets we requested, Border Patrol queried deployment data in two groups, with different run dates. We also interviewed Border Patrol headquarters officials in the Planning, Analysis, and Enforcement Systems Branches regarding agency guidance and practices for allocating and deploying resources—personnel, technology, and infrastructure. In addition, we conducted semistructured interviews with Border Patrol sector and station officials regarding the processes used and factors considered when determining the deployment and redeployment of resources. Further, we analyzed data from fiscal years 2006 through 2011 that Border Patrol uses to calculate overall effectiveness within sectors and to determine if the appropriate mix of assets is being deployed and used effectively and efficiently. We also interviewed Border Patrol headquarters and station officials regarding agency practices for collecting and recording these data and how those practices may vary across sectors. As previously discussed, because of potential inconsistencies in how the data are collected, these data cannot be compared across sectors but can be compared within a sector over time as discussed in more detail in this report. In addition, we reviewed Border Patrol guidance issued in September 2012 regarding the collection and reporting of effectiveness data. To assess to what extent Border Patrol has identified mechanisms for assessing resource needs under the 2012-2016 Border Patrol Strategic Plan (2012-2016 Strategic Plan), we analyzed key elements of the strategic plan defined by Border Patrol. To gain a better understanding of Border Patrol’s plans for developing and implementing key elements of the 2012-2016 Strategic Plan, including processes for identifying resource needs and the extent to which officials have identified interim milestones and time frames, we interviewed Border Patrol headquarters officials from the Planning and Analysis Branches, and analyzed relevant documents, such as Border Patrol planning and policy documents. We also reviewed standard practices in program management for documenting the scope of a project, including milestones or time frames for project completion and implementation. (See, for example, The Project Management Institute, The Standard for Program Management© (Newtown Square, Penn., 2006).)
To determine the extent to which Border Patrol sectors and stations had identified the need for additional resources, we interviewed sector and station officials and analyzed southwest border sector operational assessments for fiscal years 2010 and 2012. We analyzed operational assessments for fiscal year 2010 because that was the last fiscal year in which DHS used operational control as a performance goal and measure, and for fiscal year 2012 because it was the most current fiscal year available at the time we conducted our analysis. We conducted this performance audit from June 2011 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Information in this appendix is also presented in figure 1. Table 2 describes, for each of the nine sectors on the southwest border, the (1) number of border miles and size, in square miles; (2) type of terrain; and (3) number and type (border or interior) of stations. Figures 13 through 16 illustrate the types of terrain that can be found in four of the nine sectors. Information in this appendix is also presented in figure 2. Table 3 describes, for each of the eight stations in the Tucson sector, the (1) number of border miles and size, in square miles; (2) type of terrain; and (3) number and type (border or interior) of zones, and their distance from the border. Figures 17 through 23 illustrate the types of terrain that can be found in seven of the eight stations in the Tucson sector. Border Patrol collects and analyzes various data on the number and types of entrants who illegally cross the southwest border between the land border ports of entry, including estimates on the total number of identified—or “known”—illegal entries. Border Patrol’s estimate of known illegal entries includes the number of illegal entrants who were apprehended as well as estimates of the number of entrants who illegally crossed the border but were not apprehended (individuals who either crossed back to Mexico—turn backs—or continued traveling to the U.S. interior and whom Border Patrol ceased pursuing—got aways). These data are collectively referred to as known illegal entries because Border Patrol officials have what they deem to be a reasonable indication that the cross-border activity occurred. Border Patrol uses the estimated known illegal entry data to inform tactical decision making within each of the nine southwest border sectors. Border Patrol apprehensions and estimated known illegal entries decreased significantly across all nine southwest border sectors from fiscal years 2006 through 2011, as shown in figures 24 through 32. Apprehensions decreased by 46 percent or more across all the southwest border sectors. Over this same time period, the number of estimated known illegal entries also decreased by 28 percent or more across all southwest border sectors. Apprehensions as a percentage of estimated known illegal entries increased for six sectors over this time period. Border Patrol’s 2004 Strategy recognized that both the number of apprehensions and the ability to apprehend individuals closer to the border affect border security.
Our analysis of Border Patrol data showed that apprehensions across the southwest border decreased by 69 percent from fiscal year 2006 to fiscal year 2011. Across the southwest border, from fiscal year 2010 to 2011, apprehensions within 5 miles of the border increased slightly, from 54 percent to 55 percent of total apprehensions. Apprehensions that occurred more than 20 miles from the border decreased slightly from fiscal year 2010 to 2011, from 28 percent to 26 percent across the southwest border. See figures 33 and 34 for apprehensions by southwest Border Patrol sector and distances from the border, for fiscal years 2010 and 2011. Border Patrol schedules the deployment of agents to various activities, which are categorized as either enforcement or nonenforcement. In fiscal year 2011 the percentage of agent workdays scheduled for nonenforcement activities varied by southwest border sector, from 19 percent for the Big Bend sector to 34 percent for the Yuma sector. The percentage of nonenforcement agent workdays scheduled to individual activities in fiscal year 2011 varied across sectors, as shown in figure 35, with “administration” accounting for a greater proportion of agent workdays than any other nonenforcement activity across all southwest border sectors. Border Patrol officials stated that examples of administrative activities include remote-video surveillance, public and congressional affairs duties, asset forfeiture duties, and employee support duties. Agent workdays scheduled to administration ranged from about 39 percent of all nonenforcement agent workdays in the Rio Grande Valley sector to almost 65 percent in the Laredo sector. Within the Tucson sector—our focus sector—training, intelligence support, and agent nonenforcement duties (defined to include duties such as brush removal; facility, fence, and vehicle maintenance; and video surveillance system operations) each accounted for a greater proportion of agent workdays than any other nonenforcement activity after administration. The percentage of agent workdays scheduled to these activities in other sectors varied, as shown in figure 35. “Other nonenforcement activities” includes duties such as litigation, camera operations, and public relations. Figures 36 through 44 show the number of apprehensions, turn backs, and got aways as percentages of total estimated known illegal entries for each southwest border sector, from fiscal years 2006 through 2011. Border Patrol sectors rely on a different mix of cameras, sign cutting, credible sources, and visual observation to identify and report the number of turn backs and got aways used to determine the number of estimated known illegal entries across locations. Figure 45 shows the breakdown by source of data that sectors used to estimate got aways and turn backs in fiscal year 2011. In addition to the contact named above, Lacinda Ayers (Assistant Director); Joshua S. Akery; Frances A. Cook; Barbara A. Guffy; Eric D. Hauswirth; Stanley J. Kostyla; Brian J. Lipman; John W. Mingus, Jr.; Jessica S. Orr; Susan A. Sachs; and Jerome T. Sandau made key contributions to this report. | Within DHS, U.S. Customs and Border Protection's (CBP) Border Patrol has primary responsibility for securing the border between ports of entry, and reported that with its 18,500 agents it apprehended over 327,000 illegal entrants at the southwest border in fiscal year 2011. Across Border Patrol's nine southwest border sectors, most apprehensions occurred in the Tucson sector in Arizona.
GAO was asked to review how Border Patrol manages resources at the southwest border. This report examines (1) apprehension and other data Border Patrol collects to inform changes in border security for the southwest border and the Tucson sector, in particular; (2) how the Tucson sector compares with other sectors in scheduling agent deployment and to what extent data show that deployments have been effective; and (3) the extent to which Border Patrol has identified mechanisms to assess resource needs under its new strategic plan. GAO analyzed DHS documents and data from fiscal years 2006 to 2011, and interviewed officials in headquarters and five southwest border sectors selected based on cross-border illegal activity, among other things. Results cannot be generalized across the southwest border, but provided insights into Border Patrol operations. In fiscal year 2011, the Department of Homeland Security (DHS) reported data meeting its goal to secure the land border with a decrease in apprehensions; our data analysis showed that apprehensions decreased within each southwest border sector and by 68 percent in the Tucson sector from fiscal years 2006 to 2011, due in part to changes in the U.S. economy and achievement of Border Patrol strategic objectives. These data generally mirrored the decrease in estimated known illegal entries across locations. Other data are used by Border Patrol sector management to assess efforts in securing the border against the threat of illegal migration, drug smuggling, and terrorism; and Border Patrol may use these data to assess border security at the national level as the agency transitions to a new strategic plan. Our analysis of these data indicated that in the Tucson sector, there was little change in the percentage of estimated known illegal entrants apprehended by Border Patrol over the past 5 fiscal years, and the percentage of individuals apprehended who repeatedly crossed the border illegally declined across the southwest border by 6 percent from fiscal years 2008 to 2011. Additionally, the number of drug seizures increased from 10,321 in fiscal year 2006 to 18,898 in fiscal year 2011, and apprehensions of aliens from countries determined to be at an increased risk of sponsoring terrorism increased from 239 in fiscal year 2006 to 309 in fiscal year 2010, but decreased to 253 in fiscal year 2011. The Tucson sector scheduled more agent workdays in fiscal year 2011 for enforcement activities related to patrolling the border than other sectors; however, data limitations preclude comparison of overall effectiveness in how each sector has deployed resources to secure the border. In fiscal year 2011 the Tucson sector scheduled 73 percent of agent workdays for enforcement activities, and of these activities, 71 percent were scheduled for patrolling within 25 miles of the border. Other sectors scheduled from 44 to 70 percent of agent enforcement workdays for patrolling the border. Border Patrol sectors assess how effectively they use resources to secure the border, but differences in how sectors collect and report the data preclude comparing results. Border Patrol issued guidance in September 2012 to improve the consistency of sector data collection and reporting, which may allow future comparison of performance. 
Border Patrol is developing key elements of its 2012-2016 Strategic Plan needed to define border security and the resources necessary to achieve it, but has not identified milestones and time frames for developing and implementing performance goals and measures in accordance with standard practices in program management. Border Patrol officials stated that performance goals and measures are in development for assessing the progress of agency efforts to secure the border between the ports of entry, and since fiscal year 2011, DHS has used the number of apprehensions on the southwest border as an interim goal and measure. However, as GAO previously testified, this interim measure does not inform program results and therefore limits DHS and congressional oversight and accountability. Milestones and time frames could assist Border Patrol in monitoring progress in developing goals and measures necessary to assess the status of border security and the extent to which existing resources and capabilities are appropriate and sufficient. Border Patrol expects to implement other key elements of its strategic plan over the next 2 fiscal years. GAO recommends that CBP ensure Border Patrol develops milestones and time frames for developing border security goals and measures to assess progress made and resource needs. DHS concurred with these recommendations. |
In the absence of generally accepted standards, individual states decide how they will do market analysis and perform market conduct examinations. While all states do market analysis in some form, few have established formal programs that look at companies in a consistent and routine manner. States also have no generally agreed upon standards for how many examinations to perform, which companies to examine and how often, and what the scope of the examination should be. As a result of the lack of common standards for market analysis and the lack of consistency in the application of the guidelines for examinations, states find it difficult to depend on other states’ oversight of companies’ market behavior. NAIC and some states have a growing awareness that better market analysis can be a significant tool for monitoring the marketplace behavior of insurance companies and deciding which insurers to examine. All states perform some type of market analysis. In many states, however, it consists largely of monitoring complaints and complaint trends and reacting to significant issues that arise. Three states that we visited—Missouri, Ohio, and Oregon—have established proactive market analysis programs. These programs have established processes for monitoring company behavior to identify trends, companies that vary from the norm (outliers), and potential market conduct problems. In general, an established program would have dedicated staff and protocols for gathering data and conducting analysis at the department offices. Each of the three states with an analysis process that we visited approached market analysis in a different way. Ohio’s program consisted of special data calls to obtain extensive information from selected company files and the use of computerized audit tools to analyze specific aspects of companies’ operations relative to norms identified by peer analysis and to state law. For example, Ohio did 184 “desk audits” in 2001 using data requested from companies doing business in the state. Missouri relied on routinely collecting market data from all licensed companies. Missouri has developed a market data report that companies submit as a supplement to their annual financial reports. These data are then used to evaluate market trends and conditions, as well as to identify individual companies that are outliers. Oregon’s newly established program involved maintaining files on companies in which all available data were collected to facilitate a broad and ongoing review of company behavior. Both Ohio and Oregon officials told us that their market analysis programs were still in an experimental stage of development. When properly done, market analysis can allow states to focus attention on high-risk companies rather than selecting companies for examination based primarily on criteria such as market share, which does not directly correlate with market behavior problems. Missouri officials added that market analysis is not a substitute for market conduct examinations but should interact and be integrated with the examination process. Each state has between 900 and 2,000 licensed insurance companies. Because in general states do not currently depend upon other states’ regulation of companies’ market behavior, most states feel a responsibility for overseeing all the companies selling in their state. The impossibility of examining so many companies requires regulators to identify and prioritize which companies they will examine.
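To make the outlier-identification idea concrete, here is a minimal Python sketch of peer-based screening of the general kind suggested by Ohio's desk audits and Missouri's market data reports. The data fields, the complaint-ratio metric, and the two-standard-deviation threshold are our assumptions for illustration; actual state programs use their own data elements and criteria.

```python
from statistics import mean, stdev

# Hypothetical market data: consumer complaints and premium volume per company.
companies = {
    "Insurer A": {"complaints": 40, "premium_millions": 200},
    "Insurer B": {"complaints": 12, "premium_millions": 150},
    "Insurer C": {"complaints": 95, "premium_millions": 180},
    "Insurer D": {"complaints": 20, "premium_millions": 220},
    "Insurer E": {"complaints": 18, "premium_millions": 160},
}

# Complaint ratio: complaints per $1 million of premium written in the state.
ratios = {name: d["complaints"] / d["premium_millions"] for name, d in companies.items()}

# Flag a company if its ratio exceeds its peers' mean by more than two standard
# deviations (peers = all other companies, a leave-one-out comparison).
outliers = []
for name, ratio in ratios.items():
    peers = [r for other, r in ratios.items() if other != name]
    if ratio > mean(peers) + 2 * stdev(peers):
        outliers.append(name)

print(outliers)  # ['Insurer C'] with these illustrative figures
```

Consistent with the Missouri officials' point that analysis should be integrated with examinations, flagged outliers would be candidates for a targeted examination rather than automatic findings of misconduct.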
The states we visited used a variety of factors to choose companies for a market conduct examination. The most commonly used factors for choosing from among the companies deemed eligible for a market conduct examination were complaints, market share, and time since the last examination. Some states chose to do market conduct examinations for only a subset of licensed companies, even though the excluded companies could comprise a majority of the insurers selling in the state. For example, of the states we visited, Arkansas focused primarily on domestic companies—that is, on companies chartered in that state. In Arkansas, 245 of 1,668 licensed companies in 2001 were domestic. As a consequence, 1,423 non-domestic companies, or 85 percent of all the companies licensed in Arkansas in 2001, were not examined by Arkansas, regardless of whether any other state had examined them. All the states we visited limited the scope of their examinations to customers from within their particular state. That is, examiners looked only at files of state residents. Moreover, most states further limited the scope of their examinations by focusing on only one or a few of a company’s areas of operations. While some states still do comprehensive market conduct examinations, the trend is to conduct targeted examinations of limited scope and in a specific area of concern. State officials we interviewed indicated that targeted examinations are being used more often because these examinations do not take as long as comprehensive examinations, allowing states to conduct more. Of the nine states we visited, Arkansas, Missouri, and New Mexico continued to conduct some comprehensive examinations as well as targeted examinations. Arkansas officials told us that they believed comprehensive examinations were important because such examinations provided the greatest assurance that companies were complying with insurance laws and regulations. According to NAIC, 49 states and the District of Columbia reported performing some market conduct activities in 2001. Of these, 15 completed only targeted examinations, 4 did only comprehensive examinations, and 22 completed some of both types of examination. The remaining nine did not complete any market conduct examinations in 2001. The requirements for and level of training for examiners also varied widely among the states. Each of the states we visited provided some type of training for their examiners. However, there are no generally accepted standards across the states for what constitutes adequate training for a market conduct examiner. Several levels of certification for market conduct examiners are available, but only two of the states we visited, Oregon and New Mexico, required their examiners to be certified or to become certified within a specified period. As can be seen in table 1, there is considerable variation in the number of examinations completed in 2001 by the states we visited. Variation in the number of examinations consistent with the size of the insurance market would be expected. However, as shown in the table, the number of examinations completed bore little relationship to the size of the insurance market in each state. This comparison should not necessarily be taken as an indicator of the relative regulatory performance of the nine states we visited, because during another year the ranking of the states could be different.
However, together with the variations in how states select companies for examinations and how they do them, this added variability helps further explain why the states may be reluctant to depend on other states to examine companies selling insurance to their citizens. Greater sharing of states’ examination plans, efforts, and results could improve regulation and, at the same time, reduce the regulatory burden on companies. Many insurance companies, particularly the largest ones, report that they undergo frequent, sometimes simultaneous, market conduct examinations. We asked 40 of the largest national insurance companies to provide information about their market conduct examination experience for the years 1999 to 2001. Of the 25 companies that responded, 19 were examined a total of 130 times by multiple insurance regulators during the 3-year period. Six of these were examined once or twice during the period, and just over half of the responding companies were examined between one and five times. However, three companies were each examined 17 or more times during the 3 years, with one company receiving 20 examinations—an average of nearly seven per year. These results appear to be consistent with concerns expressed by the insurance industry about excessively frequent and possibly duplicative market conduct examinations. One of the most common complaints from the 25 insurers that responded to our questionnaire was that states did not coordinate their examinations with other states. Some companies reported that, on occasion, multiple states had conducted on-site examinations at the same time. The companies told us that such examinations create difficulties for them and limit the resources they have available to assist the examiners. For example, one insurer wrote, “It takes an insurer a tremendous amount of effort to prepare for and deal with individual state insurance department’s exams (every one is different, plus states generally do not accept others exams in place of another similar exam being done). The duplication of effort is wasteful by the states.” In contrast, six companies, or nearly one-quarter of those responding, had not been examined by any state during the period. Of these six companies, two were last examined in 1997 and the other four did not report having any market conduct examinations. These companies—like others that reported—are large multi-state insurance companies. Since in many states a primary criterion for selecting a company for examination is market share, these responses suggest that the proportion of medium-size and small insurers that rarely, if ever, receive a market conduct examination may be much higher. Groups of states, as well as NAIC, have taken actions to improve the coordination and efficiency of the market conduct examination process. One effort involves improving the sharing of examination information by providing notice of upcoming examinations and sharing results through NAIC’s Examination Tracking System. However, the Examination Tracking System is incomplete and often ignored by the state regulators, in part because it has been inconvenient and difficult to use for scheduling and reporting the results of market conduct examinations. As a result, states are not fully utilizing the system. NAIC’s survey of states’ use of the Examination Tracking System concluded that no more than 66 percent of the states, or 36 states, consistently reported their market conduct or combined market conduct/financial examination schedules to NAIC.
Moreover, only 31 percent of the states reported back to NAIC when the examination had been completed. Another avenue of coordination being pursued by NAIC and some states is joint, or collaborative, examinations. Based on our review of nine states and of NAIC information, some states do conduct collaborative examinations. For example, Ohio officials told us that they had started to conduct collaborative examinations with Illinois, Nebraska, and Oregon. Indiana officials indicated that they had recently completed an examination of a large insurer jointly with another state. Such efforts, however, have not been consistent among states, nor is there a policy or standard procedure about when or how such examinations should occur. Furthermore, while collaborative examinations could reduce the total number of duplicative exams and may result in somewhat more efficient use of regulatory resources, they still require that each state send examiners into the company. In effect, collaborative examinations are a way for multiple states to do a market conduct examination of a company at the same time. Such an examination may be to the benefit of the company. However, if each state’s examiners still ask for samples of files for only their own state’s insurance consumers, the benefit may be reduced. NAIC identified the need for uniformity in market conduct regulation as early as the 1970s. Since then, NAIC has launched a number of market conduct efforts intended to identify and address the issues and concerns caused by the lack of uniformity in states’ market conduct examination processes, and more recently in the market analysis area. Although progress has been slow in establishing more uniformity in market conduct regulation, NAIC has had some successes. One of the earliest was the development of the market conduct examination handbook containing guidance on conducting examinations and reporting examination results. In general, most states use the handbook as an examination guide, but they can still choose not to follow the handbook in an examination or to modify it. For example, although the handbook lays out the steps for conducting an exam, such as notice of an exam, use of sampling techniques, and preparation of an examination report, each state can go about those steps differently. Moreover, the handbook is not intended to cover some aspects of examinations, including examination frequency and company selection criteria. One challenge to establishing voluntary uniform national standards for examinations and examination processes is that states are free to adopt NAIC’s model laws, regulations, and procedures; to modify them to meet their perceived needs and conditions; or even to ignore them entirely. Once NAIC as an organization agrees on recommendations that would create more uniform regulatory statutes, two additional challenges to uniformity remain. First, when proposed changes affect state law, state legislatures must approve the recommendations without significant changes. Second, each state insurance department must successfully implement the recommendations. These challenges to establishing voluntary uniform national standards for examinations can clearly be seen in the number of states adopting the model laws and regulations that NAIC identified in 1995 as the essential elements for a market conduct examination program. By 2003, only nine models had been adopted by more than half the states, while two models had been adopted by five or fewer states.
Achieving uniformity in market regulation will be a difficult process for NAIC and the states. However, a similar problem that existed in solvency regulation over a decade ago was solved by creating the Financial Regulation Standards and Accreditation Program. The program’s overall goal was to achieve a consistent, state-based system of solvency regulation throughout the country. The program was designed to make monitoring and regulating the solvency of multistate insurance companies more consistent by ensuring that states adopt and adhere to agreed-upon standards, which establish the basic recommended practices for an effective regulatory department. To be accredited, states had to show that they had adopted specific solvency laws and regulations that protected insurance consumers, established defined financial analysis and examination processes, and used appropriate organizational and personnel practices. While the quality of regulation is still not consistent, the Accreditation Program has improved financial regulation across the states. As a result, states are now willing, in most cases, to depend on the solvency regulation of other states. While the process used by state insurance regulators to oversee solvency could provide a model for oversight of market conduct as well, there are structural differences in market regulation that will undoubtedly affect the ultimate design of an improved market conduct oversight system. These differences will have to be addressed by NAIC and the states in order to move forward. First, market conduct oversight involves many different activities and operations of insurance companies. This fact has broad implications for regulatory consistency and mutual dependence, including requirements for the necessary training of market conduct examiners and analysts. Second, regulators told us that life insurers tend to use a company-wide business plan and organizational structure. That is, a life company’s operations tend to be relatively consistent across the entire company. Property-casualty insurers, on the other hand, tend to use a regional business model and organizational structure. As a result, a property-casualty insurer’s operations could differ, perhaps substantially, from region to region. Clearly, the life insurer model is more directly amenable to domiciliary-state oversight than the property-casualty model, and any regional or state-by-state variances in a company’s operations and procedures would reduce the effectiveness of domiciliary-state oversight. Some aspects of market conduct oversight will always be state (or region) specific, not only because of the differences between life and property-casualty insurers but also because there will always be differences between some of the specific laws and requirements of individual states. As a result, even when greater uniformity of regulatory oversight is achieved, states will likely always have to devote some attention to the activities of insurers not domiciled in their state. Nevertheless, if a state insurance department knew that the domiciliary state was doing consistent market oversight of the company, with agreed-upon processes, appropriate scope, and well-trained examiners and analysts, the level of attention needed, even for a property-casualty company, could be substantially lessened.
Finally, even to the extent that properly designed and competently performed market conduct oversight can effectively monitor and regulate insurance company practices, it will extend to the sales practices of insurance agents only to the extent that the company takes responsibility for and exercises control of the behavior of the agents that sell its products. In the current environment of market regulation, most insurance regulators believe they need to oversee the market behavior of all companies selling insurance in their state because they cannot depend on the oversight of the other states. State regulators think this way in part because important elements of market regulation are characterized by a lack of even the most fundamental consistency. Formal and rigorous market analysis is in its infancy among state regulators, and whether, when, and how states do market conduct examinations varies widely. As a result, state regulators are now using the resources that they have in the area of market analysis and examinations inefficiently. Regulators from different states examine some insurers often, while other insurers are examined infrequently or not at all. More importantly, because market analysis is weak, regulators may not be finding and focusing on the companies that most need to have an examination. We support the goal of increasing the effectiveness of market conduct regulation through the development and implementation of consistent, nationwide standards for market analysis and market conduct examinations across the states in order to better protect insurance consumers. The emphasis placed on these issues by NAIC has increased substantially over the last 3 years. We believe that NAIC has taken a first step in the right direction. Much work, however, remains, as NAIC and the states have not yet identified or reached agreement on appropriate laws, regulations, processes, and resource requirements that will support the goal of an effective, uniform market oversight program. Such a program, consisting of strong market analysis and effective market conduct examinations, will facilitate the development of an atmosphere of increasing trust among the states. However, at present it remains uncertain whether NAIC and the states can agree on and implement a program that will accomplish this goal. Madam Chairwoman, this concludes my statement. I would be pleased to answer any questions you or other members of the subcommittee may have at this time. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony provides information on two important tools state insurance regulators use to oversee the market activities of insurance companies--market analysis and market conduct examinations. Market analysis is generally done in the state insurance departments. It consists of gathering and integrating information about insurance companies' operations in order to monitor market behavior and identify potential problems at an early stage. Market conduct examinations, which are generally done on site, are a review of an insurer's marketplace practices.
The examination is an opportunity to verify data provided to the department by the insurer and to confirm that companies' internal controls and operational processes result in compliance with state laws and regulations. Specifically, this testimony focuses on (1) the states' use of market analysis and examinations in market regulation, and (2) the effectiveness of the National Association of Insurance Commissioners' (NAIC) efforts to improve these oversight tools and encourage the states to use them. We found that while all states do some level of market analysis, few states have established formal market analysis programs to maintain a systematic and rigorous overview of companies' market behavior and to more effectively identify problem companies for more detailed review. The way state insurance regulators approach and perform market conduct examinations also varied widely across the states. While NAIC has developed a handbook for market conduct examiners, states are not required to use it, and we found that it is not consistently applied across states. Moreover, the handbook is not intended to provide guidance for some important aspects of market conduct examinations--for example, how often examinations should be performed or what criteria states should use to select companies to examine. We also found that the number of market conduct examiners differed widely among states and that there were no generally accepted standards for training and certifying examiners. These differences make it difficult for states to depend on other states' oversight of market activities. Most of the states that we visited told us that they felt responsible for regulating the behavior of all companies that sold insurance in their state. With anywhere from 900 to 2,000 companies operating within each state, the pool of companies is simply too large for any one insurance department to handle. Attempts to do so are neither efficient nor effective. Moreover, since many states do not coordinate their examinations with other states, some large multistate insurance companies reported being examined by multiple states, while other companies were examined infrequently or never. We also found that since the mid-1970s, NAIC has taken a variety of steps to improve the consistency and quality of market conduct examinations. However, despite NAIC's long-standing efforts and some limited successes, progress toward a more effective process has been slow. Recently, NAIC has increased the emphasis it places on market analysis and market conduct examinations as regulatory tools that could improve states' ability to oversee market conduct. With more consistent implementation of routine market analysis, states should be better able to use the resources they already have available to target companies requiring immediate attention. Also, by consistently applying common standards for market conduct examinations, states should be able to rely on regulators in other states for assessments of an insurance company's operations. These improvements should in turn increase the efficiency of the examination process and improve consumer protection by reducing existing overlaps and gaps in regulatory oversight. However, if NAIC cannot convince the various states to adopt and implement common standards for market analysis and examinations, current efforts to strengthen these consumer protection tools are unlikely to result in any fundamental improvement.
While we focus on the states' use of market analysis and market conduct examinations, market regulation includes several other important regulatory tools, including complaint handling and investigation, policy rate and form review, agent and company licensing, and consumer education. Most states have functioning programs addressing each of these four regulatory areas. Ideally, all regulatory tools, including market analysis and market conduct examinations, should work together in an integrated and interrelated way.
The performance of passenger and checked baggage screeners in detecting threat objects at the nation’s airports has been a long-standing concern. In 1978, screeners failed to detect 13 percent of the potentially dangerous objects that Federal Aviation Administration (FAA) agents carried through airport screening checkpoints during tests. In 1987, screeners did not detect 20 percent of the objects in similar tests. In tests conducted during the late 1990s, as the testing objects became more realistic, screeners’ abilities to detect dangerous objects declined further. In April 2004, we, along with the Department of Homeland Security (DHS) Office of the Inspector General (OIG), testified that the performance of screeners continued to be a concern. More recent tests conducted by the Transportation Security Administration’s (TSA) Office of Internal Affairs and Program Review (OIAPR) also identified weaknesses in the ability of screeners to detect threat objects, and separate DHS OIG tests identified comparable screener performance weaknesses. In its July 2004 report, the National Commission on Terrorist Attacks Upon the United States (known widely as the 9/11 Commission) also identified the need to improve screener performance and to better understand the reasons for performance problems. After the terrorist attacks of September 11, 2001, the President signed the Aviation and Transportation Security Act (ATSA) into law on November 19, 2001, with the primary goal of strengthening the security of the nation’s aviation system. ATSA created TSA as an agency with responsibility for securing all modes of transportation, including aviation. As part of this responsibility, TSA oversees security operations at the nation’s more than 450 commercial airports, including passenger and checked baggage screening operations. Prior to the passage of ATSA, air carriers were responsible for screening passengers and checked baggage, and most used private security firms to perform this function. FAA was responsible for ensuring compliance with screening regulations. Today, TSA security activities at airports are overseen by federal security directors (FSDs). Each FSD is responsible for overseeing security activities, including passenger and checked baggage screening, at one or more commercial airports. TSA classifies the over 450 commercial airports in the United States into one of five security risk categories (X, I, II, III, and IV) based on various factors, such as the total number of takeoffs and landings annually, the extent to which passengers are screened at the airport, and other special security considerations. In general, category X airports have the largest number of passenger boardings and category IV airports have the smallest. TSA periodically reviews airports in each category and, if appropriate, updates airport categorizations to reflect current operations. Figure 1 shows the number of commercial airports by airport security category as of December 2003. In addition to establishing TSA and giving it responsibility for passenger and checked baggage screening operations, ATSA set forth specific enhancements to screening operations for TSA to implement, with deadlines for completing many of them.
These requirements included assuming responsibility for screeners and screening operations at more than 450 commercial airports by November 19, 2002; establishing a basic screener training program composed of a minimum of 40 hours of classroom instruction and 60 hours of on-the-job training; conducting an annual proficiency review of all screeners; conducting operational testing of screeners; requiring remedial training for any screener who fails an operational test; and screening all checked baggage for explosives using explosives detection systems by December 31, 2002. Passenger screening is a process by which authorized TSA personnel inspect individuals and property to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other dangerous item aboard an aircraft or into a sterile area. Passenger screeners must inspect individuals for prohibited items at designated screening locations. The four passenger screening functions are: X-ray screening of property, walk-through metal detector screening of individuals, hand-wand or pat-down screening of individuals, and physical search of property and trace detection for explosives. Checked baggage screening is a process by which authorized security screening personnel inspect checked baggage to deter, detect, and prevent the carriage of any unauthorized explosive, incendiary, or weapon onboard an aircraft. Checked baggage screening is accomplished through the use of explosive detection systems (EDS) or explosive trace detection (ETD) systems, and through the use of alternative means, such as manual searches, K-9 teams, and positive passenger bag match, when EDS and ETD systems are unavailable on a temporary basis. Figure 2 provides an illustration of passenger and checked baggage screening operations. There are several positions within TSA for employees that perform and directly supervise passenger and checked baggage screening functions. Figure 3 provides a description of these positions. To prepare screeners to perform screening functions, to keep their skills current, and to address performance deficiencies, TSA provides three categories of required screener training. Table 1 provides a description of the required training. In September 2003, we reported on our preliminary observations of TSA’s efforts to ensure that screeners were effectively trained and supervised and to measure screener performance. We found that TSA had established and deployed a basic screener training program and required remedial training but had not fully developed or deployed a recurrent training program for screeners or supervisors. We also reported that TSA had collected limited data to measure screener performance. Specifically, TSA had conducted limited covert testing, the Threat Image Projection System was not fully operational, and TSA had not implemented the annual screener proficiency testing required by ATSA. In subsequent products, we reported progress TSA had made in these areas and challenges TSA continued to face in making training available to screeners and in measuring and enhancing screener performance. A summary of our specific findings is included in appendix I. TSA has taken a number of actions to enhance the training of screeners and Screening Supervisors but has encountered difficulties in providing access to recurrent training. TSA has enhanced basic training by, among other things, adding a dual-function (passenger and checked baggage) screening course for new employees. 
Furthermore, in response to the need for frequent and ongoing training, TSA has implemented an Online Learning Center with self-guided training courses available to employees over TSA’s intranet and the Internet and developed and deployed a number of hands-on training tools. Moreover, TSA now requires screeners to participate in 3 hours of recurrent training per week, averaged over each quarter year. TSA has also implemented leadership and technical training programs for Screening Supervisors. However, some FSDs, in response to open-ended survey questions, identified a desire for more training in specific areas, including leadership, communication, and supervision. Further, despite the progress TSA has made in enhancing and expanding screener and supervisory training, TSA has faced challenges in providing access to recurrent training. FSDs reported that insufficient staffing and a lack of high-speed Internet/intranet connectivity at some training facilities have made it difficult to fully utilize these programs and to meet training requirements. TSA has acknowledged that challenges exist in recurrent screener training delivery and is taking steps to address these challenges, including factoring training requirements into workforce planning efforts and distributing training through written materials and CD-ROMs until full Internet/intranet connectivity is achieved. However, TSA does not have a plan for prioritizing and scheduling the deployment of high-speed connectivity to all airport training facilities once funding is available. The absence of such a plan limits TSA’s ability to make prudent decisions about how to move forward with deploying connectivity to all airports to provide screeners access to online training. TSA has enhanced its basic screener training program by updating the training to reflect changes to standard operating procedures, deploying a new dual-function (passenger and checked baggage screening) basic training curriculum, and allowing the option of training delivery by local staff. As required by ATSA, TSA established a basic training program for screeners composed of a minimum of 40 hours of classroom instruction and 60 hours of on-the-job training. TSA also updated the initial basic screener training courses at the end of 2003 to incorporate changes to standard operating procedures and directives, which contain detailed information on how to perform TSA-approved screening methods. However, a recent study by the DHS OIG found that while incorporating the standard operating procedures into the curricula was a positive step, a number of screener job tasks were incompletely addressed in or were absent from the basic training courses. In addition to updates to the training curriculum, in April 2004, TSA developed and implemented a new basic screener training program: dual-function screener training that covers the technical aspects of both passenger and checked baggage screening. Initially, new hire basic training was performed by a contractor and provided a screener with training in either passenger or checked baggage screening functions. A screener could then receive basic training in the other function later, at the discretion of the FSD, but could not be trained in both functions immediately upon hire. The new dual-function training program is modular in design. Thus, FSDs can choose whether newly hired screeners will receive instruction in one or both of the screening functions during the initial training.
In addition, the individual modules can be used to provide recurrent training, such as refreshing checked baggage screening skills for a screener who has worked predominately as a passenger screener. TSA officials stated that this new approach provides the optimum training solution based on the specific needs of each airport and reflects the fact that at some airports the FSD does not require all screeners to be fully trained in both passenger and checked baggage screening functions. Some FSDs, particularly those at smaller airports, have made use of the flexibility offered by the modular design of the new course to train screeners immediately upon hire in both passenger and checked baggage screening functions. Such training up front allows FSDs to use screeners for either the passenger or the checked baggage screening function immediately upon completion of basic training. Figure 4 shows that 58 percent (3,324) of newly hired screeners trained between April 1, 2004, and September 1, 2004, had completed the dual-function training. In April 2004, TSA also provided FSDs with the flexibility to deliver basic screener training using local instructors. TSA’s Workforce Performance and Training Office developed basic screener training internally, and initially, contractors delivered all of the basic training. Since then, TSA has provided FSDs with the discretion to provide the training using local TSA employees or to use contractors. The flexibility to use local employees allows FSDs and members of the screener workforce to leverage their first-hand screening knowledge and experience and address situations unique to individual airports. As of December 10, 2004, TSA had trained 1,021 local FSD staff (representing 218 airports) in how to instruct the dual-function screener training course. TSA officials stated that they expect the use of TSA-approved instructors to increase over time.

“Numerous interviews revealed concerns with training curriculum, communication, and coordination issues that directly affect security screening. Unsatisfied with the quantity and breadth of topics, many Training Coordinators have developed supplementary lectures on both security and non-security related topics. These additional lectures…have been very highly received by screeners.”

In October 2003, TSA introduced the Online Learning Center to provide screeners with remote access to self-guided training courses. As of September 14, 2004, TSA had provided access to over 550 training courses via the Online Learning Center and made the system available via the Internet and its intranet. TSA also developed and deployed a number of hands-on training modules and associated training tools for screeners at airports nationwide. These training modules cover topics including hand-wanding and pat-down techniques, physical bag searches, X-ray images, prohibited items, and customer service. Additionally, TSA instituted another module for the Online Learning Center, called Threat in the Spotlight, which, based on intelligence TSA receives, provides screeners with the latest threat information regarding terrorist attempts to get threat objects past screening checkpoints. Appendix III provides a summary of the recurrent training tools TSA has deployed to airports and the modules currently under development. In December 2003, TSA issued a directive requiring screeners to receive 3 hours of recurrent training per week averaged over a quarter year.
One hour is required to be devoted to X-ray image interpretation and the other 2 hours to screening techniques, review of standard operating procedures, or other mandatory administrative training, such as ethics and privacy act training. In January 2004, TSA provided FSDs with additional tools to facilitate and enhance screener training. Specifically, TSA provided airports with at least one modular bomb set (MBS II) kit—containing components of an improvised explosive device—and one weapons training kit, in part because screeners had consistently told TSA’s OIAPR inspectors that they would like more training with objects similar to ones used in covert testing. Although TSA has made progress with the implementation of recurrent training, some FSDs identified the need for several additional courses, including courses that address more realistic threats. TSA acknowledged that additional screener training is needed, and officials stated that the agency is in the process of developing new and improved screener training, including additional recurrent training modules (see app. III). TSA has arranged for leadership training for Screening Supervisors through the Department of Agriculture Graduate School and has developed leadership and technical training courses for Screening Supervisors. However, some FSDs reported the need for more training for Screening Supervisors and Lead Screeners. The quality of Screening Supervisors has been a long-standing concern. In testifying before the 9/11 Commission in May 2003, a former FAA Assistant Administrator for Civil Aviation Security stated that, following a series of covert tests at screening checkpoints to determine which were strongest, which were weakest, and why, the checkpoint invariably seemed to be as strong or as weak as the supervisor who was running it. Similarly, TSA’s OIAPR identified a lack of supervisory training as a cause for screener covert testing failures. Further, in a July 2003 internal study of screener performance, TSA identified poor supervision at the screening checkpoints as a cause for screener performance problems. In particular, TSA acknowledged that many Lead Screeners, Screening Supervisors, and Screening Managers did not demonstrate supervisory and management skills (i.e., mentoring, coaching, and positive reinforcement) and provided little or no timely feedback to guide and improve screener performance. In addition, the internal study found that because of poor supervision at the checkpoint, supervisors or peers were not correcting incorrect procedures, optimal performance received little reinforcement, and not enough breaks were provided to screeners. A September 2004 report by the DHS OIG supported these findings, noting that Screening Supervisors and Screening Managers needed to be more attentive in identifying and correcting improper or inadequate screener performance. TSA recognizes the importance of Screening Supervisors and has established training programs to enhance their performance and effectiveness. In September 2003, we reported that TSA had begun working with the Department of Agriculture Graduate School to tailor the school’s off-the-shelf supervisory course to meet the specific needs of Screening Supervisors and, in the interim, had begun training existing supervisors through the off-the-shelf course until the customized course was fielded. According to TSA’s training records, as of September 2004, about 3,800 Screening Supervisors had completed the course—approximately 92 percent of current Screening Supervisors.
In response to our survey, one FSD noted that the supervisory training was long overdue because most of the supervisors had no prior federal service or, in some cases, no leadership experience. This FSD also noted that “leadership and supervisory skills should be continuously honed; thus, the development of our supervisors should be an extended and sequential program with numerous opportunities to develop skills—not just a one-time class.” In addition to the Department of Agriculture Graduate School course, TSA’s Online Learning Center includes over 60 supervisory courses designed to develop leadership and coaching skills. In April 2004, TSA included in the Online Learning Center a Web-based technical training course—required for all Lead Screeners and Screening Supervisors. This course covers technical issues, such as resolving alarms at screening checkpoints. TSA introduced this course to the field in March 2004, and although the course is a requirement, TSA officials stated that they have not set goals for when all Lead Screeners and Screening Supervisors should have completed the course. In June 2004, TSA training officials stated that a second supervisor technical course was planned for development and introduction later in 2004. However, in December 2004, the training officials stated that planned funding for supervisory training may be used to support other TSA initiatives. The officials acknowledged that this would reduce TSA’s ability to provide the desired type and level of supervisory training to its Lead Screener, Screening Supervisor, and Screening Manager staff. TSA plans to revise its plans to provide Lead Screener, Screening Supervisor, and Screening Manager training based on funding availability. Although TSA has developed leadership and technical courses for Screening Supervisors, many FSDs, in response to our general survey, identified additional types of training needed to enhance screener supervision. Table 2 provides a summary of the additional training needs that FSDs reported. TSA training officials stated that the Online Learning Center provides several courses that cover these topics. Such courses include Situation Leadership II; Communicating with Difficult People: Handling Difficult Co-Workers; Team Participation: Resolving Conflict in Teams; Employee Performance: Resolving Conflict; High Impact Hiring; Team Conflict: Overcoming Conflict with Communication; Correcting Performance Problems: Disciplining Employees; Team Conflict: Working in Diversified Teams; Correcting Performance Problems: Identifying Performance Problems; Resolving Interpersonal Skills; Grammar, Skills, Punctuation, Mechanics and Word Usage; and Crisis in Organizations: Managing Crisis Situations. TSA training officials acknowledged that for various reasons FSDs might not be aware that the supervisory and leadership training is available. For example, FSDs at airports without high-speed Internet/intranet access to the Online Learning Center might not have access to all of these courses. It is also possible that certain FSDs have not fully browsed the contents of the Online Learning Center and therefore are not aware that the training is available. Furthermore, officials stated that online learning is relatively new to government and senior field managers, and some of the FSDs may expect traditional instructor-led classes rather than online software. 
Some FSDs responded to our general survey that they faced challenges with screeners receiving recurrent training, including insufficient staffing to allow all screeners to complete training within normal duty hours and a lack of high-speed Internet/intranet connectivity at some training facilities. According to our guide for assessing training, to foster an environment conducive to effective training and development, agencies must take actions to provide sufficient time, space, and equipment to employees to complete required training. TSA has set a requirement of 3 hours of recurrent training per week, averaged over a quarter year, for both full-time and part-time screeners. However, FSDs for about 18 percent (48 of 263) of the airports in our airport-specific survey reported that screeners received less than 9 to 12 hours of recurrent training per month. Additionally, FSDs for 48 percent (125 of 263) of the airports in the survey reported that there was not sufficient time for screeners to receive recurrent training within regular work hours. At 66 percent of those airports where the FSD reported that there was not sufficient time for screeners to receive recurrent training within regular work hours, the FSDs cited screener staffing shortages as the primary reason. We reported in February 2004 that FSDs at 11 of the 15 category X airports we visited reported that they were below their authorized staffing levels because of attrition and difficulties in hiring new staff. In addition, three of these FSDs noted that they had never been successful in hiring up to the authorized staffing levels. We also reported in February 2004 that FSDs stated that because of staffing shortages, they were unable to let screeners participate in training because it affected the FSD’s ability to provide adequate coverage at the checkpoints. In response to our survey, FSDs across all categories of airports reported that screeners must work overtime in order to participate in training. A September 2004 DHS OIG report recommended that TSA examine the workforce implications of the 3-hour training requirement and take steps to correct identified imbalances in future workforce planning to ensure that all screeners are able to meet the recurrent training standard. The 3-hours-per-week training standard represents a staff time commitment of 7.5 percent of full-time and between 9 and 15 percent of part-time screeners’ nonovertime working hours (the arithmetic behind these figures is sketched below). TSA headquarters officials have stated that because the 3-hours-per-week requirement is averaged over a quarter, it provides flexibility to account for the operational constraints that exist at airports. However, TSA headquarters officials acknowledged that many airports are facing challenges in meeting the 3-hour recurrent training requirement. TSA data for the fourth quarter of fiscal year 2004 reported that 75 percent of airports were averaging less than 3 hours of recurrent training per week per screener. The current screener staffing model, which is used to determine the screener staffing allocations for each airport, does not take the 3-hours-per-week recurrent training requirement into account. However, TSA headquarters officials said that they are factoring this training requirement into their workforce planning efforts, including the staffing model currently under development.
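The percentages above can be reproduced with simple arithmetic. This is a rough sketch, not TSA's own calculation: the 40-hour full-time week is standard, but the part-time range of roughly 20 to 32 hours per week is an assumption inferred from the stated 9-to-15-percent figures rather than taken from TSA documents.

\[
\frac{3 \text{ hours}}{40 \text{ hours}} = 7.5\%, \qquad
\frac{3 \text{ hours}}{32 \text{ hours}} \approx 9.4\%, \qquad
\frac{3 \text{ hours}}{20 \text{ hours}} = 15\%
\]

On this reading, a part-time screener on a 20-hour schedule would spend roughly one working hour in seven on recurrent training, which helps explain why FSDs reported relying on overtime to meet the standard.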
Another barrier to providing recurrent training is the lack of high-speed Internet/intranet access at some of TSA’s training locations. TSA officials acknowledged that many of the features of the Online Learning Center, including some portions of the training modules and some Online Learning Center course offerings, are difficult or impossible to use in the absence of high-speed Internet/intranet connectivity. As one FSD put it, “the delayed deployment of the high-speed Internet package limits the connectivity to TSA HQ for various online programs that are mandated for passenger screening operations including screener training.” One FSD for a category IV airport noted that the lack of a high-speed connection for the one computer at an airport he oversees made the Online Learning Center “nearly useless.” TSA began deploying high-speed access to its training sites and checkpoints in May 2003 and has identified high-speed connectivity as necessary in order to deliver continuous training to screeners. TSA’s July 2003 Performance Improvement Study recommended accelerating high-speed Internet/intranet access in order to provide quick and systematic distribution of information and, thus, reduce uncertainty caused by the day-to-day changes in local and national procedures and policy. In October 2003, TSA reported plans to have an estimated 350 airports online with high-speed connectivity within 6 months. However, in June 2004, TSA reported that it did not have the resources to reach this goal. TSA records show that as of October 2004, TSA had provided high-speed access for training purposes to just 109 airports, where 1,726 training computers were fully connected. These 109 airports had an authorized staffing level of over 24,900 screeners, meaning that nearly 20,100 screeners (45 percent of TSA’s authorized screening workforce) still did not have high-speed Internet/intranet access to the Online Learning Center at their training facility (a check of these figures follows below). In October 2004, TSA officials stated that TSA’s Office of Information Technology had selected an additional 16 airport training facilities with a total of 205 training computers to receive high-speed connectivity by the end of December 2004. As of January 19, 2005, TSA was unable to confirm that these facilities had received high-speed connectivity. Additionally, TSA officials could not provide a time frame for when they expected to provide high-speed connectivity to all airport training facilities because of funding uncertainties. Furthermore, TSA does not have a plan for prioritizing and scheduling the deployment of high-speed connectivity to all airport training facilities once funding is available. Without such a plan, TSA’s strategy and timeline for implementing connectivity to airport training facilities are unclear, limiting TSA’s ability to make prudent decisions about how to move forward with deploying connectivity once funding is available. Figure 5 shows the percentage of airports reported to have high-speed connectivity for their training computers by category of airport as of October 2004.
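As a rough consistency check on the connectivity figures above (a sketch, assuming the connected and unconnected counts together make up TSA's entire authorized screening workforce of roughly 45,000, a total implied by the percentages rather than stated directly):

\[
24{,}900 + 20{,}100 = 45{,}000, \qquad
\frac{20{,}100}{45{,}000} \approx 44.7\% \approx 45\%
\]

On this reading, slightly more than half of the authorized workforce was based at the 109 connected airports, even though those airports represented only about a quarter of the more than 450 commercial airports, which is consistent with TSA connecting larger airports first.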
To mitigate airport connectivity issues in the interim, on April 1, 2004, TSA made the Online Learning Center courses accessible through public Internet connections, which enable screeners to log on to the Online Learning Center from home, a public library, or other locations. However, TSA officials stated that the vast majority of screeners who have used the Online Learning Center have logged in from airports with connectivity at their training facilities. TSA also distributes new required training products using multiple delivery channels, including written materials and CD-ROMs for those locations where access to the Online Learning Center is limited. Specifically, TSA officials stated that they provided airports without high-speed connectivity with CD-ROMs for the 50 most commonly used optional commercial courseware titles covering topics such as information technology skills, customer service, and teamwork. Additionally, officials stated that as technical courses are added to the Online Learning Center, they are also distributed via CD-ROM and that until full connectivity is achieved, TSA will continue to distribute new training products using multiple delivery channels. Because of a lack of internal controls, TSA cannot provide reasonable assurance that screeners are completing required training. First, TSA policy does not clearly define responsibility for ensuring that screeners have completed all required training. Additionally, TSA has no formally defined policies or procedures for documenting completion of remedial training, or a system designed to facilitate review of this documentation for purposes of monitoring. Further, TSA headquarters does not have formal policies and procedures for monitoring completion of basic training and lacks procedures for monitoring recurrent training. Finally, at airports without high-speed connectivity, training records must be entered manually, making it challenging for some airports to keep accurate and up-to-date training records. TSA’s current guidance for FSDs regarding the training of the screener workforce does not clearly identify responsibility for tracking and ensuring compliance with training requirements. In a good control environment, areas of authority and responsibility are clearly defined and appropriate lines of reporting are established. In addition, internal control standards also require that responsibilities be communicated within an organization. The Online Learning Center provides TSA with a standardized, centralized tool capable of maintaining all training records in one system. It replaces an ad hoc system previously used during initial rollout of federalized screeners in which contractors maintained training records. A February 2004 management directive states that FSDs are responsible for ensuring the completeness, accuracy, and timeliness of training records maintained in the Online Learning Center for their employees. For basic and recurrent training, information is to be entered into the Online Learning Center within 30 days of completion of the training activity. However, the directive does not clearly identify who is responsible for ensuring that employees comply with training requirements. Likewise, a December 2003 directive requiring that screeners complete 3 hours of training per week averaged over a quarter states that FSDs are responsible for ensuring that training records for each screener are maintained in the Online Learning Center. Although both directives include language that requires FSDs to ensure training records are maintained in the Online Learning Center, neither specifies whether FSDs or headquarters officials are responsible for ensuring compliance with the basic, recurrent, and remedial training requirements. Even so, TSA headquarters officials told us that FSDs are ultimately responsible for ensuring screeners receive required training. However, officials provided no documentation clearly defining this responsibility.
Without a clear designation of responsibility for monitoring training completion, this function may not receive adequate attention, leaving TSA unable to provide reasonable assurance that its screening workforce receives required training. In April 2005, TSA officials responsible for training stated that they were updating the February 2004 management directive on training records to include a specific requirement for FSDs to ensure that screeners complete required training. They expect to release the revised directive in May 2005. TSA has not established and documented policies and procedures for monitoring completion of basic and recurrent training. Internal control standards advise that internal controls should be designed so that monitoring is ongoing and ingrained in agency operations. However, TSA headquarters officials stated that they have no formal policy for monitoring screeners’ completion of basic training. They also stated that they have neither informal nor formal procedures for monitoring the completion of screeners’ recurrent training requirements, and acknowledged that TSA policy does not address what is to occur if a screener does not meet the recurrent training requirement. Officials further stated that individual FSDs have the discretion to determine what action, if any, to take when screeners do not meet this requirement. In July 2004, TSA training officials stated that headquarters staff recently began running a report in the Online Learning Center to review training records to ensure that newly hired screeners had completed required basic training. In addition, they stated that in June 2004, they began generating summary-level quarterly reports from the Online Learning Center to quantify and analyze hours expended for recurrent screener training. Specifically, TSA training officials stated that reports showing airport-level compliance with the 3-hour recurrent requirement were generated for the third and fourth quarters of fiscal year 2004 and delivered to the Office of Aviation Operations for further analysis and sharing with the field. However, Aviation Operations officials stated that they did not use these reports to monitor the status of screener compliance with the 3-hour recurrent training requirement and do not provide them to the field unless requested by an FSD. TSA training officials said that while headquarters intends to review recurrent training activity on an ongoing basis at a national and airport level, they view FSDs and FSD training staff as responsible for ensuring that individuals receive all required training. Further, they acknowledged that weaknesses existed in the reporting capability of the Online Learning Center and stated that they plan to upgrade the Online Learning Center with improved reporting tools by the end of April 2005. Without clearly defined policies and procedures for monitoring the completion of training, TSA lacks a structure to support continuous assurance that screeners are meeting training requirements. TSA has not established clear policies and procedures for documenting completion of required remedial training. The Standards for Internal Control state that agencies should document all transactions and other significant events and should be able to make this documentation readily available for examination. 
A TSA training bulletin dated October 15, 2002, specifies that when remedial training is required, FSDs must ensure the training is provided and a remedial training reporting form is completed and maintained with the screener’s local records. However, when we asked to review these records, we found confusion as to how and where they were to be maintained. TSA officials stated that they are waiting for a decision regarding how to maintain these records because of their sensitive nature. In the meantime, where and by whom the records should be maintained remains unclear. In September 2004, officials from TSA’s OIAPR—responsible for conducting covert testing—stated that they maintain oversight to ensure screeners requiring remedial training receive required training by providing a list of screeners who failed covert testing and therefore need remedial training to TSA’s Office of Aviation Operations. Aviation Operations is then to confirm via memo that each of the screeners has received the necessary remedial training and report back to OIAPR. Accordingly, we asked TSA for all Aviation Operations memos confirming completion of remedial training, but we were able to obtain only 1 of the 12 memos. In addition, during our review, we asked to review the remedial training reporting forms at five airports to determine whether screeners received required training, but we encountered confusion about requirements for maintaining training records and inconsistency in record keeping on the part of local TSA officials. Because of the unclear policies and procedures for recording completion of remedial training, TSA does not have adequate assurance that screeners are receiving legislatively mandated remedial training. Although training computers with high-speed Internet/intranet connectivity automatically record completion of training in the Online Learning Center, airports without high-speed access at their training facility must have these records entered manually. The February 2004 management directive that describes responsibility for entering training records into the Online Learning Center also established that all TSA employees are required to have an official TSA training record in the Online Learning Center that includes information on all official training that is funded wholly or in part with government funds. Without high-speed access, TSA officials stated, it can be a challenge for airports to keep the Online Learning Center up to date with the most recent training records. TSA headquarters officials further stated that when they want to track compliance with mandatory training such as ethics or civil rights training, they provide the Training Coordinators with a spreadsheet on which to enter the data rather than relying on the Online Learning Center. As one FSD told us, without high-speed connectivity at several of the airports he oversees, “this is very time consuming and labor intensive and strains my limited resources.” The difficulty that airports encounter in maintaining accurate records when high-speed access is absent could compromise TSA’s ability to provide reasonable assurance that screeners are receiving mandated basic and remedial training.
For example, TSA has increased the amount of covert testing it performs at airports. These tests have identified that, overall, weaknesses and vulnerabilities continue to exist in the passenger and checked baggage screening systems. TSA also enabled FSDs to conduct local covert testing, fully deployed the Threat Image Projection (TIP) system to passenger screening checkpoints at commercial airports nationwide, and completed the 2003/2004 annual screener recertification program for all eligible screeners. However, not all of these performance measurement and enhancement tools are available for checked baggage screening. Specifically, TIP is not currently operational at checked baggage screening checkpoints, and the recertification program does not include an image recognition component for checked baggage screeners. However, TSA is taking steps to address the overall imbalance in passenger and checked baggage screening performance data, including working toward implementing TIP for checked baggage screening and developing an image recognition module for checked baggage screener recertification. To enhance screener and screening system performance, TSA has also conducted a passenger screener performance improvement study and subsequently developed an improvement plan consisting of multiple action items, many of which TSA has completed. However, TSA has not conducted a similar study for checked baggage screeners. In addition, TSA has established over 20 performance measures for the passenger and checked baggage screening systems as well as two performance indexes (one for passenger and one for checked baggage screening). However, TSA has not established performance targets for each of the component indicators within the indexes, such as covert testing. According to the Office of Management and Budget, performance goals are target levels of performance expressed as a measurable objective, against which actual achievement can be compared. Performance goals should incorporate measures (indicators used to gauge performance), targets (characteristics that tell how well a program must accomplish the measure), and time frames. Without these targets, TSA’s performance management system, and these performance indexes specifically, may not provide the agency with the complete information necessary to assess achievements and make decisions about where to direct performance improvement efforts. Although TSA has not yet established performance targets for each of the component indicators, TSA plans to finalize performance targets for the indicators by the end of fiscal year 2005. TSA headquarters has increased the amount of covert testing it performs and enabled FSDs to conduct additional local covert testing at passenger screening checkpoints. TSA’s OIAPR conducts unannounced covert tests of screeners to assess their ability to detect threat objects and to adhere to TSA-approved procedures. These tests, in which undercover OIAPR inspectors attempt to pass threat objects through passenger screening checkpoints and in checked baggage, are designed to measure vulnerabilities in passenger and checked baggage screening systems and to identify systematic problems affecting performance of screeners in the areas of training, policy, and technology. TSA considers its covert testing as a “snapshot” of a screener’s ability to detect threat objects at a particular point in time and as one of several indicators of systemwide screener performance.
OIAPR conducts tests at passenger screening checkpoints and checked baggage screening checkpoints. According to OIAPR, these tests are designed to approximate techniques terrorists might use. These covert test results are one source of data on screener performance in detecting threat objects as well as an important mechanism for identifying areas in passenger and checked baggage screening needing improvement. In testimony before the 9/11 Commission, the Department of Transportation Inspector General stated that emphasis must be placed on implementing an aggressive covert testing program to evaluate operational effectiveness of security systems and equipment. Between September 10, 2002, and September 30, 2004, OIAPR conducted a total of 3,238 covert tests at 279 different airports. In September 2003, we reported that OIAPR had conducted limited covert testing but planned to double the amount of tests it conducted during fiscal year 2004, based on an anticipated increase in its staff from about 100 full-time equivalents to about 200 full-time equivalents. TSA officials stated that based on budget constraints, OIAPR’s fiscal year 2004 staffing authorization was limited to 183 full-time equivalents, of which about 60 are located in the field. Despite a smaller than expected staff increase, by the end of the second quarter of fiscal year 2004, OIAPR had already surpassed the number of tests it performed during fiscal year 2003, as shown in table 3. In October 2003, OIAPR committed to testing between 90 and 150 airports by April 2004 as part of TSA’s short-term screening performance improvement plan. OIAPR officials stated that this was a one-time goal to increase testing. This initiative accounts for the spike in testing for the second quarter of fiscal year 2004. OIAPR has created a testing schedule designed to test all airports at least once during a 3-year time frame. Specifically, the schedule calls for OIAPR to test all category X airports once a year, category I and II airports once every 2 years, and category III and IV airports at least once every 3 years. In September 2003 and April 2004, we reported that TSA covert testing results had identified weaknesses in screeners’ ability to detect threat objects. More recently, in April 2005, we, along with the DHS OIG, identified that screener performance continued to be a concern. Specifically, our analysis of TSA’s covert testing results for tests conducted between September 2002 and September 2004 identified that overall, weaknesses still existed in the ability of screeners to detect threat objects on passengers, in their carry-on bags, and in checked baggage. Covert testing results in this analysis cannot be generalized either to the airports where the tests were conducted or to airports nationwide. These weaknesses and vulnerabilities were identified at airports of all sizes, at airports with federal screeners, and at airports with private-sector screeners. For the 2-year period reviewed, overall failure rates for covert tests (passenger and checked baggage) conducted at airports using private-sector screeners were somewhat lower than failure rates for the same tests conducted at airports using federal screeners for the airports tested during this period.
Since these test results cannot be generalized as discussed above, each airport’s test results should not be considered a comprehensive measurement of the airport’s performance or any individual screener’s performance in detecting threat objects, or used to determine whether airports with private-sector screeners performed better than airports with federal screeners. On the basis of testing data through September 30, 2004, we determined that OIAPR had performed covert testing at 61 percent of the nation’s commercial airports. TSA has until September 30, 2005, to test the remaining 39 percent of airports and meet its goal of testing all airports within 3 years. Although officials stated that they have had to divert resources from airport testing to test other transportation modes, which may affect their ability to conduct airport testing, they still expect to meet the goal. In February 2004, TSA provided protocols to help FSDs conduct their own covert testing of local airport passenger screening activities—a practice that TSA had previously prohibited. Results of local testing using these protocols are to be entered into the Online Learning Center. This information, in conjunction with OIAPR covert test results and TIP threat detection results, is intended to assist TSA in identifying specific training and performance improvement efforts. In February 2005, TSA released a general procedures document for local covert testing at checked baggage screening locations. TSA officials said that they had not yet begun to use data from local covert testing to identify training and performance needs because of difficulties in ensuring that local covert testing is implemented consistently nationwide. These officials said that after a few months of collecting and assessing the data, they will have a better idea of how the data can be used. TSA has nearly completed the reactivation of the TIP system at airports nationwide and plans to use data it is collecting to improve the effectiveness of the passenger screening system. TIP is designed to test passenger screeners’ detection capabilities by projecting threat images, including guns, knives, and explosives, onto bags as they are screened during actual operations. Screeners are responsible for identifying the threat image and calling for the bag to be searched. Once prompted, TIP identifies to the screener whether the threat is real and then records the screener’s performance in a database that can be analyzed for performance trends. TSA is evaluating the possibility of developing an adaptive functionality for TIP. Specifically, as individual screeners become proficient in identifying certain threat images, such as guns or knives, they would receive fewer of those images and more images that they are less proficient at detecting, such as improvised explosive devices. TIP was activated by FAA in 1999 with about 200 threat images, but it was shut down immediately following the September 11 terrorist attacks because of concerns that it would result in screening delays and panic, as screeners might think that they were actually viewing threat objects. In October 2003, TSA began reactivating and expanding TIP. In April 2004, we reported that TSA was reactivating TIP with an expanded library of 2,400 images at all but one of the more than 1,800 checkpoint lanes nationwide. To further enhance screener training and performance, TSA also plans to develop at least an additional 50 images each month.
Despite these improvements, TIP is not yet available for checked baggage screening. In April 2004, we reported that TSA officials were working to resolve technical challenges associated with using TIP for checked baggage screening on EDS machines and had started EDS TIP image development. The DHS OIG reported in September 2004 that TSA planned to implement TIP on all EDS machines at checked baggage stations nationwide in fiscal year 2005. However, in December 2004, TSA officials stated that because of severe budget reductions, TSA will be unable to begin implementing a TIP program for checked baggage in fiscal year 2005. They did not specify when such a program might begin.

TSA plans to use TIP data to improve the passenger screening system in two ways. First, TIP data can be used to measure screener threat detection effectiveness by threat type. Second, TSA plans to use TIP results to help identify specific recurrent training needs within and across airports and to tailor screeners' recurrent training to focus on threat category areas that indicate a need for improvement. TSA considers February 2004 the first full month of TIP reporting with the new library of 2,400 images. TSA began collecting these data in early March 2004 and is using them to determine more precisely how they can be used to measure screener performance in detecting threat objects and what they reveal about screener performance. TSA does not currently plan to use TIP data as an indicator of individual screener performance because TSA does not believe that TIP by itself adequately reflects a screener's performance. Nevertheless, in April 2004, TSA gave FSDs the capability to query and analyze TIP data in a number of ways, including by screener, checkpoint, and airport. FSDs for over 60 percent of the airports included in our airport-specific survey stated that they use or plan to use TIP data as a source of information in their evaluations of individual screener performance. Additionally, FSDs for 50 percent of the airports covered in our survey reported using data generated by TIP to identify specific training needs for individual screeners.

In September 2004, the DHS OIG reported that TSA was assessing the cost and feasibility of modifying TIP so that, over time, it recognizes and responds to the specific threat objects that individual screeners are most and least competent in detecting. This feature would increase the utility of TIP as a training tool. The DHS OIG also reported that TSA was considering linking TIP over a network, which would facilitate TSA's collection, analysis, and sharing of TIP user results. The report recommended that TSA continue to pursue each of these initiatives, and TSA agreed. However, in December 2004, TSA officials stated that the availability of funding will determine whether they pursue these efforts further.

TSA has completed its first round of the screener recertification program, and the second round is now under way. However, TSA does not currently include an image recognition component in the test for checked baggage screener recertification. ATSA requires that each screener receive an annual proficiency review to ensure that he or she continues to meet all qualifications and standards required to perform the screening function. In September 2003, we reported that TSA had not yet implemented this requirement.
To meet this requirement, TSA established a recertification program; it began recertification testing in October 2003 and completed the testing in March 2004. The first recertification program was composed of two assessment components, one covering screeners' performance and the other covering screeners' knowledge and skills. During the performance assessment component, screeners were rated on both organizational and individual goals, such as maintaining the nation's air security, vigilantly carrying out duties with utmost attention to tasks that will prevent security threats, and demonstrating the highest levels of courtesy to travelers to maximize their satisfaction with screening services. The knowledge and skills assessment component consists of three modules: (1) knowledge of standard operating procedures, (2) image recognition, and (3) practical demonstration of skills. Table 4 provides a summary of these three modules.

To be recertified, screeners must receive a rating of "met" or "exceeded" standards on their annual performance assessments and pass each of the applicable knowledge and skills modules. Screeners who failed any of the three modules were to receive study time or remedial training as well as a second opportunity to take and pass the modules. Screeners who failed on their second attempt were to be removed from screening duties and subject to termination. Screeners could also be terminated for receiving a performance rating below "met" standards.

TSA completed its analysis of the recertification testing and performance evaluations in May 2004. TSA's analysis shows that less than 1 percent of screeners subject to recertification failed to complete this requirement. Figure 6 shows the recertification results. Across all airports, screeners performed well on the recertification testing. Over 97 percent of screeners passed the standard operating procedures test on their first attempt. Screeners faced the most difficulty on the practical demonstration of skills component. However, following remediation, 98.6 percent of the screeners who initially failed this component passed on their second attempt. Table 5 shows the results of the recertification testing by module.

As shown in table 6, screeners hired as checked baggage screeners were not required to complete the image recognition module in the first round of the recertification testing. In addition, during the first year of recertification testing, which took place from October 2003 through May 2004, dual-function screeners who were actively working as both passenger and checked baggage screeners were required to take only the recertification test for passenger screeners. They were therefore not required to take the recertification testing modules required for checked baggage screening, even though they worked in that capacity. TSA began implementing the second annual recertification testing in October 2004 and plans to complete it no later than June 2005. This recertification program includes components for dual-function screeners. However, TSA still has not included an image recognition module for checked baggage screeners—which would include dual-function screeners performing checked baggage screening.
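The recertification rules described above reduce to a simple decision procedure: a screener must earn a performance rating of "met" or "exceeded" and pass every applicable module, with one remediated retake allowed per failed module. The following Python sketch is a simplified reading of those rules as reported; the argument shapes and return strings are chosen purely for illustration.

```python
MODULES = ["standard operating procedures", "image recognition",
           "practical demonstration of skills"]

def recertify(performance_rating, first_attempt, second_attempt):
    """Simplified reading of the first-round recertification rules.

    performance_rating: annual rating, which must be "met" or "exceeded".
    first_attempt / second_attempt: dicts mapping each applicable module
    to True (pass) or False (fail); second attempts follow remediation
    and are needed only for modules failed on the first attempt.
    """
    if performance_rating not in ("met", "exceeded"):
        return "subject to termination (rating below 'met' standards)"

    failed = [m for m in first_attempt if not first_attempt[m]]
    if not failed:
        return "recertified"

    # One remediated retake is allowed for each failed module.
    if all(second_attempt.get(m, False) for m in failed):
        return "recertified after remediation"
    return "removed from screening duties; subject to termination"

# A screener who passes everything on the first attempt:
print(recertify("met", {m: True for m in MODULES}, {}))  # recertified
```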
TSA officials stated that a decision was made not to include an image recognition module for checked baggage screeners during this cycle because not all checked baggage screeners would have completed training on the onscreen resolution protocol by the time recertification testing was conducted at their airports. In December 2004, TSA officials stated that they plan to develop an image recognition module for checked baggage and dual-function screeners and that this test should be available for next year's recertification program. The development and implementation of the image recognition test will be contingent, they stated, upon the availability of funds.

TSA has implemented a number of improvements designed to enhance screener performance, based on concerns it identified in a July 2003 Passenger Screener Performance Improvement Study and recommendations from OIAPR. To date, however, these efforts have primarily focused on the performance of passenger screeners, and TSA has not yet undertaken a comparable performance study for checked baggage screeners. The Passenger Screener Performance Improvement Study relied in part on the findings of OIAPR's covert testing. At the time the study was issued, OIAPR had conducted fewer than 50 tests of checked baggage screeners. The July 2003 study focused on and included numerous recommendations for improving the performance of passenger screeners but recommended waiting to analyze the performance of checked baggage screeners until some time after implementation of the recommendations—some of which, TSA indicated, also applied to checked baggage screeners. TSA officials told us that this analysis has been postponed until they have reviewed the impact of implementing the recommendations on passenger screening performance.

In October 2003, to address passenger screener performance deficiencies identified in the study, TSA developed a Short-Term Screening Performance Improvement Plan. This plan included specific action items in nine broad categories—such as enhancing training, increasing covert testing, finishing the installation of TIP, and expediting high-speed connectivity to checkpoints and training computers—that TSA planned to pursue to provide tangible improvements in passenger screener performance and security (see app. IV for additional information on the action items). In June 2004, TSA reported that it had completed 57 of the 62 specific actions. As of December 2004, two of these actions still had not been implemented—full deployment of high-speed connectivity and a time and attendance package—both of which continue to be deferred pending the identification of appropriate resources.

In addition to the Performance Improvement Study and corresponding action plans, TSA's OIAPR makes recommendations in its reports on covert testing results. These recommendations address deficiencies identified during testing and are intended to improve screening effectiveness. As of December 2004, OIAPR had issued 18 reports to TSA management on the results of its checkpoint and checked baggage covert testing. These reports included 14 distinct recommendations, some of which were also included in TSA's screener improvement action plan. All but two of these reports included recommendations on corrective actions needed to enhance the effectiveness of passenger and checked baggage screening.
TSA has established performance measures, indexes, and targets for the passenger and checked baggage screening systems but has not established targets for the various components of the screening indexes. The Government Performance and Results Act of 1993 provides, among other things, that federal agencies establish program performance measures, including the assessment of relevant outputs and outcomes of each program activity. Performance measures are meant to cover key aspects of performance and help decision makers assess program accomplishments and improve program performance. A performance target is a desired level of performance expressed as a tangible, measurable objective against which actual achievement will be compared. By analyzing the gap between target and actual levels of performance, management can target those processes that are most in need of improvement, set improvement goals, and identify appropriate process improvements or other actions.

An April 2004 consultant study commissioned by TSA found that FSDs and their staffs generally believed that the lack of key performance indicators available to monitor passenger and checked baggage screening performance represented a significant organizational weakness. Since then, TSA has established over 20 performance measures for the passenger and checked baggage screening systems. For example, TSA measures the percentage of screeners meeting a threshold score on the annual recertification testing on their first attempt, the percentage of screeners scoring above the national standard level on TIP performance, and the number of passengers screened, by airport category.

TSA also has developed two performance indexes to measure the effectiveness of the passenger and checked baggage screening systems. These indexes measure overall performance through a composite of indicators and are derived by combining specific performance measures relating to passenger and checked baggage screening, respectively. Specifically, these indexes measure the effectiveness of the screening systems through machine probability of detection and covert testing results; efficiency through a calculation of dollars spent per passenger or bag screened; and customer satisfaction through a national poll, customer surveys, and customer complaints at both airports and TSA's national call center. According to TSA officials, the agency has finalized targets for the two overall indexes, but these targets have not yet been communicated throughout the agency. Further, TSA plans to provide the FSDs with only the performance index score, not the value of each of the components, because the probabilities of detection are classified as secret and TSA is concerned that releasing the components could allow those probabilities to be deduced. Table 7 summarizes the components of the performance indexes developed by TSA.

TSA has not yet established performance targets for the various components of the screening indexes, including performance targets for covert testing (person probability of detection). TSA's strategic plan states that the agency will use the performance data it collects to make tactical decisions based on performance. The screening performance indexes developed by TSA can be a useful analysis tool, but without targets for each component of the index, TSA will have difficulty performing meaningful analyses of the parts that add up to the index.
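The difficulty can be seen with a small worked example. Because a composite index collapses several indicators into one number, two very different component profiles can yield the same index score; only component-level targets reveal which underlying process needs attention. The weights and scores in the following Python sketch are invented for illustration—TSA's actual index formula and detection probabilities are classified.

```python
# Invented weights for the three component areas the indexes draw on;
# TSA's actual formula and detection probabilities are classified.
WEIGHTS = {"effectiveness": 0.5, "efficiency": 0.25, "satisfaction": 0.25}

def index(components):
    """Weighted composite of normalized (0-to-1) component scores."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# Two very different component profiles...
profile_a = {"effectiveness": 0.60, "efficiency": 0.90, "satisfaction": 0.90}
profile_b = {"effectiveness": 0.90, "efficiency": 0.60, "satisfaction": 0.60}

# ...produce the same composite score (0.75). Only component-level
# targets would reveal that profile A's detection effectiveness lags.
print(round(index(profile_a), 3), round(index(profile_b), 3))
```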
For example, without performance targets for covert testing, TSA will not have identified a desired level of performance related to screener detection of threat objects. Performance targets for covert testing would enable TSA to focus its improvement efforts on areas determined to be most critical, as 100 percent detection capability may not be attainable. In January 2005, TSA officials stated that the agency plans to track the performance of individual index components and establish performance targets against which to measure these components. They further stated that they are currently collecting and analyzing baseline data to establish these targets and plan to finalize them by the end of fiscal year 2005.

It has been over 2 years since TSA assumed responsibility for passenger and checked baggage screening operations at the nation's commercial airports. TSA has made significant progress over this period in meeting congressional mandates related to establishing these screening operations. With the congressional mandates now largely met, TSA has turned its attention to assessing and enhancing the effectiveness of its passenger and checked baggage screening systems. An important tool in enhancing screener performance is ongoing training. As threats and technology change, the training and development of screeners to ensure that they have the competencies—knowledge, skills, abilities, and behaviors—needed to successfully perform their screening functions become vital to strengthening aviation security. Without addressing the challenges to delivering ongoing training, including installing high-speed connectivity at airport training facilities, TSA may have difficulty maintaining a screening workforce that possesses the critical skills needed to perform at a desired level. In addition, without adequate internal controls that are designed to help ensure screeners receive required training and that are communicated throughout the agency, TSA cannot effectively provide reasonable assurance that screeners receive all required training. Given the importance of the Online Learning Center in both delivering training and serving as the means by which the completion of screener training is documented, TSA would benefit from having a clearly defined plan for prioritizing the deployment of high-speed Internet/intranet connectivity to all airport training facilities. Such a plan would help enable TSA to move forward quickly and effectively in deploying high-speed connectivity once funding is available.

Additionally, history demonstrates that U.S. commercial aircraft have long been a target for terrorist attacks through the use of explosives carried in checked baggage, and covert testing conducted by TSA and the DHS OIG has identified that weaknesses and vulnerabilities continue to exist in the passenger and checked baggage screening systems, including in the ability of screeners to detect threat objects. While covert test results provide an indicator of screening performance, they cannot be used solely as a comprehensive measure of any airport's screening performance or any individual screener's performance, or to determine the overall performance of federal versus private-sector screening. Rather, these data should be considered in the larger context of additional performance data, such as TIP and recertification test results, when assessing screener performance.
While TSA has undertaken efforts to measure and strengthen performance, these efforts have primarily focused on passenger screening rather than checked baggage screening. TSA's plans for implementing TIP for checked baggage screening and establishing an image recognition component for checked baggage screener recertification testing—plans made during the course of our review—represent significant steps forward in its efforts to strengthen checked baggage screening functions. Additionally, although TSA has developed passenger and checked baggage screening effectiveness measures, the agency has not yet established performance targets for the individual components of these measures. Until such targets are established, it will be difficult for TSA to draw more meaningful conclusions about its performance and how to most effectively direct its improvement efforts. For example, performance targets for covert testing would enable TSA to focus its improvement efforts on areas determined to be most critical, as 100 percent detection capability may not be attainable. We are encouraged by TSA's recent plan to establish targets for the individual components of the performance indexes. This effort, along with the additional performance data TSA plans to collect on checked baggage screening operations, should assist TSA in measuring and enhancing screening performance and provide TSA with more complete information with which to prioritize and focus its screening improvement efforts.

To help ensure that all screeners have timely and complete access to screener training available in the Online Learning Center and to help provide TSA management with reasonable assurance that all screeners are receiving required passenger and checked baggage screener training, we recommend that the Secretary of the Department of Homeland Security direct the Assistant Secretary, Transportation Security Administration, to take the following two actions: (1) develop a plan that prioritizes and schedules the deployment of high-speed Internet/intranet connectivity to all TSA's airport training facilities to help facilitate the delivery of screener training and the documentation of training completion, and (2) develop internal controls, such as specific directives, clearly defining responsibilities for monitoring and documenting the completion of required training, and clearly communicate these responsibilities throughout the agency.

We provided a draft of this report to DHS for review and comment. On February 4, 2005, we received written comments on the draft report, which are reproduced in full in appendix V. DHS generally concurred with the findings and recommendations in the report and agreed that efforts to implement our recommendations are critical to successful passenger and checked baggage screening training and performance. With regard to our recommendation that TSA develop a plan that prioritizes and schedules the deployment of high-speed Internet/intranet connectivity to all TSA's airport training facilities, DHS stated that TSA has developed such a plan. However, although we requested a copy of the plan several times during our review and after receiving written comments from DHS, TSA did not provide us with a copy of the plan. Therefore, we cannot assess the extent to which the plan DHS referenced in its written comments fulfills our recommendation.
In addition, regarding our recommendation that TSA develop internal controls clearly defining responsibilities for monitoring and documenting the completion of required training, and clearly communicate those responsibilities throughout TSA, DHS stated that it is taking steps to define responsibility for monitoring the completion of required training and to insert this accountability into the performance plans of all TSA supervisors. TSA's successful completion of these ongoing and planned activities should address the concerns we raised in this report. DHS also provided technical comments on our draft report, which we incorporated where appropriate.

As agreed with your office, we will send copies of this report to relevant congressional committees and subcommittees and to the Secretary of the Department of Homeland Security. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report or wish to discuss it further, please contact me at (202) 512-8777. Key contributors to this report are listed in appendix VI.

The following summarizes the findings of our prior reports on screener training and performance:

The Transportation Security Administration (TSA) had deployed a basic screener training program and required remedial training but had not fully developed or deployed a recurrent training program for screeners or supervisors. TSA had collected little information to measure screener performance in detecting threat objects. TSA's Office of Internal Affairs and Program Review's (OIAPR) covert testing was the primary source of information collected on screeners' ability to detect threat objects. However, TSA did not consider the covert testing a measure of screener performance. TSA was not using the Threat Image Projection system (TIP) but planned to fully activate the system, with significantly more threat images than previously used, in October 2003. TSA had not yet implemented an annual proficiency review to ensure that screeners met all qualifications and standards required to perform their assigned screening functions. Although little data existed on the effectiveness of passenger screening, TSA was implementing several efforts to collect performance data.

Aviation Security: Efforts to Measure Effectiveness and Address Challenges: OIAPR planned to double the number of tests it conducted during fiscal year 2004. TSA had only recently begun activating TIP on a wide-scale basis and expected it to be fully operational at every checkpoint at all airports by April 2004. TSA had only recently begun implementing the annual recertification program and did not expect to complete testing at all airports until March 2004. TSA was developing performance indexes for individual screeners and the screening system as a whole but had not fully established these indexes. TSA expected to have them in place by the end of fiscal year 2004.

Aviation Security: Efforts to Measure Effectiveness and Strengthen Security Programs: TSA was working with the U.S. Department of Agriculture's Graduate School to tailor its off-the-shelf supervisory course to meet the specific training needs of screening supervisors. While TSA had taken steps to enhance its screener training programs, staffing imbalances and a lack of high-speed connectivity at airport training facilities had made it difficult for screeners at some airports to fully utilize these programs.
Although TSA was making progress in measuring the performance of passenger screeners, it had collected limited performance data related to its checked baggage screening operations. However, TSA had begun collecting additional performance data related to its checked baggage screening operations and planned to increase these efforts in the future. As part of its efforts to develop performance indexes, TSA was developing baseline data for fiscal year 2004 and planned to report the indexes to DHS in fiscal year 2005. With the exception of covert testing and recent TIP data, data were not yet available to assess how well screeners were performing and what steps, if any, TSA needed to take to improve performance. Also, TSA was not using TIP as a formal indicator of screening performance but instead was using it to identify individual screener training needs.

To examine efforts by the Transportation Security Administration to enhance its passenger and checked baggage screening programs, we addressed the following questions: (1) What actions has TSA taken to enhance training for screeners and supervisors? (2) How does TSA monitor compliance with screener training requirements? (3) What is the status of TSA's efforts to assess and enhance screener performance in detecting threat objects?

To determine how TSA has enhanced training for screeners and supervisors and how TSA has monitored compliance with screener training requirements, we obtained and analyzed relevant legislation, as well as TSA's training plans, guidance, and curriculum. We reviewed data from TSA's Online Learning Center and assessed the reliability of the Online Learning Center database. We compared TSA's procedures for ensuring that screeners receive required training against Standards for Internal Control in the Federal Government. We interviewed TSA officials from the Office of Workforce Performance and Training and the Office of Aviation Operations in Arlington, Virginia. At the airports we visited, we interviewed Federal Security Directors and their staffs, such as Training Coordinators. We also met with officials from four aviation associations—the American Association of Airport Executives, Airports Council International, the Air Transport Association, and the Regional Airline Association. We did not assess the methods used to develop TSA's screener training program, nor did we analyze the contents of TSA's curriculum. Although we could not independently verify the reliability of all of this information, we compared the information with other supporting documents, when available, to determine data consistency and reasonableness. We found the data to be sufficiently reliable for our purposes.

To determine what efforts TSA has taken to assess and enhance screener performance in detecting threat objects, we reviewed related reports from the Department of Transportation and Department of Homeland Security (DHS) Inspectors General, the Congressional Research Service, and TSA, as well as prior GAO reports. We obtained and reviewed TSA's covert test data and results of the annual recertification testing. (Results of the covert testing are classified and will be the subject of a separate classified GAO report.) We discussed methods for inputting, compiling, and maintaining the data with TSA officials. We also assessed the methodology of TSA's covert tests and questioned OIAPR officials about the procedures used to ensure the reliability of the covert test data.
When we found discrepancies between the data OIAPR maintained in spreadsheets and the data included in the hard copy reports we obtained from TSA, we worked with OIAPR to resolve the discrepancies. Further, we visited TSA headquarters to review TSA's annual recertification testing modules and discuss TSA's process for validating the recertification exams. As a result, we determined that the data provided by TSA were sufficiently reliable for the purposes of our review. We also reviewed TSA's performance measures, targets, and indexes. Finally, we interviewed TSA headquarters officials from several offices in Arlington, Virginia, including Aviation Operations, Workforce Performance and Training, Strategic Management and Analysis, and Internal Affairs and Program Review.

In addition, in accomplishing our objectives, we conducted site visits at select airports nationwide to interview Federal Security Directors and their staffs, and we conducted two Web-based surveys of Federal Security Directors. Specifically, we conducted site visits at 29 airports (13 category X airports, 9 category I airports, 3 category II airports, 3 category III airports, and 1 category IV airport) to observe airport security screening procedures and discuss issues related to the screening process with TSA, airport, and airline officials. We chose these airports to obtain a cross-section of all airports by size and geographic distribution. In addition, we selected each of the five contract screening pilot airports. The results from our airport visits provide examples of screening operations and issues but cannot be generalized beyond the airports visited because we did not use statistical sampling techniques in selecting the airports.

The category X airports we visited were Baltimore Washington International Airport, Boston Logan International Airport, Chicago O'Hare International Airport, Dallas/Fort Worth International Airport, Denver International Airport, Washington Dulles International Airport, John F. Kennedy International Airport, Los Angeles International Airport, Newark Liberty International Airport, Orlando International Airport, Ronald Reagan Washington National Airport, San Francisco International Airport, and Seattle-Tacoma International Airport. The category I airports we visited were Burbank-Glendale-Pasadena Airport, John Wayne Airport, Chicago Midway International Airport, Dallas Love Field, Kansas City International Airport, Little Rock National Airport, Metropolitan Oakland International Airport, Portland International Airport, and Tampa International Airport. The category II airports we visited were Jackson International Airport, Dane County Regional Airport, and Greater Rochester International Airport. The category III airports we visited were Idaho Falls Regional Airport, Jackson Hole Airport, and Orlando Sanford International Airport. The category IV airport we visited was Tupelo Regional Airport.

Further, we administered two Web-based surveys to all 155 Federal Security Directors who oversee security at each of the airports falling under TSA's jurisdiction. One survey, the general survey, contained questions covering local and national efforts to train screeners and supervisors and the status of TSA's efforts to evaluate screener performance, including the annual recertification program and TIP. The second survey gathered more specific security information on one or two airports under each Federal Security Director's supervision.
For the airport-specific survey, each Federal Security Director received one or two surveys to complete, depending on the number of airports for which he or she was responsible. Where a Federal Security Director was responsible for more than two airports, we selected the first airport based on the Federal Security Director's location and the second airport to obtain a cross-section of all airports by size and geographic distribution. In all, we requested information on 265 airports. However, two airports were dropped from our initial selection because the airlines serving these airports suspended operations and TSA employees were redeployed to other airports. As a result, our sample size was reduced to 263 airports, which included all 21 category X airports and 60, 49, 73, and 60 category I, II, III, and IV airports, respectively. Because we did not use probability sampling methods to select the airports included in our airport-specific survey, we cannot generalize our findings beyond the selected airports.

A GAO survey specialist designed the surveys in combination with other GAO staff knowledgeable about airport security issues. We conducted pretest interviews with six Federal Security Directors to ensure that the questions were clear, concise, and comprehensive. In addition, TSA managers and an independent GAO survey specialist reviewed the surveys. We conducted these Web-based surveys from late March to mid-May 2004. We received completed general surveys from all 155 Federal Security Directors and completed airport-specific surveys for all 263 airports for which we sought information, for 100 percent response rates. The surveys' results are not subject to sampling errors because all Federal Security Directors were asked to participate in the surveys and we did not use probability sampling techniques to select specific airports. However, the practical difficulties of conducting any survey may introduce other errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the surveys, the data collection, and the data editing and analysis to minimize these nonsampling errors. Also, because these were Web-based surveys in which respondents entered their responses directly into our database, there was little possibility of data entry or transcription error. In addition, all computer programs used to analyze the data were peer reviewed and verified to ensure that the syntax was written and executed correctly.

We performed our work from May 2003 through April 2005 in accordance with generally accepted government auditing standards. Certain information we obtained and analyzed regarding screener training and performance is classified or is considered by TSA to be sensitive security information. Accordingly, the results of our review of this information are not included in this report.

The following descriptions summarize the screener training tools and initiatives deployed or under development by TSA, with deployment dates and usage figures noted where available:

- This tool allows screeners to touch actual improvised explosive device (IED) components and build their own devices. This experiential learning will enable screeners to more readily detect real IEDs during screening. These weapons are also used in live testing conducted by FSD staff.
- This tool allows screeners to touch actual firearms and begin to understand how they can be broken down into various parts. By understanding and experiencing this, screeners are better able to see the components of a firearm during actual screening. These weapons are also used in live testing conducted by FSD staff.
- Maintain and enhance screeners' X-ray image operational skills (deployed January 26, 2004).
- Provide a tool that includes about 14,000 image combinations to practice threat identification (deployed February 5, 2004).
- These teams go into airports where data show that performance needs attention. The team offers a variety of services to assist in improving performance, such as on-the-spot training and consulting services. Team visits can be initiated by FSDs, Internal Affairs reports, Quality Assurance trips, or MTAT supervisors proactively visiting the airport and FSD. Site visits completed from October 2003 through December 3, 2004: North Central (37 visits), South Central (51 visits), Northeast (25 visits), Southeast (60 visits), and Western (53 visits).
- Improve screener supervisors' knowledge of federal government and TSA personnel rules and of how to effectively coach and communicate with employees. Approximately 3,800 supervisors have been trained.
- Certification of screeners to perform supervisory maintenance tasks above and beyond operator training.
- Provide students with the basic skills needed to verify the identity of flying armed law enforcement officers.
- This weekly product brings to light actual cases of weapons being found by law enforcement, with an explanation of how those weapons could be used to attack aviation.
- Provide interactive, performance-based recurrent Web-based training modules for checked baggage explosive detection systems (EDS).
- Improve screener performance by providing an interactive tool, complementary to the Hand Held Metal Detector and Pat Down Video, that allows the screener to practice proper techniques and receive immediate feedback.
- Reinforce TSA's customer service principles and place the screener in various situations requiring effective customer service responses.
- Provide interactive, performance-based recurrent training modules for checkpoint and checked baggage operations.
- Physical Bag Search Video: Maintain and enhance screeners' explosive trace detection (ETD) and physical bag search skills for carry-on and checked baggage.
- Provide interactive recurrent Web-based training modules for ETD and physical bag search.
- Provide an interactive, performance-based training tool to enhance screeners' ability to identify prohibited items.
- Provide an informative and effective learning tool to maintain and enhance the skills of screeners in the area of persons with prosthetics.
- Provide a tool to practice threat identification with about 10,000,000 image combinations. Sharing the X-Ray Tutor Version 2 library, this tool will allow screeners to practice finding threat items using the full capabilities of the TIP-ready X-ray machines.
- Provide an interactive, performance-based tool to convey how the supervisor is to handle screening situations handed off by the screener, following standard operating procedures.
- Provide Web-based training that engages the student with 3-dimensional representations of the muscular frame, showing proper lifting techniques and the results of improper techniques.
In addition to those named above, David Alexander, Leo Barbour, Lisa Brown, Elizabeth Curda, Kevin Dooley, Kathryn Godfrey, David Hooper, Christopher Jones, Stuart Kaufman, Kim Gianopoulos, Thomas Lombardi, Cady S. Panetta, Minette Richardson, Sidney Schwartz, Su Jin Yon, and Susan Zimmerman were key contributors to this report.

The screening of airport passengers and their checked baggage is a critical component in securing our nation's commercial aviation system. Since May 2003, GAO has issued six products related to screener training and performance. This report updates the information presented in the prior products and incorporates results from GAO's survey of 155 Federal Security Directors—the ranking Transportation Security Administration (TSA) authority responsible for the leadership and coordination of TSA security activities at the nation's commercial airports. Specifically, this report addresses (1) actions TSA has taken to enhance training for passenger and checked baggage screeners and screening supervisors, (2) how TSA ensures that screeners complete required training, and (3) actions TSA has taken to measure and enhance screener performance in detecting threat objects.

TSA has initiated a number of actions designed to enhance screener training, such as updating the basic screener training course. TSA also established a recurrent training requirement and introduced the Online Learning Center, which makes self-guided training courses available over TSA's intranet and the Internet. Even with these efforts, Federal Security Directors reported that insufficient screener staffing and a lack of high-speed Internet/intranet connectivity at some training facilities have made it difficult to fully utilize training programs and to meet the recurrent training requirement of 3 hours per week, averaged over a quarter year, within regular duty hours. TSA acknowledged that challenges exist in recurrent training delivery and is taking steps to address these challenges, including factoring training into workforce planning efforts and distributing training through written materials and CD-ROMs. However, TSA has not established a plan prioritizing the deployment of high-speed Internet/intranet connectivity to all airport training facilities to facilitate screener access to training materials.

TSA lacks adequate internal controls to provide reasonable assurance that screeners receive legislatively mandated basic and remedial training and to monitor its recurrent training program. Specifically, TSA policy does not clearly specify the responsibility for ensuring that screeners have completed all required training. In addition, TSA officials have no formal policies or methods for monitoring the completion of required training and were unable to provide documentation identifying the completion of remedial training.

TSA has implemented and strengthened efforts to measure and enhance screener performance. For example, TSA has increased the number of covert tests it conducts at airports, which test screeners' ability to detect threat objects on passengers, in their carry-on baggage, and in checked baggage. These tests identified that, overall, weaknesses and vulnerabilities continue to exist in passenger and checked baggage screening systems at airports of all sizes, at airports with federal screeners, and at airports with private-sector screeners.
While these test results are an indicator of performance, they cannot be used solely as a comprehensive measure of any airport's screening performance or any individual screener's performance. We also found that TSA's efforts to measure and enhance screener performance have primarily focused on passenger screening, not checked baggage screening. For example, TSA uses threat image software only on passenger screening X-ray machines, and the recertification testing program does not include an image recognition module for checked baggage screeners. TSA is taking steps to address the overall imbalance in passenger and checked baggage screening performance data. TSA also established performance indexes for the passenger and checked baggage screening systems to identify an overall desired level of performance. However, TSA has not established performance targets for each of the component indicators that make up the performance indexes, including performance targets for covert testing. TSA plans to finalize these targets by the end of fiscal year 2005.
The Smithsonian Institution was founded in 1846 and is the world's largest museum and research complex, consisting of 19 museums and galleries, the National Zoological Park, and nine research facilities. Of the 137 million artifacts, works of art, and specimens in the Smithsonian's collections, about 126 million are held by the Natural History Museum and about 825,000 are held by the American Indian Museum. Pursuant to the NMAI Act, the American Indian Museum's collection was transferred to the Smithsonian from the former Museum of the American Indian in New York City, which was founded by George Gustav Heye; the collection contains items from North America, South America, Central America, and the Caribbean. After the NMAI Act was enacted in 1989, the American Indian Museum officially assumed control of the Heye collection in June 1990, and the collection was physically moved from New York to a newly constructed cultural resources center near Washington, D.C., from 1999 to 2004. The new American Indian Museum in Washington, D.C., opened its doors to the public in 2004.

The Smithsonian has acquired a large number of Indian human remains and culturally significant objects through a variety of means. For example, in the late 1800s, the Surgeon General of the Army requested that U.S. military forces send thousands of Indian human remains from battlefields and burial sites for the purpose of conducting a cranial study. As a result, thousands of sets of human remains were sent to the Army Medical Museum and later transferred to the Smithsonian. Other human remains and many more objects have been collected through archaeological excavations and donations.

According to museum officials, when new collections are acquired, the Smithsonian assigns an identification number—referred to as a catalog number—to each item or set of items at the time of the acquisition or, in some cases, many years later. A single catalog number may include one or more human bones, bone fragments, or objects, and it may include the remains of one or more individuals. All of this information is stored in the museums' electronic catalog system, which is partly based on historical paper card catalogs. Generally, each catalog number in the electronic catalog system includes basic information on the item or set of items, such as a brief description of the item, where the item was collected, and when it was taken into the museum's collection. Since the NMAI Act was enacted, the Smithsonian has identified approximately 19,780 catalog numbers that potentially include Indian human remains (about 19,150 within the Natural History Museum collections and about 630 within the American Indian Museum collections). This number has changed over time as the museums have either cataloged more human remains or identified additional catalog numbers that contain human remains.

According to museum officials, Indian human remains, funerary objects, and other objects potentially subject to repatriation are generally organized within the following museum collections:

- Physical anthropology (Natural History Museum only): This collection consists mostly of human remains but, in rare instances, also some funerary objects.
- Archaeology: This collection consists of a wide variety of objects, including funerary objects, some human remains, and some potential sacred objects and objects of cultural patrimony.
- Ethnology: This collection consists of a wide variety of objects, including potential sacred objects and objects of cultural patrimony, and some human remains and funerary objects.

The Smithsonian's overall mission is the increase and diffusion of knowledge, and the American Indian and Natural History Museums implement this overall mission in different ways. The American Indian Museum's mission is advancing knowledge and understanding of the Native cultures of the Western Hemisphere—past, present, and future—through partnership with Native people and others. The Natural History Museum's mission is to inspire curiosity, discovery, and learning about nature and culture through outstanding research, collections, exhibitions, and education; its mission does not specifically refer to partnership with Native people.

Both museums have established repatriation offices to carry out their repatriation activities (see fig. 1); the American Indian Museum established its office in November 1993, and the Natural History Museum established its office in September 1991. The repatriation offices within the two museums are independent of each other and have separate staffs and budgets. For fiscal year 2010, the American Indian Museum's Repatriation Office had a budget of approximately $580,000 and consisted of five staff—a program manager, a repatriation coordinator, and three case officers. In the same fiscal year, the Natural History Museum's Repatriation Office had a budget of approximately $1.7 million (including funding for the Review Committee) and consisted of 11 staff—including a program manager, three case officers, and a lab director with six technical staff.

One of the purposes of the 1996 amendments to the NMAI Act was to ensure that the requirements for the inventory, identification, and repatriation of human remains and objects in the Smithsonian's possession are being carried out in a manner consistent with NAGPRA. NAGPRA requires each federal agency and museum with NAGPRA items in its collections to (1) compile an inventory of Native American human remains and associated funerary objects; (2) compile a summary of Native American unassociated funerary objects, sacred objects, and objects of cultural patrimony; and (3) repatriate culturally affiliated human remains and objects identified through the inventory or summary processes if the terms and conditions prescribed in the act are met. NAGPRA required that the inventories be completed no later than 5 years after its enactment—by November 16, 1995—and that the summaries be completed no later than 3 years after its enactment—by November 16, 1993. NAGPRA included a provision that allows museums that made a good faith effort to carry out an inventory and identification to apply for an extension of the inventory completion deadline.

With respect to inventories, NAGPRA requires that they be completed in consultation with tribal government officials, Native Hawaiian organization officials, and traditional religious leaders. Furthermore, in the inventory, federal agencies and museums are required to identify geographic and cultural affiliation to the extent possible based on information in their possession. If a federal agency or museum determines cultural affiliation for human remains and associated funerary objects to a tribe or tribes in an inventory, the act requires it to notify the affected tribe(s) no later than 6 months after the completion of the inventory.
The agency or museum is also required to provide a copy of each notice to the Secretary of the Interior for publication in the Federal Register. NAGPRA and its implementing regulations generally require that, upon the request of an Indian tribe or Native Hawaiian organization, all culturally affiliated NAGPRA items be returned to the applicable Indian tribe or Native Hawaiian organization expeditiously—within 90 days of receiving the repatriation request but no sooner than 30 days after publication of the notice. However, as we reported in 2010, we found examples where agency officials treated inventories like summaries in that the consultation occurred and cultural affiliation determinations were made after the preparation of the inventory. One of the purposes of the 1996 amendments to the NMAI Act was to ensure that the requirements for the inventory, identification, and repatriation of human remains and objects in the Smithsonian’s possession are being carried out in a manner consistent with NAGPRA, but there remain some differences between the two laws. For example, the 1996 amendments to the NMAI Act adopt NAGPRA’s definition of inventory, but they do not alter the original 1989 requirement to use the “best available scientific and historical documentation” in identifying the origins of the Indian human remains and funerary objects. In addition, the NMAI Act does not contain specific deadlines for notifying culturally affiliated tribes or returning culturally affiliated human remains. Instead, the NMAI Act requires that culturally affiliated tribes be notified “at the earliest opportunity” and that culturally affiliated items be returned “expeditiously.” Some examples of differences between the two acts are summarized in table 2. Section 12 of the NMAI Act requires the Smithsonian to establish a special committee, which the Smithsonian calls the Repatriation Review Committee (referred to hereafter as the Review Committee), and tasks the committee with, for example, ensuring fair and objective consideration and assessment of all relevant evidence with respect to the inventory and identification process; reviewing any finding relating to the origin or the return of remains or objects, upon request; and facilitating the resolution of any dispute with respect to the return of remains or objects. Section 12 lays out other requirements with respect to the Review Committee. For example, it requires the Secretary of the Smithsonian to certify by report to Congress at the conclusion of the work of the committee. It also requires the Secretary to provide administrative support for the committee. The Smithsonian established a charter for the Review Committee, which states that the purpose of the committee is to serve in an advisory capacity to the Secretary of the Smithsonian in matters concerning the repatriation of human remains, funerary objects, sacred objects, and objects of cultural patrimony. The charter also discusses the functions of the committee, duties of its members, and rules of evidence, among other things. The NMAI Act provides the Board of Trustees of the American Indian Museum with certain authority over the museum’s collections. For example, the act states that the Board of Trustees has sole authority, subject to the general policies of the Smithsonian’s Board of Regents, to lend, exchange, sell, or otherwise dispose of any part of the collections of the American Indian Museum. 
The act also states that nothing in section 11 of the act—which addresses inventories—shall be interpreted as limiting the authority of the Smithsonian to return or repatriate Indian human remains and funerary objects. Furthermore, the 1996 amendments to the NMAI Act add that nothing in the summary section may be construed to prevent the Smithsonian from making an inventory, preparing a written summary, or carrying out the repatriation of unassociated funerary objects, sacred objects, or objects of cultural patrimony in a manner that exceeds the requirements of the NMAI Act.

Based on the flexibilities provided by the NMAI Act, the American Indian Museum established a repatriation policy that differs from the Natural History Museum's policy and from the act's basic repatriation requirements. Under the policy, for example, the American Indian Museum will repatriate items if there is sufficient evidence to establish a "reasonable belief" of cultural affiliation—a lower threshold than the NMAI Act's basic requirement to repatriate items where cultural affiliation can be established by a "preponderance of the evidence." Also, the policy states that the American Indian Museum will take into consideration repatriation requests from non-federally recognized tribes, which are not covered by the NMAI Act's repatriation requirements.

The American Indian and Natural History Museums generally prepared summaries and inventories within the deadlines established in the NMAI Act, but their inventories and the process they used to prepare them raise questions about compliance with some of the statutory requirements. Since 1989, the Smithsonian estimates that it has offered to repatriate the Indian human remains in about one-fourth of the catalog numbers identified as possibly including human remains. Smithsonian officials we spoke with identified challenges that the museums face in carrying out their repatriation requirements under the NMAI Act.

The American Indian and Natural History Museums generally prepared required documents by the deadlines established in the NMAI Act. The American Indian Museum prepared its first set of inventories in 1993. In an effort to voluntarily follow NAGPRA's more comprehensive requirements, it included its entire collection in these inventories—not just the human remains and funerary objects it was required to inventory at the time. Museum officials later found that the 1993 inventories omitted about 5,000 catalog numbers containing objects; these catalog numbers had never been entered into the museum's electronic catalog, which was the primary source for the 1993 inventories. As a result, the museum prepared additional inventories in 1995 covering these 5,000 catalog numbers. The museum provided all federally recognized tribes with inventories of the collections that could be affiliated to them. After the enactment of the 1996 amendments, the museum did not revise its inventories or prepare separate summaries because officials believed that the museum had already complied with the new requirements.

The Natural History Museum also generally prepared its summary and inventory documents by the statutory deadlines. The museum prepared 171 summaries of its ethnological collection from the United States based on information in its electronic catalog—170 by tribal grouping and 1 for items that could not be associated with any tribal group.
Of these 171 summaries, 116 were prepared by the December 31, 1996, deadline established by the 1996 amendments; 50 were completed within 2 months of the deadline; and 5 were completed later. Some of these summaries were prepared prior to the 1996 amendments' enactment, since the museum had prepared summaries upon request from tribes in an effort to voluntarily follow NAGPRA's requirement to prepare summaries. After the 1996 amendments were enacted, the museum provided all federally recognized tribes with summaries of the collections that could be affiliated to them. The museum also prepared 64 inventories of its physical anthropology and archaeology collections from the United States—13 for Alaska regions, 1 for each additional state and the District of Columbia, and 1 for items that could not be associated with a particular state. These inventories identified about 16,000 catalog numbers as possibly including human remains and, according to the museum's Repatriation Office, about 3,000 catalog numbers as possibly including funerary objects. According to museum officials, these inventories provided specific geographic information for most human remains and, in some cases, specific information about the possible cultural affiliations of the human remains and funerary objects. The Natural History Museum prepared all of its inventories by the June 1, 1998, deadline and provided all federally recognized tribes with inventories of the collections that could be affiliated to them. As with the American Indian Museum, the inventories prepared by the Natural History Museum included potentially many more items than the human remains and funerary objects covered by the NMAI Act as enacted in 1989. For example, the inventories included the museum's entire archaeology collection from the United States, which consisted of over 200,000 catalog numbers containing over 1 million objects.

Although both museums generally prepared their summaries and inventories by the statutory deadlines, the process for preparing the inventories raises questions about compliance with two of the NMAI Act's requirements. The first question is the extent to which the museums prepared their inventories in consultation and cooperation with traditional Indian religious leaders and government officials of Indian tribes, as required by the NMAI Act. Section 11 directs the Secretary of the Smithsonian, in consultation and cooperation with traditional Indian religious leaders and government officials of Indian tribes, to inventory the Indian human remains and funerary objects in the possession or control of the Smithsonian and, using the best available scientific and historical documentation, identify the origins of such remains and objects. The 1996 amendments did not alter this language, although they added a definition of inventory. However, the Smithsonian generally began the consultation process with Indian tribes after the inventories from both museums were distributed.

The second question is the extent to which the Natural History Museum's inventories—which were finalized after the 1996 amendments—identified geographic and cultural affiliations to the extent practicable based on information held by the Smithsonian, as required by the amendments. Its inventories generally identified geographic and cultural affiliations only where such information was readily available in the museum's electronic catalog.
In preparing its inventories, the museum did not consult other information that the Smithsonian had in its possession to attempt to identify geographic and cultural affiliations, such as records in the National Anthropological Archives or the Smithsonian Institution Archives, which may have included work papers of collectors and donors. According to the Smithsonian's legal views and Smithsonian documents, this is one of the reasons why the cultural affiliations in the Natural History Museum's inventories were tentative. In its legal views, however, the Smithsonian states that it has fully complied with the statutory requirements for preparing inventories. First, the Smithsonian states that the statutory language does not require that consultation occur prior to the inventory being completed. The Smithsonian points to the definition of inventory added by the 1996 amendments in support of its interpretation, noting that one could easily construe the consultation requirement to apply with greater force to the requirement to use the best available scientific and historical documentation to identify the origins of the human remains and objects rather than to the development of the inventories. Second, the Smithsonian states that the law allows the Smithsonian to determine, for itself, what was practicable in order to meet the statutory deadline for completion of the inventories. The Smithsonian acknowledges that neither the American Indian nor the Natural History Museum reviewed every source maintained by the Smithsonian in preparing the inventories—including the National Anthropological Archives and individual staff files—because accessing those sources would not have been practicable given the size and scope of the Smithsonian's collection. Furthermore, according to the Smithsonian's legal views, the Smithsonian does not interpret section 11 as necessarily requiring that the inventory and identification process occur simultaneously, and it has therefore adopted a two-step process to fulfill section 11's requirements. The first step is to prepare a detailed listing (the inventory) of the human remains and funerary objects in each museum's collection using information in the electronic catalog. The Smithsonian stated that it does not believe that the NMAI Act—either as originally enacted or after the 1996 amendments—requires the cultural affiliations included in the inventories to be conclusive and dispositive. The second step is to prepare repatriation case reports (the identification). During the second step, according to officials, the museums generally consult with tribes and consider all relevant information, including information held by the Smithsonian as well as other information needed to meet the NMAI Act's requirement that the Smithsonian use the best available scientific and historical documentation to identify the origins of remains and funerary objects. Generally, each case report prepared by the museums includes a determination of cultural affiliation and a recommendation regarding repatriation, according to officials. The officials told us that the museums generally undertake the second step only after a tribe submits a repatriation claim based on information in the inventories. The legislative history of the 1996 amendments provides little clear guidance concerning the meaning of section 11.
The congressional committee report accompanying the 1996 amendments notes that the amendments were entirely consistent with the Smithsonian's then-current administrative practice and adopted the Smithsonian's administrative deadline of June 1, 1998, to complete an inventory of Indian human remains and funerary objects in its possession. This suggests that the 1996 amendments ratified the Smithsonian's two-step approach to inventory and identification. The committee report, however, also notes that one intent of the amendments was to ensure that the requirements for the inventory, identification, and repatriation of human remains and funerary objects in the possession of the Smithsonian were being carried out in a manner consistent with NAGPRA, which suggests that the Smithsonian should have included geographic and cultural affiliations in its inventory to the extent practicable based on information held by the Smithsonian. Had the Smithsonian implemented the latter interpretation, it would have faced serious challenges in conducting the consultations and research necessary to make the required cultural affiliations within the statutory deadlines, given the resources devoted to the task. Natural History Museum staff told us that they could not have reviewed all relevant information when preparing the inventories because they did not have time to do so by the deadline. We recognize the dilemma that the Smithsonian faced; it had to either prepare incomplete inventories by the deadline or prepare complete inventories and miss the deadline. Either approach would have resulted in questions about compliance with the NMAI Act. In addition, Smithsonian officials believe that only the first step of the two-step process was required to be completed within the deadline. Therefore, under this interpretation, the Smithsonian does not have a statutory deadline to complete the remaining consultations and make the remaining cultural affiliation determinations. The congressional committee reports accompanying the 1989 act indicate that the Smithsonian estimated that the identification and inventory of Indian human remains as well as notification of affected tribes and return of the remains and funerary objects would take 5 years. However, more than 21 years later, these efforts are still under way. From the passage of the NMAI Act in 1989 through December 2010, the Smithsonian estimates that it has offered to repatriate the Indian human remains in about one-quarter (about 5,280) of the estimated 19,780 catalog numbers identified as possibly including Indian human remains. The American Indian Museum offered to repatriate human remains in about 40 percent (about 250) of its estimated 630 catalog numbers. The Natural History Museum has offered to repatriate human remains in about 25 percent (about 5,040) of its estimated 19,150 catalog numbers containing Indian human remains. The Smithsonian has also offered to repatriate more than 212,000 funerary objects from about 3,460 catalog numbers and about 1,240 sacred objects and objects of cultural patrimony from about 1,050 catalog numbers through 2010 (see table 3). We could not determine what share of the total this represents because the Smithsonian cannot provide a reliable estimate of the number of funerary objects in its collections and, for sacred objects and objects of cultural patrimony, the Smithsonian relies on tribes to assist in identifying such objects.
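These shares follow directly from the rounded catalog-number estimates above. The following minimal sketch (in Python; the figures are the rounded estimates reported in this section, and the percentage rounding is ours) illustrates the arithmetic:

# Minimal sketch: offered-for-repatriation shares computed from the rounded
# catalog-number estimates reported above.
offered = {"American Indian Museum": 250, "Natural History Museum": 5040}
identified = {"American Indian Museum": 630, "Natural History Museum": 19150}

for museum in offered:
    share = offered[museum] / identified[museum]
    print(f"{museum}: {share:.0%} of catalog numbers offered")

combined = sum(offered.values()) / sum(identified.values())
print(f"Smithsonian overall: {combined:.0%}")  # about 27 percent, just over one-quarter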
The Smithsonian generally makes repatriation decisions based on the case reports prepared by case officers at each museum. At the Natural History Museum, the Secretary of the Smithsonian has delegated authority for making decisions to the Under Secretary for Science; at the American Indian Museum the decision is made by the Board of Trustees. Through December 31, 2010, case officers had completed 76 case reports at the American Indian Museum and 95 at the Natural History Museum. Case reports vary in scope and complexity, and therefore the length of time necessary to complete them varies. Both museums' Repatriation Managers provided estimates for how long case reports should take to complete (18 months for the American Indian Museum, on average, and at least 1 year for the Natural History Museum), but added that time frames can vary greatly depending on the circumstances. Also, they said that these estimates are based on a starting point of when a case officer begins to actively work on a case report. Therefore, their estimates do not include the months or years during which claims may be pending awaiting active consideration. We found that the median time from the date of an official claim letter to the date of the resulting case report was 2.4 years, with individual cases ranging from 1 month to 18.3 years. Appendix II provides details on the length of time taken by the museums to respond to repatriation claims. According to the Smithsonian's legal views, case reports need to be detailed in order to meet both the act's statutory requirements and the Smithsonian's fiduciary duties. Under these views, the Smithsonian has an affirmative obligation to prepare inventories and to use the best available scientific and historical documentation to identify the origins of such remains and funerary objects. Accordingly, Smithsonian officials told us that once they had addressed all of the pending requests, they would begin culturally affiliating the human remains and objects still in their collections. In preparing case reports, case officers generally review relevant documentation, including relevant information held by the Smithsonian, and consult with tribes. According to officials, while the Smithsonian sometimes holds the best available information about its collections, case officers may also review sources held outside the Smithsonian, such as articles published in journals, state site files, and relevant archival information. In some cases, case officers have traveled to archives across the country to review relevant information, such as notes taken by collectors in the field, according to the Natural History Museum's Repatriation Manager. The slow progress can be attributed, in part, to the Smithsonian's view that it has a legal and fiduciary duty to use the best available scientific and historical documentation to determine the cultural affiliation of human remains and objects. The two museums have established internal goals for the number of case reports they will complete in 2011—5 at the Natural History Museum and 4 at the American Indian Museum. However, Smithsonian officials could not estimate when they will complete this process for human remains and funerary objects. At its current pace, the Smithsonian could take decades more to prepare case reports for the remaining human remains and funerary objects in its collections.
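To illustrate how such processing times can be derived, the following sketch (in Python) computes the median and range of claim-to-report intervals from pairs of claim-letter and case-report dates; the dates shown are hypothetical and are not actual Smithsonian claim data:

from datetime import date
from statistics import median

# Hypothetical (claim letter date, case report date) pairs for illustration only.
cases = [
    (date(1991, 3, 1), date(1993, 9, 1)),
    (date(1995, 6, 1), date(1995, 7, 1)),
    (date(1992, 1, 1), date(2010, 4, 1)),
]

# Elapsed time in years for each case, from claim letter to case report.
years = [(report - claim).days / 365.25 for claim, report in cases]
print(f"median: {median(years):.1f} years; range: {min(years):.1f} to {max(years):.1f} years")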
Officials we spoke with from the Smithsonian, the Review Committee, and the American Indian Museum's Board of Trustees identified challenges the museums face in carrying out the Smithsonian's repatriation requirements under the NMAI Act. These challenges fall into four main categories: Limited staff and staff turnover: For example, the Board of Trustees told us that the American Indian Museum's Repatriation Office is small and has suffered over the years from turnover and vacancies. The Natural History Museum's Repatriation Manager said that the museum had limited staff to prepare repatriation case reports, which has contributed to the length of time needed to address claims. According to the American Indian Museum's Repatriation Manager, in one instance the museum was not permitted to fill an open position for a repatriation staff member because of budgetary constraints, and this resulted in over a year of lost research time. Complex or limited information: Repatriation staff told us that complex and sometimes limited records of the Smithsonian's collections can pose a challenge. For example, the Natural History Museum's Repatriation Manager told us that records for late 19th and early 20th century archaeological excavations are often incomplete and scattered among record locations at the museum. Furthermore, the manager told us that some collections have been transferred between the Natural History Museum and non-Smithsonian museums and that, in some cases, relevant information in the original records was omitted or simplified during the transfer of items. The American Indian Museum's Repatriation Manager also told us that complex and sometimes limited records of the Smithsonian's collections can pose a challenge, but added that the museum lacks information on the origin of only a few human remains and funerary objects in its collections. Difficulties overcoming tribal issues: Review Committee and board officials said that tribes' limited resources for repatriation activities and turnover in tribal governments can pose challenges. Furthermore, the Review Committee has repeatedly expressed its concerns about whether the Natural History Museum's repatriation staff are doing enough to reach out to tribes. The committee has recommended several times between 2003 and 2010 that the museum's Repatriation Office hire a tribal liaison to conduct tribal outreach. The Repatriation Manager said, however, that a tribal liaison is not needed because repatriation staff conduct outreach and have built positive relationships with tribes. Poor data management (American Indian Museum): The American Indian Museum has historically not maintained centralized files related to its repatriation activities, according to the museum's Repatriation Manager. Instead, staff members at that museum have kept their own separate working files. As a result, repatriation staff have faced difficulties in locating case-related information. To address this challenge, the American Indian Museum adopted a new case management system in January 2011 to better organize and track its repatriation activities. The new system will allow the museum to store extensive amounts of case-related data centrally and, for example, respond more quickly to inquiries about repatriation cases, according to the museum's Repatriation Manager.
The Review Committee conducts numerous activities to implement the special committee provisions in the NMAI Act, but we found that its oversight and reporting are limited, and it faces some challenges in fulfilling its requirements under the NMAI Act. Contrary to the NMAI Act, the Review Committee does not monitor and review the American Indian Museum's inventory, identification, and repatriation activities, although it does monitor and review the Natural History Museum's inventory, identification, and repatriation activities. The Review Committee also does not submit reports to Congress on the progress of repatriation activities at the Smithsonian. In addition, although the Review Committee has heard few disputes, no independent appeals process exists to challenge the Smithsonian's cultural affiliation and repatriation decisions. Finally, the Review Committee identified challenges it faces in fulfilling its requirements under the NMAI Act. Section 12 of the NMAI Act requires the Secretary of the Smithsonian to appoint a special committee to monitor and review the inventory, identification, and return of Indian human remains and objects under the act. The law does not limit the applicability of the Review Committee to the Natural History Museum. Nevertheless, the Review Committee that the Secretary established in 1990 to meet this requirement oversees only the Natural History Museum's repatriation activities and is housed within that museum. In its legal views, the Smithsonian interprets the act as limiting the Review Committee's oversight to the Natural History Museum's repatriation activities. The Smithsonian's five reasons for its position, along with our responses, are presented below. The NMAI Act only covered items that the Smithsonian had at the time of enactment in 1989: The Smithsonian's legal views are that Congress only intended the Review Committee to advise the Smithsonian with respect to the collection of Indian human remains and funerary objects in the possession of the Smithsonian at the time of the NMAI Act's enactment in 1989. At that time, all such items were in the collections of the Natural History Museum. The Smithsonian bases this interpretation on the statutory language and a congressional committee report that said one purpose of the act was to provide a process of identification for the human remains of Native Americans that are currently in the possession of the Smithsonian Institution. However, the version of the act that this report accompanied did not become law. The congressional committee report accompanying the version of the act that became law notes that the Smithsonian is to complete an inventory of Indian human remains and funerary objects in the Smithsonian collections which, in due course, will encompass those in the existing Heye collection. Furthermore, section 12 and the act's legislative history do not indicate that the Review Committee's jurisdiction is limited to the Natural History Museum, nor do they include any language that would dictate a time when the committee's jurisdiction should begin. The language of section 12 clearly directs the Secretary to appoint a special committee to monitor and review the inventory, identification, and return of Indian human remains and objects under the NMAI Act.
The Review Committee provision in section 12 of the NMAI Act does not address the Heye collection: The Smithsonian's legal views are that Congress neither addressed nor considered whether the Review Committee's jurisdiction should extend to human remains and funerary objects obtained through the transfer of the Heye collection because, at the time, the Smithsonian was not aware that the collection contained human remains or funerary objects. However, the act's legislative history demonstrates that Congress believed the collection contained human remains and funerary objects because it discussed an inventory of the human remains and funerary objects in the Heye collection in the congressional committee report accompanying the version of the act that became law. The American Indian Museum did not exist at the time of enactment: Since the American Indian Museum did not exist at the time of the act's enactment in 1989, the Smithsonian's legal view is that it did not have any collections that could be subject to the act's repatriation provisions. However, 6 months before the act's passage, the Museum of the American Indian in New York and the Smithsonian entered into a memorandum of understanding to transfer the museum's assets to the Smithsonian. When Congress passed the NMAI Act in 1989, it knew that the new American Indian Museum would house the Heye collection. Moreover, the act established the American Indian Museum and therefore it existed as of the date the law was enacted. The American Indian Museum did not exist when the Review Committee began its work: Because the Review Committee by statute was to begin its repatriation review process within 120 days of the act's passage, the Smithsonian's legal view is that Congress could not have intended its charge to extend to the American Indian Museum's collection since the museum did not exist 120 days after the act's passage. However, section 12 only required the Secretary to appoint the Review Committee within 120 days of the act's passage; section 12 is silent as to when the committee was to begin its work. Moreover, as stated above, the act established the American Indian Museum and therefore it existed as of the date the law was enacted. The NMAI Act provides the Board of Trustees with sole authority over the museum's collections: The Smithsonian's legal view is that by granting the American Indian Museum Board of Trustees sole authority over the museum's new collection, Congress intended for the board to have independent, plenary authority over its collections, subject only to the general policies of the Board of Regents. In the Smithsonian's legal view, given this intention, Congress would not have provided the Board of Trustees with such broad powers while, at the same time, subjecting it to the oversight of an independent review committee. We asked Smithsonian officials to provide examples of how the Review Committee would interfere with the Board of Trustees' sole authority if the committee reviewed the American Indian Museum case reports and heard disputes, but none were provided. We therefore believe that the Review Committee's monitoring and review of the American Indian Museum's repatriation activities would not interfere with the board's sole authority over the museum's collections and, in particular, its policies to repatriate to non-federally recognized tribes and to make cultural affiliations using a "reasonable belief" standard.
This is because the Review Committee’s role is only advisory, as acknowledged by the Smithsonian. Even though the Review Committee has not been overseeing the repatriation activities of the American Indian Museum, since its establishment its Board of Trustees has overseen repatriation activities and has taken an active role in the repatriation process. For example, in 1991, the board adopted a repatriation policy that assigned specific authority and responsibility for each aspect of the repatriation process. It has also overseen the activities of the museum’s own Repatriation Office at board meetings. For example, board members told us they review and comment on repatriation case reports, vote to approve each report, sometimes contribute to case reports, and have been involved in inventorying museum collections. In addition, the board has, at times, created a Repatriation Committee composed of a subset of board members to further its oversight of the museum’s repatriation program. No dispute had been presented to the board for resolution through December 31, 2010, but in 2009 it did help resolve a challenge to a repatriation recommendation. Should there be a dispute in the future, the board told us that it plans to rely on its recently adopted process for initiating an ad hoc Special Review Committee to resolve disputes. The process states that a Special Review Committee would be convened by the board’s Repatriation Committee. To fulfill its responsibility under the NMAI Act to monitor and review the inventory, identification, and return of Indian human remains and objects, the Review Committee has performed a number of activities to oversee the Natural History Museum’s repatriation process including the following: Assessing the Natural History Museum’s progress in implementing the act: The committee generally meets twice annually with Repatriation Office staff and sometimes with other museum staff, including management, to discuss the status of ongoing claims and other repatriation activities. During the meetings, case officers report their interactions with tribes as they address the tribes’ claims for the repatriation of objects or human remains. The meetings also allow committee members to review candidates to fill vacant committee seats, discuss the status of personnel in the Repatriation Office, and raise concerns regarding the repatriation process with the office program manager. Reviewing museum case reports: The committee’s reviews of the Repatriation Office’s repatriation case reports are intended to offer an “independent appraisal of whether the case reports provide a fair and objective consideration and assessment of all relevant information,” according to the Review Committee annual report. The committee examines the methodology and information the case officers use during their research, assesses their conclusions, and, if necessary, provides editorial suggestions to clarify and improve the reports. The committee has been provided courtesy copies of some case reports prepared by the American Indian Museum’s Repatriation Office. Reporting annually to the Secretary: The committee’s reports include concerns it has regarding the repatriation process at the Natural History Museum and updates on disputes, or potential disputes, over cultural affiliation. The reports also provide information regarding conferences or workshops the committee has attended or organized and coordination efforts, if any, the Review Committee has had with the American Indian Museum. 
Hearing and helping resolve disputes: The committee hears disputes brought by tribes and other interested parties regarding repatriation decisions by the Natural History Museum and makes recommendations for resolving these disputes to the Secretary of the Smithsonian. It has heard two such disputes, which we describe later in this report. In a separate case, the committee reported that it avoided a potential dispute by arranging a consultation among an Oregon tribe that had complained about a case report that did not recommend repatriation to it, other potentially affiliated Oregon tribes, and expert consultants. According to the Review Committee, the meeting proved to be extremely helpful and provided new information for the Review Committee to consider. As a result, the Repatriation Office decided to rewrite its report on the remains and reassess its recommendation. The human remains and funerary objects were later found to be culturally affiliated with two tribes. Conducting tribal outreach: The committee has a long-standing policy of interacting with Native American communities and relevant organizations. For example, committee members have attended NAGPRA Review Committee meetings and conferences to explain the Smithsonian repatriation process to tribes. The committee also provided support for a 1995 repatriation workshop organized by the American Indian Museum and conducted a survey in 2001 of tribes in California to determine their level of interest in having the Natural History Museum's Repatriation Office conduct workshops on Smithsonian repatriation. Although section 12 of the NMAI Act requires the Secretary, at the conclusion of the work of the Review Committee, to so certify by report to Congress, there is no annual reporting requirement similar to the one that applies to the NAGPRA Review Committee. As we stated earlier, in 1989, it was estimated that the Smithsonian Review Committee would conclude its work in about 5 years and cease to exist at the end of fiscal year 1995. Yet the committee's monitoring and review of repatriation activities at the Natural History Museum has been ongoing since the committee's establishment in 1990. In fact, the Smithsonian is not required to report annually to Congress outside of the annual budget process, and Smithsonian officials said the Smithsonian has not reported to Congress on repatriation activities on a regular basis since the NMAI Act was enacted. Furthermore, the Board of Trustees of the American Indian Museum does not have a formal reporting process to inform the Secretary of the Smithsonian or the Smithsonian's Board of Regents of its activities. As a result, over the last 21 years, policymakers have not received regular information to assess the effectiveness of the Smithsonian's efforts to repatriate the Indian human remains and objects in its collections. We believe that providing such information would be an appropriate role for the Smithsonian's Review Committee, similar to the role of the NAGPRA Review Committee. As stated above, the Review Committee is also responsible for hearing disputes at the Natural History Museum with respect to the return of Indian human remains or objects and for making nonbinding recommendations to the Secretary of the Smithsonian. Since the Review Committee was established in 1990, only two disputes have been brought before it.
In 1995, a tribe disputed the Natural History Museum Repatriation Office's finding that the human remains and funerary objects the tribe had claimed were culturally unidentifiable, as well as its recommendation that they be held until the museum could determine their cultural affiliation. The Review Committee reviewed the case report, requested written summaries of the positions of the museum and the tribe, heard testimony presented by the museum and the tribe, and recommended that the human remains be repatriated to the tribe and that other potentially affiliated tribes be notified of the decision. Several of the notified tribes disputed the repatriation recommendation, and the tribes reached an agreement to jointly repatriate the human remains and funerary objects. In the end, the Secretary decided to implement the committee's recommendation and repatriated the remains and funerary objects to the requesting tribe and also to four additional tribes in 1997. In 2009, a tribal group disputed the Repatriation Office's finding that two items it had claimed were not culturally affiliated with the tribes within the group, and that there was insufficient evidence to determine that four additional items met the statutory definitions of sacred objects and objects of cultural patrimony. The Review Committee again reviewed the case report, requested position statements from the tribal group and the museum, and heard testimony. It unanimously agreed that the Natural History Museum's cultural affiliation determination for the two items was incorrect and that all six items met the statutory definitions of sacred objects and objects of cultural patrimony. On the basis of its review of this evidence, the Review Committee recommended to the Secretary that the items be offered for repatriation. However, the Under Secretary for Science decided that the group had not presented sufficient evidence to establish by the required legal standard that the items met the statutory definitions or were culturally affiliated with any tribe in the group, so the Smithsonian would retain the items. In the letter informing the group of its decision, the Under Secretary stated that although he respected the Review Committee's recommendation and understood why the committee "may have given more weight to general assertions provided by tribal leaders," the tribal group had not given sufficient evidence to prove its claim. An official from the tribal group involved in the second dispute told us that the tribal group has considered challenging the Secretary's decision, but it has no recourse because the Smithsonian does not have an appeals process and cannot be sued in federal court for the decision. The American Indian Museum's Board of Trustees, which makes final repatriation decisions for that museum, established an appeals process in 2010 whereby, in the event of a dispute, the board would appoint five individuals to an ad hoc Special Review Committee to hear the dispute. However, this process lacks independence because it relies on decision makers overseeing their own decisions. The Smithsonian also cannot be sued under the NMAI Act or the Administrative Procedure Act, the law commonly used to sue federal agencies. Currently, the Smithsonian Board of Regents is the only entity within the Smithsonian organization that has the authority to oversee the decisions of both the Secretary and the NMAI Board of Trustees, but there is no existing process to appeal these decisions to the Board of Regents.
In contrast, under NAGPRA, tribes can use the Administrative Procedure Act or section 15 of NAGPRA to challenge a federal agency's repatriation decision if they believe it violates the act. The Review Committee has identified two challenges it faces in implementing its responsibilities under the NMAI Act. First, the Review Committee has documented as a challenge its inability to oversee the American Indian Museum's repatriation activities. The relationship between the Review Committee and the American Indian Museum has been mixed. Since its establishment, the Review Committee has maintained that the NMAI Act mandates a single review committee for monitoring repatriation activities at all museums and units of the Smithsonian Institution. The American Indian and Natural History Museums do coordinate on some issues, such as conducting tribal consultations and providing funding for consultation and repatriation expenses for tribes. However, the American Indian Museum's board has consistently stressed its independence from the Review Committee with regard to monitoring the repatriation process. According to the Review Committee, there has been no direct communication between the committee and the Board of Trustees as of December 31, 2010. According to the Review Committee's annual reports, it has taken steps to reach out to the American Indian Museum and offer some oversight of its repatriation program. For example, during the late 1990s, the Review Committee's annual reports indicate that the committee requested, received, and reviewed courtesy copies of some American Indian Museum case reports. The committee suggested that the Natural History Museum's Repatriation Office coordinate more closely with the American Indian Museum, other Smithsonian museums, and other institutions to help ensure consistency in repatriation policy. The Review Committee also requested that it be much more involved in the American Indian Museum's repatriation process to meet its mandate. In its 2000 annual report, the committee informed the Secretary that it had met with resistance in trying to monitor the American Indian Museum's repatriation activities, emphasizing its belief that its mandate encompassed the repatriation activities of the museum. The committee further stated that if it could not perform these duties, the American Indian Museum would continue to be "the only museum in the United States that receives federal funding and not subject to a monitoring of its repatriation activities by an independent committee without a direct interest in activities other than repatriation." The Review Committee also reported in 2005 and 2007 that it had conducted little or no monitoring of the American Indian Museum's repatriation activities. However, dialogue has opened up recently between the two museums, with potential for the relationship to expand, according to the Chair of the Review Committee. Furthermore, according to the Review Committee, the current Directors of the American Indian and Natural History Museums have expressed interest in establishing a more collaborative relationship between the two museums' repatriation programs. The second challenge identified by the Review Committee is a lack of consistent administrative support.
The committee has experienced two lengthy instances during which it did not have a coordinator, a position that handles a variety of tasks, including arranging biannual meetings (travel, reimbursements to members), drafting minutes of the meetings (on which the annual reports to the Secretary are largely based), and managing the process for filling open seats on the committee. In the first instance, in July 2005, the coordinator resigned and the Review Committee operated without a coordinator until October 2006. In its 2005 annual report, the Review Committee stated its concern over the length of time it took to fill this position and the negative effect that not having administrative support had on its work. For example, the committee stated that without a coordinator, it was not possible for it to prepare formal minutes for meetings in 2006. Instead, a brief outline of the meeting was recorded after a new coordinator was hired in October 2006. In the second instance, according to the Natural History Museum's Repatriation Manager, the coordinator was released by the Smithsonian in December 2007 because of a reduction in workforce at the Smithsonian. A museum employee was transferred to the coordinator position that same month but later resigned in February 2008, and the Smithsonian did not hire a new coordinator until March 2009, resulting in an additional year without a coordinator. Although the 2007 meeting minutes had been transcribed by the time the new coordinator was hired, as of December 31, 2010, the coordinator was in the process of preparing the minutes for 2009 and 2010. There are also no minutes for the 2008 meetings, and the recordings for those meetings have not yet been transcribed. The committee has said that not having a coordinator from 2008 to 2009 made it difficult to maintain documentation of its activities and make the logistical arrangements necessary for the committee to function. According to Smithsonian officials, during the time that the Review Committee was without a coordinator, its travel, reimbursement, meeting arrangements, and the process for filling open seats were facilitated by museum staff in coordination with the Review Committee. Smithsonian officials added that they offered to pay for transcription of meeting minutes, but the Review Committee decided to wait until a coordinator was in place to transcribe the tapes. The Smithsonian estimates that, of the items offered for repatriation, it has repatriated about three-quarters of the Indian human remains, about half of the funerary objects, and almost all the sacred objects and objects of cultural patrimony. Some items have not been repatriated for a variety of reasons, including tribes' lack of resources, cultural beliefs, and tribal government issues. In addition, the Smithsonian has not repatriated some human remains and funerary objects that it has determined to be culturally unidentifiable, and it does not have a policy on how it will undertake the ultimate disposition of these items. The Smithsonian estimates that, of the items offered for repatriation, as of December 31, 2010, it has repatriated about three-quarters (4,330) of the Indian human remains, about half (99,550) of the funerary objects, and nearly all (1,140) sacred objects and objects of cultural patrimony (see table 4).
Officials from several tribes we spoke with that had repatriation experience with the American Indian and Natural History Museums expressed overall satisfaction with how the Smithsonian facilitated the return of human remains and objects once offered for repatriation. An official with one tribe told us that museum staff provided guidance for submitting the repatriation claim, such as an example of a claim letter to use as a template for his tribe's official request for the human remains. Officials with other tribes told us they appreciated that the museum staffs showed understanding of the tribes' cultural requirements by taking great care to properly handle and transfer the human remains to a burial site. An official from one tribe described how the museum provided special training in addition to coordinating the repatriation activities. In two other instances, tribal officials said some museum staff attended repatriation ceremonies. Officials from several tribes we spoke with also said they had received funding that assisted them in carrying out repatriation activities with the museums. Many successful repatriations have occurred, but approximately 1,650 human remains, 112,670 funerary objects, and 100 sacred objects offered for repatriation have not been repatriated. Tribes generally have not pursued or completed these repatriations because of a lack of resources, cultural beliefs, tribal government issues, the time needed for intertribal coordination, and the need for pesticide testing. Lack of resources: Officials from two tribes told us that, at times, their tribes have lacked the necessary staff to facilitate the return of human remains and funerary objects affiliated with them. Officials from two other tribes said that their tribes did not have an appropriate location to serve as a final resting place for the items offered for return, so they have been unable to proceed with the repatriation process. Cultural beliefs: In some cases, tribal cultural beliefs prevent repatriation. For example, one tribal official told us that repatriation can have harmful effects on the tribe, including on the deceased tribal members associated with the remains or objects. In another instance, a working group of four tribes said that it will not repatriate offered items because its ongoing dispute with the Natural History Museum has created a situation in which it is spiritually too dangerous for the tribes to deal with the human remains and funerary objects that have been offered for repatriation. Tribal government issues: In one case, a tribe had a change in leadership that effectively halted any repatriation efforts. In another case, a tribal official told us that the tribe was experiencing political turmoil, and as a result, it was not a good time for the tribe to make decisions, such as deciding to apply for a repatriation grant. Time needed for intertribal coordination: According to museum officials, in a number of cases, the museums have offered the same items to multiple tribes, and time is needed for those tribes to coordinate and determine the disposition of the items. In another case, human remains were offered to one tribe, but a tribal official explained that the tribe needed time to coordinate with other tribes closely linked to the tribe's ancestral homeland to determine an appropriate burial site.
Need for pesticide testing: The American Indian Museum Repatriation Manager told us that, in the 1990s, the museum offered 96 objects to one tribe as sacred objects, but these have not been repatriated because of the possibility of pesticide contamination. The manager said that because the museum lacked the necessary technology to test the objects for pesticides at the time, the tribe placed a moratorium on this repatriation until the museum could provide adequate assurances that the objects were safe to handle. In these particular situations where the tribes have not yet repatriated items offered to them, the American Indian Museum Repatriation Manager said that the museum will maintain stewardship of the items or pursue other options. For example, in cases where tribes do not pursue repatriation, the museum may ask whether the tribe is amenable to having other tribes repatriate the items. The Natural History Museum's Repatriation Program Manager said that on multiple occasions, his office has attempted to follow up with tribes to determine if they are ready to repatriate human remains and objects offered to them, and plans to wait for these tribes to respond. The NMAI Act requires the Smithsonian, upon request, to repatriate culturally affiliated Indian and Native Hawaiian human remains and funerary objects. The act does not discuss how to handle human remains and objects that cannot be culturally affiliated, otherwise referred to as culturally unidentifiable items. Both museums have repatriation policies, but neither policy addresses culturally unidentifiable items. In contrast, a recent NAGPRA regulation that took effect in May 2010 requires, among other things, federal agencies and museums to consult with federally recognized Indian tribes and Native Hawaiian organizations from whose tribal or aboriginal lands the remains were removed before offering to transfer control of the culturally unidentifiable human remains. We found that both museums hold items they cannot culturally affiliate, but they have treated these items differently. Natural History Museum officials stated that about 340 human remains and about 310 funerary objects are culturally unidentifiable and will be retained by the museum until additional information can be used to determine affiliation. In contrast, the Repatriation Manager at the American Indian Museum stated that the museum cannot always determine the cultural affiliation for human remains and associated funerary objects in its collection; however, through consultation, many of these cases have been resolved by tribes stepping forward and serving a custodial role in the respectful treatment and disposition of these items. The manager further stated that the American Indian Museum's philosophy is to ultimately not have any human remains or associated funerary objects within its collection, and the Repatriation Office will continue consulting with tribes and researching viable options regarding the respectful treatment and disposition of all human remains and associated funerary objects within its collection. Furthermore, according to the Chair of the Board of Trustees' Repatriation Committee, the highest priority of the board is the expeditious return of all human remains and associated funerary objects in the museum's collection to culturally affiliated entities regardless of geography or sociopolitical borders.
Museum policies and Smithsonian officials state that, although not required to, the Smithsonian generally looks to NAGPRA and the NAGPRA regulations as a guide to its repatriation process, where appropriate. However, in a May 2010 letter commenting on the NAGPRA regulation on disposition of culturally unidentifiable remains, the Directors of the American Indian and Natural History Museums cited overall disagreement with the regulation, suggesting that it "favors speed and efficiency in making these dispositions at the expense of accuracy." The Directors also described the potential for remains to be transferred to communities other than the communities of origin based on the geographic parameters outlined in the regulation. They noted that such transfers could affect the working relationships that the museums' staff develop with tribe members. Furthermore, they stated that reaching out to tribes to offer remains that were located on their current or historical land is not an ideal approach because tribes submit repatriation requests when they are ready to engage in repatriation activities. Contacting tribes in the manner outlined in the recent NAGPRA regulation, according to the Directors, could push certain tribes into repatriation claims that they may not be capable of facilitating. During our review, we spoke to officials from two tribes interested in receiving items the Smithsonian has determined to be culturally unidentifiable. One tribal official believes that all Native Americans are brothers and that, on this basis alone, all Indian human remains should be offered for repatriation to a requesting tribe. In addition, the American Indian Museum's Board of Trustees told us that one tribe has come forward and offered to take custody of all human remains the museum has determined to be culturally unidentifiable, and rebury them on a special plot on its reservation. In the absence of a Smithsonian policy for these human remains and objects, the Smithsonian's actions in handling culturally unidentifiable items lack transparency for both tribes and policymakers. Tribes do not know how culturally unidentifiable items are to be handled, and they cannot hold the Smithsonian accountable to a particular policy. Officials from both museums, however, suggested that the number of culturally unidentifiable Indian human remains in their collections could decrease as technology improves to provide new evidence of cultural affiliation, at which point the Smithsonian could have the data necessary to determine a cultural affiliation. The Smithsonian has inventoried, identified, and repatriated thousands of Indian human remains. This represents significant progress toward fulfilling one of the nation's important duties to its Native people. However, at the rate that the Smithsonian is identifying and culturally affiliating the human remains and objects in its collections, it may take decades more for it to complete this process. This process is lengthy in part because the Smithsonian believes that its legal and fiduciary duties require it to base every cultural affiliation decision on the best available scientific and historical documentation. The current process is time consuming and resource intensive, even though, in some cases, it may be possible to make determinations more quickly.
In addition, the approach that the Smithsonian has taken to establish a Review Committee to monitor and review inventory, identification, and return of Indian human remains and objects does not provide the oversight specified in section 12 of the NMAI Act. The act gives the Review Committee jurisdiction over all Smithsonian museums, and the Smithsonian's reasons for limiting its jurisdiction to the Natural History Museum are unpersuasive. Because the Review Committee is only advisory and does not set policy or make binding decisions, we believe that it could monitor and review the American Indian Museum's repatriation activities without interfering with the sole authority of its Board of Trustees. Moreover, because the Review Committee, unlike the NAGPRA Review Committee, is not required to report annually to Congress on Smithsonian repatriation activities, Congress continues to lack information on the progress the Smithsonian is making in implementing the NMAI Act. Congress has received little information on the Smithsonian's progress over the last 21 years, and given the amount of additional time the Smithsonian is likely to need to fulfill its repatriation responsibilities, there is no mechanism for Congress to receive regular progress reports in the future. Also, at the Smithsonian, there is no independent administrative appeals process for tribes that believe the decisions by the Secretary or the Board of Trustees do not satisfy the NMAI Act's requirements. Given that the Administrative Procedure Act does not apply to the Smithsonian, judicial review may not be practical. Currently, the Smithsonian's Board of Regents is the only body whose purview includes oversight of the decisions made by the Secretary of the Smithsonian as well as the American Indian Museum's Board of Trustees. Without an independent appeals process, tribes have no way of holding the Secretary and the Board of Trustees accountable for repatriation decisions. Finally, the NMAI Act requires the Smithsonian to, upon request, repatriate culturally affiliated Indian and Native Hawaiian human remains and objects, but it is silent on the treatment of items the Smithsonian cannot culturally affiliate. The Smithsonian has not yet clearly articulated its plans for these culturally unidentifiable items. In the absence of such plans, the final disposition of these items is not clear. Tribes or other interested parties thus have no way to hold the Smithsonian accountable for decisions about how or when to retain or repatriate these items. Congress may wish to consider ways to expedite the Smithsonian's repatriation process, including, but not limited to, directing the Smithsonian to make cultural affiliation determinations as efficiently and effectively as possible. We are recommending that the Smithsonian Institution's Board of Regents take the following four actions. Direct the Secretary of the Smithsonian to expand the Review Committee's jurisdiction to include the American Indian Museum, as required by the NMAI Act, to improve oversight of Smithsonian repatriation activities. With this expanded role, the Board of Regents and the Secretary should also consider the most appropriate location for the Review Committee within the Smithsonian's organizational structure.
Through the Secretary, direct the Review Committee to report annually to Congress on the Smithsonian's implementation of its repatriation requirements in the NMAI Act, in order to provide Congress with regular information on the Smithsonian's repatriation activities. Establish an independent administrative appeals process, through either the Board of Regents or another entity that can make binding decisions for the Smithsonian Institution, so that Indian tribes and Native Hawaiian organizations can appeal cultural affiliation and repatriation decisions made by the Secretary and the Board of Trustees. Direct the Secretary and the American Indian Museum's Board of Trustees to develop policies for the Natural History and American Indian Museums for the handling of items in their collections that cannot be culturally affiliated, in order to provide for a clear and transparent repatriation process. We provided a copy of this report for review and comment to the Smithsonian Institution. In its written comments, the Smithsonian agreed with the report's findings and recommendations and identified actions that it plans to consider to respond to our recommendations. The Smithsonian's written comments are reprinted in appendix III. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Smithsonian, and other interested parties. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This appendix details the methods we used to examine the Smithsonian Institution's implementation of the repatriation requirements in the National Museum of the American Indian Act (NMAI Act). We were asked to determine (1) the extent to which the Smithsonian has fulfilled its repatriation requirements and what challenges it faces, if any, in fulfilling its requirements; (2) how the special review committee provisions in the NMAI Act have been implemented and the challenges the committee faces, if any, in fulfilling its requirements; and (3) the number of human remains and objects that have been repatriated and the reasons for those that have not. For all three objectives, we examined the NMAI Act's implementation at the two Smithsonian museums with collections subject to the act—the American Indian and Natural History Museums. We reviewed the NMAI Act, the Native American Graves Protection and Repatriation Act (NAGPRA) and its implementing regulations, and the museums' repatriation policies. We interviewed officials from the museums' respective repatriation offices and the Smithsonian's Office of General Counsel on the repatriation process. We obtained, in writing, the Smithsonian's legal views on how it interprets the NMAI Act and also received an additional memorandum regarding those views. We reviewed museum data on the total number of human remains and objects the museums have had in their collections through December 31, 2010. To check the reliability of these data, we interviewed officials and discussed the methodology used in collecting and maintaining these data.
Smithsonian officials told us they face a number of challenges in estimating the total number of Indian human remains and objects. For example, the Natural History Museum's Repatriation Office Manager and the American Indian Museum's Curator of Collections Research and Documentation said that their records over time contained different numbers. We also cross-checked the data across multiple source documents and tried to reconcile any differences through discussions with museum staff. Given the challenges with the data, we present the numbers in this report as estimates, rounded to the nearest ten. The use of rounding did not materially affect our findings, conclusions, and recommendations because of the large number of human remains and objects. We believe that the data are sufficiently reliable to accurately portray broad trends showing the Smithsonian's progress in implementing the NMAI Act's repatriation requirements. In addition, during our review, for all three objectives, we traveled to several locations to attend repatriation conferences and visit with tribes. Wisconsin: We attended the National Association of Tribal Historic Preservation Officers 2010 Annual Conference in Green Bay, Wisconsin, and presented the findings of our July 2010 report on federal agency compliance with NAGPRA. During the conference, we met with several tribes interested in repatriation issues. Oklahoma: We interviewed the Cheyenne Tribe of Oklahoma and the Choctaw Nation of Oklahoma, both of which have repatriated human remains from the Natural History Museum. In addition, we attended a NAGPRA conference held in Oklahoma City that included an address by the Director of the American Indian Museum on repatriation activities at that museum. Alaska: In Anchorage, Alaska, we interviewed the Director of the Smithsonian Arctic Studies Center, which is housed within the Alaska State Museum. We also interviewed the Director of the Anchorage Museum, who formerly handled repatriation activities for the Kaw Nation of Oklahoma and also served as the Repatriation Manager of the American Indian Museum. We interviewed an official with the Ukpeagvik Inupiat Corporation who participated in a repatriation with the Natural History Museum. We met with an official from the Native American Rights Fund and one from the Department of the Interior's Bureau of Land Management's Alaska State Office to discuss repatriation issues in Alaska. In Fairbanks, Alaska, we attended the Alaska Federation of Natives 2010 Annual Conference. During the conference, we interviewed members of the Native Village of Crooked Creek to discuss their repatriation experiences. Washington, D.C.: We attended the NAGPRA at 20 conference commemorating NAGPRA's 20th anniversary. As part of this conference, we attended a panel discussion that included the manager of the American Indian Museum Repatriation Office and the current and former managers of the Natural History Museum Repatriation Office. The panel focused on the differences between the NMAI Act and NAGPRA. In addition to the tribes we interviewed during our site visits, we contacted 10 tribes that had completed repatriations with both the American Indian and Natural History Museums and interviewed 2 of them on their experiences with these museums. To address our first objective, we reviewed museum summaries and inventories to determine their general contents and whether they were prepared within the deadlines in the act.
The American Indian Museum prepared inventories in 1993 and 1995 and was able to provide one example of its inventories, along with sample cover letters for each year (the American Indian Museum did not prepare separate summaries); the Natural History Museum was able to provide copies of all of its summaries and inventories. We reviewed (1) the American Indian Museum Repatriation Office's progress reports to the museum's Board of Trustees, (2) the Natural History Museum Repatriation Office's progress reports to the Review Committee, and (3) the Review Committee's annual reports to the Secretary of the Smithsonian and meeting minutes. We obtained and reviewed all repatriation claims submitted to the Smithsonian and analyzed all 171 case reports prepared by repatriation staff at both the American Indian and Natural History Museums to collect information about the museums' repatriation activities, including the number of catalog numbers considered in each report. Specifically, where it was available in case reports, we collected information that falls into the following three categories:

Repatriation claim: For case reports that include information about a repatriation claim for one or more items addressed in the report, we recorded the name of the requesting entity or entities, the first date of contact between the Smithsonian and the requesting tribe, the date of the official claim letter, and the date of the report, as well as the date of any amendments or addenda to the report. We also recorded descriptive information in the case reports about factors that affected the timeliness with which the Smithsonian addressed the claim.

Culturally unidentified remains and funerary objects: For human remains and funerary objects explicitly identified as culturally unidentified, we recorded the total number of catalog numbers that fall into this category and, where available, the approximate number of human remains and funerary objects represented by these catalog numbers.

Recommendations regarding repatriation: In cases where the case report includes a recommendation that the Smithsonian repatriate human remains or objects to specific tribes or consult with specific tribes regarding the disposition of remains or objects, we recorded the names of those tribes. We also recorded whether or not the case report recommends that any human remains or objects be retained by the Smithsonian.

Each case report was reviewed independently by two analysts, answers were recorded, results were compared by a third reviewer, and any differences were then reconciled. Using this information, we calculated the length of time from the date of a tribal claim to the date of a case report, using month and year information where available. In the couple of instances in which case reports had no month, we imputed January. We used the date of the official claim letter as the basis for the report-processing times because information on when the Smithsonian actively started working on each claim was not routinely available. As a result, the processing times include the time that the claims were inactive while they were awaiting active consideration. For the Natural History Museum, we used the date that the NMAI Act was originally enacted—November 28, 1989—for claims submitted prior to that time. For the American Indian Museum, we used the date the museum officially took control of its collections—June 1, 1990—for claims submitted prior to that time.
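To make this date handling concrete, the following is a minimal sketch of the elapsed-time calculation, assuming simple year/month records; the data structures, function names, and sample values are illustrative and are not drawn from our actual analysis files.

```python
from datetime import date

# Claim dates are floored to the date each museum became subject to the
# requirements: November 28, 1989, for the Natural History Museum and
# June 1, 1990, for the American Indian Museum (dictionary keys are
# hypothetical shorthand).
CLAIM_FLOORS = {"natural_history": date(1989, 11, 28),
                "american_indian": date(1990, 6, 1)}

def to_date(year, month=None):
    """Build a date from year/month records, imputing January if no month."""
    return date(year, month if month else 1, 1)

def processing_years(museum, claim_ym, report_ym):
    """Years from official claim letter to case report, floored per museum."""
    start = max(to_date(*claim_ym), CLAIM_FLOORS[museum])
    end = to_date(*report_ym)
    return round((end - start).days / 365.25, 1)

# A claim letter dated March 1988 to the Natural History Museum is treated
# as filed on November 28, 1989; a report with no recorded month is
# imputed to January of its year.
print(processing_years("natural_history", (1988, 3), (1992, None)))  # 2.1
```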
We supplemented the case report review by reviewing all claim letters submitted to both museums from enactment through December 2010. In the few instances in which case reports did not document a claim letter but our claim letter review showed that a claim had actually been submitted, we added the date to our time frames. We also interviewed officials from the American Indian and Natural History Museums, members of the American Indian Museum's Board of Trustees and the Review Committee, and tribes that have submitted claims for remains or objects held by the Smithsonian to determine any challenges the Smithsonian faces in implementing the NMAI Act's repatriation requirements. For purposes of our analysis, intercoder reliability was measured as the percent agreement between the independent coders, with 70 percent agreement used as the threshold for acceptable reliability. Percent agreement was an appropriate measure of intercoder reliability in our case because most of the variables coded in this exercise were count variables or nominal variables with multiple possible responses, which reduces the likelihood of agreement through mere chance. Thirteen of the 15 items evaluated achieved an acceptable level of agreement, between 74 and 97 percent. For the 2 items in which agreement was less than 70 percent, we examined the pattern of errors, and the reviewers met to discuss and were able to resolve these inconsistencies. Ultimately, most items, including those 2 items, were not systematically reported on but rather were used for anecdotal purposes. For our second objective, we examined the Review Committee charter and bylaws. We analyzed the repatriation offices' progress reports and the Review Committee's annual reports, meeting minutes, and other documents. To document the activities and challenges of the Review Committee, we examined comments made by Review Committee members on repatriation case reports, attended portions of two Review Committee meetings in Washington, D.C., in December 2009 and December 2010, and interviewed 6 of the 7 Review Committee members at each meeting. In addition, we received written comments from the full Review Committee. Because the Board of Trustees has performed oversight of the American Indian Museum's repatriation activities, we interviewed 5 of the 23 board members, including 4 of the 8 who make up the board's Repatriation Committee. We met with these 5 members because they were available to meet between sessions of a board meeting. We also received written comments from the full board. In addition, we reviewed the Administrative Procedure Act and case law interpreting it. For our third objective, we analyzed museum data as well as specific lists prepared by the museums of the human remains and objects in their collections that were offered for repatriation but never repatriated. We contacted 14 of the 68 tribes or tribal entities to which these human remains and objects were culturally affiliated—8 for the American Indian Museum and 6 for the Natural History Museum—and interviewed 5 of them to determine why the items offered had not been repatriated. The other 9 tribes that we contacted did not respond to our inquiries. We chose tribes so as to ensure geographic diversity, targeting those with a substantial number of items offered for repatriation.
Where items were offered to multiple tribes (of which there were numerous cases), we included at least one of those tribes. We reviewed the repatriation policies of both museums to determine if they covered culturally unidentifiable items. We interviewed Smithsonian officials and both Repatriation Offices to determine if they have a policy for handling culturally unidentifiable items. We interviewed and submitted written questions to both the Review Committee and the board about the disposition of culturally unidentifiable items, and we reviewed the Department of the Interior's regulation on culturally unidentifiable items under NAGPRA. We conducted this performance audit from July 2010 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides (1) overall time frames for completing case reports and factors affecting the time frames and (2) specific details on the processing times for repatriation case reports. The Smithsonian completed 171 case reports from November 28, 1989, through December 31, 2010—at least 126 were completed in response to a claim. The remainder were completed proactively, without a claim. For the 41 case reports prepared by the American Indian Museum through 2010 for which we identified a claim, we found that it took a median of 1.5 years from the date of an official claim letter to the date a draft case report was submitted to the museum's Board of Trustees for final approval and a repatriation decision. These times ranged from 3 months to 8.2 years. For the 85 claim-based case reports prepared by the Natural History Museum through 2010, we found that it took a median of 2.8 years from the date of an official claim letter to the date a final case report was approved by the Secretary of the Smithsonian. These times ranged from 1 month to 18.3 years. We used the date of the official claim letter as the basis for the report-processing times because information on when the Smithsonian actively started working on each claim was not routinely available. As a result, the processing times include the time that the claims were pending while they were awaiting active consideration. We identified examples of claim letters remaining in the queue awaiting active consideration for months and even years before the museums initiated a case report. Case reports prepared by the Smithsonian ranged in length from a 3-page report about the remains of a single individual to a 631-page report that addressed the remains of more than 1,200 individuals and more than 14,400 funerary objects held by the Natural History Museum. On average, case reports prepared by the Natural History Museum considered 33 catalog numbers, while case reports prepared by the American Indian Museum considered 11 catalog numbers. We identified a number of factors that have affected the length of this process, based on information in the case reports. For example:

Repatriation offices were not yet established: Several claims were submitted to the American Indian and Natural History Museums before they established repatriation offices in November 1993 and September 1991, respectively.
Staffing changes occurred: We identified examples where staff responsible for preparing a case report left the Smithsonian, resulting in delays to the case report preparation process.

Waiting for tribal response: In some cases, the museums did not receive needed responses or information from the requesting tribe in a timely manner. For example, in one case a tribe submitted a claim for human remains held by the Natural History Museum and subsequently told the museum that it was opposed to documentation of the remains and asked that the documentation be halted. The museum sought clarification from the tribe on how to proceed, and about 5 months passed before the tribe agreed to allow the museum to continue documentation.

Competing claims: The Natural History Museum gives priority to claims for named individuals. In some cases, a tribe may submit a claim for all human remains and objects potentially affiliated with it, and, later on, a lineal descendant may submit a competing claim. In those cases, the museum may halt its work on the original claim to work on the competing claim.

Museum priorities: Previously, the museums prioritized claims for human remains, resulting in some delays in addressing claims for sacred objects and objects of cultural patrimony. Currently, the Natural History Museum prioritizes claims for named individuals, but otherwise both museums address claims in the order they are received.

In a couple of cases, the museums expedited the case report process, and in other cases a museum conducted a significant amount of work before receiving an official claim, which may have reduced the length of time needed to complete the report. For example, in one instance, the Natural History Museum agreed to a tribal request to expedite the repatriation process so that the tribe could complete the process at the same time it completed repatriations from the National Park Service. Furthermore, both museums have proactively initiated research and produced case reports about some of the human remains in their collections and, in some instances, later received claims for these human remains. These case reports are included in the time frames provided below. Table 5 shows the specific details on the processing times for repatriation case reports completed through December 31, 2010. In addition to those named above, Jeffery D. Malcolm, Assistant Director; Pamela Davidson; Emily Hanawalt; Cheryl Harris; Rich Johnson; Mark Keenan; Sandra Kerr; Anita Lee; Ruben Montes de Oca; Ben Shouse; and Jeanette M. Soares made key contributions to this report.

The National Museum of the American Indian Act of 1989 (NMAI Act), as amended in 1996, generally requires the Smithsonian Institution to inventory and identify the origins of its Indian human remains and objects placed with them (funerary objects) and repatriate them to culturally affiliated Indian tribes upon request. It also creates a special committee to oversee this process. According to the Smithsonian, two of its museums—the American Indian and the Natural History Museums—have items that are subject to the act. GAO was asked to determine (1) the extent to which the Smithsonian has fulfilled its repatriation requirements, (2) how the special committee provisions have been implemented, and (3) the number of human remains and objects that have been repatriated and reasons for any that have not.
GAO reviewed museum records, including 171 repatriation case reports, and interviewed Smithsonian, Repatriation Review Committee, and tribal officials. Since the NMAI Act was enacted in 1989, more than 21 years ago, the Smithsonian has offered to repatriate over 5,000 human remains, which account for approximately one-third of the total estimated human remains in its collections. The Smithsonian has also offered to repatriate over 212,000 funerary objects, but the extent of progress is unknown because the Smithsonian has no reliable estimate of the total number of such objects in its collections. The Smithsonian generally makes repatriation decisions based on detailed case reports and had completed 171 case reports as of December 31, 2010. Developing these case reports is a lengthy and resource-intensive process, in part because the NMAI Act generally requires the Smithsonian to use the best available scientific and historical documentation to identify the origins of its Indian human remains and funerary objects. The Smithsonian originally estimated that the repatriation process would take about 5 years; however, at the pace that it is progressing, GAO believes it could take several more decades to complete this process. In response to the special committee requirements of the NMAI Act, the Smithsonian established a Repatriation Review Committee to monitor and review the Natural History Museum's repatriation activities. Although the Smithsonian believes Congress intended to limit the committee's jurisdiction to the Natural History Museum, the statutory language and its legislative history do not support that view. Since it was established, the committee has provided no oversight of the repatriation activities of the American Indian Museum. In addition, GAO found that neither the Smithsonian nor the committee has provided regular information to Congress on the repatriation progress at the Smithsonian. Although this reporting is not required by the act, given the length of time this process has taken and is expected to take in the future, policymakers do not have information that would keep them apprised of the Smithsonian's repatriation efforts. The committee also hears disputes concerning decisions over the return of human remains and objects, but it does not make binding decisions. Moreover, the Smithsonian has no independent administrative appeals process by which tribes that would like to challenge a repatriation decision can seek recourse, and judicial review of the Smithsonian's repatriation decisions may not be practical. Through December 31, 2010, the Smithsonian estimates that, of the items it has offered for repatriation, about three-quarters of the Indian human remains (4,330 out of 5,980) and about half of the funerary objects (99,550 out of 212,220) have been repatriated. The remaining items have not been repatriated for various reasons, including tribes' lack of resources and cultural beliefs. Resources needed include staff to work on repatriations and appropriate locations to rebury or house the items. In addition, the Smithsonian has not repatriated approximately 340 human remains and 310 funerary objects because it has determined that they cannot be culturally affiliated with a tribe, and it does not have a policy on the disposition of these items. The lack of such a policy limits the transparency of the Smithsonian's actions in handling culturally unidentifiable items for both tribes and policymakers.
GAO suggests that Congress may wish to consider ways to expedite the Smithsonian's repatriation process, and recommends that the Smithsonian take actions to expand the oversight and reporting role of the special committee, establish an administrative appeals process, and develop a policy for the disposition of culturally unidentifiable items. The Smithsonian agreed with GAO's findings and recommendations.
EPA is required by the Clean Air Act to conduct reviews of the National Ambient Air Quality Standards (NAAQS) for the six criteria pollutants, including particulate matter, every 5 years. The overarching purpose of such reviews is to determine whether the current standards are sufficient to protect public health and welfare, with an adequate margin of safety, given the latest scientific information available at the time of the review. Major steps in the NAAQS process include the following:

developing a criteria document that synthesizes new research on health effects;

preparing a staff paper that assesses the policy implications of the scientific information in the criteria document and discusses possible ranges for air quality standards; and

determining whether and how EPA should revise the NAAQS.

If EPA decides to revise the NAAQS, the agency proposes the changes in the Federal Register. As part of the federal rule-making process, EPA is to comply with Executive Order 12866, which directs federal agencies to analyze the costs and benefits of proposed and final rules expected to affect the economy by $100 million or more per year. In September 2003, the Office of Management and Budget (OMB) issued its Circular A-4, which presents guidance and best practices and states that agencies should analyze costs and benefits in accordance with the principles of full disclosure and transparency. Further, in cases such as the particulate matter rule, where expected economic impacts exceed $1 billion annually, Circular A-4 also states that agencies should conduct a comprehensive assessment of key uncertainties in their analyses of costs and benefits, which EPA also refers to as regulatory impact analyses. EPA's January 2006 regulatory impact analysis presents estimates of the costs and benefits for the proposed particulate matter rule. The focus of the National Academies' 2002 report was on how EPA estimates the health benefits of its proposed air regulations. To develop such estimates, EPA conducts analyses to quantify the expected changes in the number of deaths and illnesses that are likely to result from proposed regulations. The regulatory impact analyses also estimate the costs associated with implementing proposed air regulations, although, under the Clean Air Act, EPA is not permitted to consider costs in setting health-based standards for the criteria air pollutants, such as particulate matter. Soon after the National Academies issued its report in 2002, EPA staff identified key recommendations and developed a strategy, in consultation with OMB, to apply some of the recommendations to benefit analyses for air pollution regulations under consideration at the time. EPA roughly estimated the time and resource requirements for responding to the recommendations, identifying those the agency could address within 2 or 3 years and those that would take longer. According to EPA officials, the agency focused primarily on the numerous recommendations related to analyzing uncertainty. Both the National Academies' report and the OMB guidance emphasize the need for agencies to account for uncertainties and to maintain transparency in the course of conducting benefit analyses. Identifying and accounting for uncertainties in these analyses can help decision makers evaluate the likelihood that certain regulatory decisions will achieve the estimated benefits.
Transparency is important because it enables the public and relevant decision makers to see clearly how EPA arrived at its estimates and conclusions. In prior work on regulatory impact analyses, we have found shortcomings in EPA’s analyses of uncertainty and the information the agency provides with its estimates of costs and benefits. EPA applied—either wholly or in part—approximately two-thirds of the Academies’ recommendations to its January 2006 regulatory impact analysis and continues to address the recommendations through ongoing research and development. The January 2006 regulatory impact analysis demonstrated progress toward an expanded analysis of uncertainty and consideration of different assumptions. EPA officials cited time and resource constraints, as well as the need to mitigate complex technical challenges, as the basis for not applying other recommendations. According to EPA officials, the agency did not apply some of the more complex recommendations because it had not achieved sufficient progress in the research and development projects under way. The January 2006 regulatory impact analysis on particulate matter represents a snapshot of an ongoing EPA effort to respond to the National Academies’ recommendations on developing estimates of health benefits for air pollution regulations. Specifically, the agency applied, at least in part, approximately two-thirds of the recommendations—8 were applied and 14 were partially applied—by taking steps toward conducting a more rigorous assessment of uncertainty for proposed air pollution regulations by, for example, evaluating the different assumptions about the link between human exposure to particulate matter and health effects and discussing sources of uncertainty not included in the benefit estimates. According to EPA officials, the agency focused much of its time and resources on the recommendations related to uncertainty. In particular, one overarching recommendation suggests that EPA take steps toward conducting a formal, comprehensive uncertainty analysis—the systematic application of mathematical techniques, such as Monte Carlo simulation— and include the uncertainty analysis in the regulatory impact analysis to provide a “more realistic depiction of the overall uncertainty” in EPA’s estimates of the benefits. A number of the other recommendations regarding uncertainty are aimed at EPA’s developing the information and methodologies needed to carry out a comprehensive uncertainty analysis. Overall, the uncertainty recommendations suggest that EPA should determine (1) which sources of uncertainties have the greatest effect on benefit estimates and (2) the degree to which the uncertainties affect the estimates by specifying a range of estimates and the likelihood of attaining them. In response, EPA devoted significant resources to applying an alternative technique called expert elicitation in a multiphased pilot project. The pilot project was designed to systematically obtain expert advice to begin to better incorporate in its health benefit analysis the uncertainty underlying the causal link between exposure to particulate matter and premature death. EPA used the expert elicitation process to help it more definitively evaluate the uncertainty associated with estimated reductions in premature death—estimates that composed 85 percent to 95 percent of EPA’s total health benefit estimates for air pollution regulations in the past 5 years, according to the agency. 
EPA developed a range of expected reductions in death rates based on expert opinion systematically gathered in its pilot expert elicitation project and provided the results of this supplemental analysis in an appendix to the regulatory impact analysis. However, the National Academies had recommended that EPA merge such supplemental analyses into the main benefit analysis. Moreover, the Academies recommended that EPA’s main benefit analysis reflect how the benefit estimates would vary in light of uncertainties. In addition to the uncertainty underlying the causal link between exposure and premature death that EPA analyzed, other key uncertainties can influence the estimates. For example, there is uncertainty about the effects of the age and health status of people exposed to particulate matter, the varying composition of particulate matter, and the measurements of actual exposure to particulate matter. EPA’s health benefit analysis, however, does not account for these key uncertainties by specifying a range of estimates and the likelihood of attaining them, similar to estimates derived from the expert elicitation addressing causal uncertainty. For these reasons, EPA’s responses reflect a partial application of the Academies’ recommendation. In addition, the Academies recommended that EPA both continue to conduct sensitivity analyses on sources of uncertainty and expand these analyses. In the particulate matter regulatory impact analysis, EPA included a new sensitivity analysis regarding assumptions about thresholds, or levels below which those exposed to particulate matter are not at risk of experiencing harmful effects. EPA has assumed no threshold level exists—that is, any exposure poses potential health risks. Some experts have suggested that different thresholds may exist and the National Academies recommended that EPA determine how changing its assumption—that no threshold exists—would influence the estimates. The sensitivity analysis EPA provided in the regulatory impact analysis examined how its estimates of expected health benefits would change assuming varying thresholds. Another recommendation that EPA is researching and partially applied to the draft regulatory impact analysis concerns alternative assumptions about cessation lags—the time between reductions in exposure to particulate matter and the health response. The National Academies made several recommendations on this topic, including one that EPA incorporate alternative assumptions about lags into a formal uncertainty analysis to estimate benefits that account for the likelihood of different lag durations. In response, EPA has sought advice from its Advisory Council on Clean Air Compliance Analysis on how to address this recommendation and has conducted a series of sensitivity analyses related to cessation lags. EPA is also funding research to explore ways to address lag effects in its uncertainty analysis. According to an EPA official, specifying the probability of different lag effects is computationally complex, and the agency is working to resolve this challenge. In response to another recommendation by the National Academies, EPA identified some of the sources of uncertainty that are not reflected in its benefit estimates. For example, EPA’s regulatory impact analysis disclosed that its benefit estimates do not reflect the uncertainty associated with future year projections of particulate matter emissions. 
EPA presented a qualitative description of emissions uncertainty, elaborating on technical reasons—such as the limited information about the effectiveness of particulate matter control programs—why the analysis likely underestimates future emissions levels. EPA also applied the Academies' recommendation on the presentation of uncertainty, which encouraged the agency to present the results of its health benefit analyses in ways that convey the estimated benefits more realistically by, for example, placing less emphasis on single estimates and rounding the numbers. EPA's regulatory impact analysis presented ranges for some of the benefit estimates. Also, EPA sought to convey the overall uncertainty of its benefit estimates in a qualitative manner by clearly stating that decision makers and the public should not place significant weight on the quantified benefit estimates in the regulatory impact analysis because of data limitations and uncertainties. Another example of EPA's response to the National Academies' recommendations involves exploring the various regulatory choices available to decision makers. The Academies recommended that EPA estimate the health benefits representing the full range of regulatory choices available to decision makers. In the particulate matter analysis, EPA presented the health benefits expected under several regulatory options targeting fine particulate matter. Citing a lack of the data and tools needed to conduct an accurate analysis, EPA did not estimate the benefits expected under the proposed regulatory options for coarse particulate matter but, consistent with the National Academies' recommendation, presented its rationale for not doing so. Overall, we considered this a partial application of the recommendation. (See app. II for more detail on the recommendations that EPA has applied or partially applied to the draft particulate matter regulatory impact analysis.) EPA did not apply the remaining 12 recommendations to the analysis for various reasons. While EPA applied some recommendations—either wholly or in part—that require additional studies, methodologies, or data to its particulate matter analysis, the agency had not made sufficient progress in addressing others and therefore did not apply them to the analysis. EPA officials viewed most of these recommendations as relevant to its health benefit analyses and, citing the need for additional research and development, emphasized the agency's commitment to continue to respond to the recommendations. According to a senior EPA official, insufficient resources impeded the agency's progress in applying the recommendations. This official cited limited availability of skilled staff, time, and other resources to conduct the required analyses and research and development. According to EPA, some of the more complex, long-term recommendations include the following: relying less on simplifying assumptions, such as the assumption that the various components of particulate matter have equal toxicity; conducting a formal assessment of the uncertainty of particulate matter emissions; and assessing the expected reduction of harmful effects of air pollution other than human health problems. For example, EPA is in the process of responding to a recommendation involving the relative toxicity of components of particulate matter, an emerging area of research that has the potential to influence EPA's regulatory decisions in the future.
Specifically, the agency could, hypothetically, refine national air quality standards to address the potentially varying health consequences associated with different components of particulate matter. The National Academies recommended that EPA strengthen its benefit analyses by evaluating a range of alternative assumptions regarding relative toxicity and incorporate these assumptions into sensitivity or uncertainty analyses as more data become available. EPA did not believe the state of scientific knowledge on relative toxicity was sufficiently developed at the time it prepared the draft regulatory impact analysis to include this kind of analysis. However, EPA is sponsoring research on this issue. For example, EPA is supporting long-term research on the relative toxicity of particulate matter components being conducted by EPA's intramural research program, its five Particulate Matter Research Centers, and the Health Effects Institute, an organization funded in part by EPA. In addition, an EPA contractor has begun to investigate methods for conducting a formal analysis that would consider sources of uncertainty, including relative toxicity and lag effects. To date, the contractor has created a model to assess whether and how much these sources of uncertainty may affect benefit estimates in one urban area. The National Academies also recommended that EPA incorporate an assessment of uncertainty into the early stages of its benefit analyses by characterizing the uncertainty of the emissions estimates on which the agency bases its benefit estimates. While the agency is investigating ways to assess or characterize this uncertainty, EPA did not conduct a formal uncertainty analysis of particulate matter emissions for the draft regulatory impact analysis because of data limitations. These limitations stem largely from the source of the emissions data, the National Emissions Inventory, an amalgamation of data from a variety of entities, including state and local air agencies, tribes, and industry. According to EPA, these entities use different methods to collect data, which have different implications for how to characterize the uncertainty. Furthermore, the uncertainty associated with emissions varies by the source of emissions. For example, the analytical methods for evaluating the uncertainty of estimates of emissions from utilities would differ from those for car and truck emissions because the nature of these emissions and the data collection methods differ. In sum, to apply this recommendation, EPA must determine how to characterize the uncertainty of the estimates for each source of emissions before aggregating the uncertainty to a national level and then factoring that aggregation into its benefit estimates. According to EPA officials, the agency needs much more time to resolve the complex technical challenges of such an analysis. EPA officials also noted that the final particulate matter analysis will demonstrate steps toward this recommendation by presenting emissions data according to the level emitted by different kinds of sources, such as utilities, cars, and trucks. Another recommendation that EPA is researching but did not apply to the draft regulatory impact analysis concerns whether the proposed revisions to the particulate matter standards would have important indirect impacts on human health and the environment.
According to an EPA official, the agency could not rule out the possibility that the revisions could have indirect impacts on the environment, such as whether reductions in particulate matter emissions would reduce the amount of particulate matter deposited in water bodies, thereby decreasing water pollution. EPA has considered indirect impacts of air pollution regulations on sensitive water bodies in the past and plans to include a similar analysis in the final particulate matter rule. An agency official further noted that ongoing research about environmental impacts could reveal additional indirect impacts for future analyses. Other recommendations that EPA did not apply to its benefit estimates in the regulatory impact analysis concern issues such as transparency and external review of EPA's benefit estimation process. For example, the National Academies recommended that EPA clearly summarize the key elements of the benefit analysis in an executive summary that includes a table that lists and briefly describes the regulatory options for which EPA estimated the benefits, the assumptions that had a substantial impact on the benefit estimates, and the health benefits evaluated. EPA did not, however, present a summary table as called for by the recommendation or summarize the benefits in the executive summary. As EPA stated in the particulate matter analysis, the agency decided not to present the benefit estimates in the executive summary because they were too uncertain. Specifically, officials said the agency was not able to resolve some significant data limitations before issuing the draft regulatory impact analysis in January 2006—a deadline driven by the need to meet the court-ordered issue date for the final rule in September 2006. According to EPA officials, EPA has since resolved some of these data challenges by, for example, obtaining more robust data on anticipated strategies for reducing emissions, which will affect the estimates of benefits. The officials also said that EPA intends to include in the executive summary of the regulatory impact analysis supporting the final rule a summary table that describes key analytical information. EPA officials also acknowledged other presentation shortcomings, including references to key analytical elements that were insufficiently specific, which officials attributed to tight time frames and the demands of working on other regulatory analyses concurrently. They said they plan to address these shortcomings in the final regulatory impact analysis. Regarding external review, the National Academies recommended that EPA establish an independent review panel, supported by permanent technical staff, to bolster EPA's quality control measures for its regulatory impact analyses, such as the one for particulate matter. The National Academies noted that peer review of EPA's regulatory impact analyses would be advantageous when the agency designs and conducts its economic analysis. EPA has not directly addressed this recommendation. According to the Director of the Office of Policy Analysis and Review in EPA's Office of Air and Radiation, establishing and supporting independent committees is costly, making it important for EPA to take advantage of existing panels rather than set up new ones. Further, an official in the Office of Air and Radiation who oversees the development of regulatory impact analyses said that the cost of reviewing all regulatory impact analyses would be substantial.
In this regard, EPA officials identified peer reviews the agency received from its existing independent committees, such as the Clean Air Scientific Advisory Committee and the Advisory Council on Clean Air Compliance Analysis. For example, to respond to the Academies' recommendations about lag effects, EPA sought independent advice on the assumptions it was developing regarding the time between reduced exposure to particulate matter and reductions in incidences of health effects. Finally, EPA officials noted that although the agency does not have each regulatory impact analysis peer reviewed, it typically does have the methodologies that will be applied to regulatory impact analyses peer reviewed. (See app. III for more detail on these recommendations and others that EPA did not apply to the draft particulate matter regulatory impact analysis.) While EPA has taken a number of steps to respond to the Academies' recommendations on estimating health benefits, continued commitment and dedication of resources will be needed if EPA is to fully implement the improvements endorsed by the National Academies. In particular, the agency will need to ensure that it allocates resources to needed research on emerging issues, such as the relative toxicity of particulate matter components; to assessing which sources of uncertainty have the greatest influence on benefit estimates; and to estimating other benefits, such as environmental improvements. In addition, it is important for EPA to continue to improve its uncertainty analysis in accordance with the Academies' recommendations. The agency's draft regulatory impact analysis illustrates that estimates of health benefits can be highly uncertain. In fact, EPA officials viewed these estimates as so uncertain that they chose not to present them in the executive summary of the regulatory impact analysis. While EPA officials said they expect to reduce the uncertainties associated with the health benefit estimates in the final particulate matter analysis, robust uncertainty analysis will nonetheless be important for decision makers and the public to understand the likelihood of attaining the estimated health benefits. According to EPA officials, the final regulatory impact analysis on particulate matter will reflect further responsiveness to the Academies' recommendations by, for example, providing additional sensitivity analysis and improving the transparency of the regulatory impact analysis by highlighting key data and assumptions in the executive summary. Moreover, these officials emphasized the agency's commitment to further enhancing the transparency of the analysis by presenting clear and accurate references to the supporting technical documents, which detail the analytical assumptions and describe the data supporting the estimates. To the extent that EPA continues to make progress in addressing the Academies' recommendations, decision makers and the public will be able to better evaluate the basis for EPA's air regulations. We provided a draft of this report to EPA for review. EPA provided technical comments that we incorporated as appropriate. Officials from the Office of Policy Analysis and Review within EPA's Office of Air and Radiation noted in their technical comments that the report provides a fair and balanced representation of EPA's efforts to apply the National Academies' recommendations to the draft particulate matter regulatory impact analysis.
However, these officials also cited progress made in applying the National Academies' recommendations through analyses of other air programs and through research and development efforts. We note that this report does identify, as appropriate, EPA's research and development efforts for recommendations EPA did not apply to the draft particulate matter analysis, its plans to apply some additional recommendations to the final particulate matter regulatory impact analysis, and the agency's responses to recommendations in prior rule-making analyses of air programs. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the EPA Administrator and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. We were asked to determine whether and how the Environmental Protection Agency (EPA) applied the National Academies' (Academies) recommendations in its estimates of the health benefits expected from the January 2006 proposed revisions to the particulate matter national ambient air quality standards. In response to this objective, we assessed EPA's response to the Academies' recommendations and present an overview of the agency's completed, ongoing, and planned actions addressing the recommendations. To develop this overview, we reviewed EPA's particulate matter regulatory impact analysis, EPA's economic analysis guidelines, and Office of Management and Budget (OMB) guidance on regulatory impact analysis. We also analyzed documentation addressing current and future agency efforts to address the recommendations, such as project planning memorandums and technical support documents discussing the application of economic techniques. In addition, we met with senior officials from EPA's Office of Air and Radiation, which was responsible for developing the proposed rule and analyzing its economic effects, and with officials from EPA's Office of Policy, Economics, and Innovation to discuss the agency's responses to the recommendations. We interviewed several experts outside EPA, including (1) the Chair and other members of the National Academies' Committee on Estimating the Health-Risk-Reduction Benefits of Proposed Air Pollution Regulations, to clarify the basis for their recommendations, and (2) economists at Resources for the Future, to discuss the technical issues underlying the recommendations on uncertainty analysis. While the 2002 National Academies report is generally applicable to EPA air pollution regulations, our review focused on the application of the recommendations to the proposed revisions to the particulate matter standards, as requested. Our work focused on broadly characterizing EPA's progress toward applying the recommendations; we did not evaluate the effectiveness or quality of the scientific and technical actions the agency has taken to apply them.
To assess whether and how EPA has made progress in responding to the recommendations, we developed the following recommendation classification continuum: applied, partially applied, and not applied. The applied and partially applied categories refer to completed and initiated actions in EPA's health benefit analysis of particulate matter that correspond to components of the National Academies' recommendations. The not applied category includes recommendations that EPA did not apply when conducting the analysis for the January 2006 particulate matter regulatory impact analysis and identifies those for which ongoing research and development efforts were not far enough along to apply to the particulate matter analysis. We performed our work from January 2006 to July 2006 in accordance with generally accepted government auditing standards. Table 1 provides a summary of the National Academies' recommendations that EPA has applied or partially applied to its draft regulatory impact analysis (RIA) for particulate matter (PM). This table also provides GAO's assessment of EPA's progress in applying each recommendation, in terms of the steps EPA has taken thus far to address issues highlighted in the National Academies' report. The final column characterizes EPA's comments regarding each recommendation, including, as pertinent, contextual information, potential impediments to application, and intended next steps. Table 2 provides a summary of the National Academies' recommendations that EPA has not applied to its draft regulatory impact analysis (RIA) for particulate matter (PM). This table provides GAO's assessment of EPA's progress to date regarding recommendations that required additional research and development, were deemed not relevant to the PM National Ambient Air Quality Standards (NAAQS) by the agency, or were not included in the draft PM RIA due to time and resource constraints. The final column characterizes EPA's comments regarding each recommendation, including contextual information, potential impediments to application, justification for not addressing the recommendation, and intended next steps, if applicable. In addition to the contact named above, Christine Fishkin, Assistant Director; Kate Cardamone; Nancy Crothers; Cindy Gilbert; Tim Guinane; Jessica Lemke; and Meaghan K. Marshall made key contributions to this report. Timothy Bober, Marcia Crosse, and Karen Keegan also made important contributions.

A large body of scientific evidence links exposure to particulate matter—a widespread form of air pollution—to serious health problems, including asthma and premature death. Under the Clean Air Act, the Environmental Protection Agency (EPA) periodically reviews the appropriate air quality level at which to set national standards to protect the public against the health effects of particulate matter. EPA proposed revisions to these standards in January 2006 and issued a draft regulatory impact analysis of the revisions' expected costs and benefits. The estimated benefits of air pollution regulations have been controversial in the past. A 2002 National Academies report generally supported EPA's approach but made 34 recommendations to improve how EPA implements its approach. GAO was asked to determine whether and how EPA applied the Academies' recommendations in its estimates of the health benefits expected from the January 2006 proposed revisions to the particulate matter standards.
GAO examined the draft analysis, met with EPA officials, and interviewed members of the National Academies' committee. In providing technical comments on the report, EPA officials said it was fair and balanced and noted the agency's progress in addressing recommendations via research and development and other analyses. EPA has begun to change the way it conducts and presents its analyses of health benefits in response to recommendations from the National Academies. Specifically, EPA applied, at least in part, 22—or about two-thirds—of the Academies' recommendations to its health benefit analysis of proposed revisions to particulate matter standards. For example, in response to some of the recommendations, EPA took steps toward conducting a more rigorous assessment of uncertainty by, for instance, evaluating how benefits could change under different assumptions and discussing sources of uncertainty not included in the benefit estimates. In one case, EPA applied an alternative technique, called expert elicitation, for evaluating uncertainty by systematically gathering expert opinion about the uncertainty underlying the causal link between exposure to particulate matter and premature death. Consistent with the National Academies' recommendation to assess uncertainty by developing ranges of estimates and specifying the likelihood of attaining them, EPA used expert elicitation to develop ranges of reductions in premature death expected from the proposed revisions. EPA officials said that ongoing research and development efforts will allow the agency to gradually achieve more progress in applying the recommendations. We note that robust uncertainty analysis is important because estimates of health benefits can be highly uncertain, as the draft regulatory impact analysis for particulate matter illustrates. EPA viewed the estimates in this analysis as so uncertain that it chose not to present them in the executive summary. For various reasons, EPA has not applied the remaining 12 recommendations to the analysis, such as the recommendation to evaluate the impact of using the simplifying assumption that each component of particulate matter is equally toxic. EPA officials viewed most of these recommendations as relevant to its health benefit analyses and, citing the need for additional research and development, emphasized the agency's commitment to continue to respond to the recommendations. For example, EPA did not believe that the state of scientific knowledge on the relative toxicity of particulate matter components was sufficiently developed to include in the January 2006 regulatory impact analysis, and the agency is currently sponsoring research on this issue. In addition, a senior EPA official said that insufficient resources impeded the agency's progress in applying the recommendations, citing, in particular, the limited availability of skilled staff, time, and other resources to conduct the required analyses and research and development. EPA officials also said that some of the recommendations the agency did not apply to the draft analysis, such as one calling for a summary table describing key analytical information to enhance transparency, will be applied to the analysis supporting the final rule. To the extent that EPA continues to make progress addressing the Academies' recommendations, decision makers and the public will be better able to evaluate the basis for EPA's air regulations.
Medicare's benefit package, largely designed in 1965, provides virtually no coverage for outpatient prescription drugs. In 1996, almost one-third of beneficiaries had employer-sponsored health coverage, as retirees, that included drug benefits. More than 10 percent of beneficiaries received coverage through Medicaid or other public programs. To protect against drug costs, the remainder of Medicare beneficiaries can choose to enroll in a Medicare+Choice plan with drug coverage, if one is available in their area, or purchase a Medigap policy. (As an alternative to traditional Medicare fee-for-service, beneficiaries in Medicare+Choice plans—formerly Medicare risk health maintenance organizations—obtain all their services through a managed care organization, and Medicare makes a monthly capitation payment to the plan on their behalf.) The availability, breadth, and price of such coverage are changing as the costs of expanded prescription drug use drive employers, insurers, and managed care plans to adopt new approaches to control expenditures for this benefit. These approaches, in turn, are reshaping the drug market. Over the past 5 years, prescription drug expenditures have grown substantially, both in total and as a share of all health care outlays. Prescription drug spending grew an average of 12.4 percent per year from 1993 to 1998, compared with a 5 percent average annual growth rate for health care expenditures overall. (See table 1, which shows total prescription drug expenditures in billions of dollars and the annual growth rates for prescription drug and overall health care expenditures.) As a result, prescription drugs account for a larger share of total health care spending—rising from 5.6 percent to 7.9 percent in 1998. Total drug expenditures have been driven up by both greater utilization of drugs and the substitution of higher-priced new drugs for lower-priced existing drugs. Private insurance coverage for prescription drugs has likely contributed to the rise in spending, because insured consumers are shielded from the direct costs of prescription drugs. In the decade between 1988 and 1998, the share of prescription drug expenditures paid by private health insurers rose from almost a third to more than half. (See fig. 1.) The development of new, more expensive drug therapies—including new drugs that replace old drugs and new drugs that treat disease more effectively—also contributed to the growth in drug spending by boosting the volume of drugs used as well as the average price of drugs used. The average number of new drugs entering the market each year rose from 24 at the beginning of the 1990s to 33 now. Similarly, biotechnology advances and a growing knowledge of the human immune system are significantly shaping the discovery, design, and production of drugs. Advertising pitched to consumers has also likely increased the use of prescription drugs. A recent study found that the 10 drugs most heavily advertised directly to consumers in 1998 accounted for about 22 percent of the total increase in drug spending between 1993 and 1998. Between March 1998 and March 1999, industry spending on advertising grew 16 percent, to $1.5 billion. All of these factors suggest the need for effective cost control mechanisms to be in place under any option to increase access to prescription drugs.
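As a rough consistency check, the growth rates cited at the start of this discussion imply the reported shift in spending shares. A minimal sketch of the arithmetic (the rates and the 1993 share come from the figures above; the code itself is only illustrative):

```python
# Drug spending grew about 12.4 percent per year from 1993 to 1998, versus
# about 5 percent for health care spending overall; compounding that
# differential for 5 years raises a 5.6 percent share to about 7.9 percent.
drug_growth, total_growth = 0.124, 0.050
share_1993 = 0.056

share_1998 = share_1993 * ((1 + drug_growth) / (1 + total_growth)) ** 5
print(f"implied 1998 share: {share_1998:.1%}")  # -> implied 1998 share: 7.9%
```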
Medicare beneficiaries spent $2,000 or more. A recent report had projected that by 1999 an estimated 20 percent of Medicare beneficiaries would have total drug costs of $1,500 or more—a substantial sum for people lacking some form of insurance to subsidize their purchases or for those facing coverage limits. In 1996, almost a third of Medicare beneficiaries lacked drug coverage altogether. (See fig. 2.) The remaining two-thirds had at least some drug coverage—most commonly through employer-sponsored health plans. The proportion of beneficiaries who had drug coverage rose between 1995 and 1996, owing to increases in those with Medicare HMOs, individually purchased supplemental coverage, and employer-sponsored coverage. However, recent evidence indicates that this trend of expanding drug coverage is unlikely to continue. Medicare+Choice plans have found drug coverage to be an attractive benefit that beneficiaries seek out when choosing to enroll in managed care organizations. However, owing to rising drug expenditures and their effect on plan costs, the drug benefits the plans offer are becoming less generous. Many plans restructured drug benefits in 2000, increasing enrollees' out-of-pocket costs and limiting their total drug coverage. Beneficiaries may purchase Medigap policies that provide drug coverage, although this coverage tends to be expensive, involves significant cost sharing, and includes annual limits. Standard Medigap drug policies include a $250 deductible, a 50 percent coinsurance requirement, and a $1,250 or $3,000 annual limit. Furthermore, Medigap premiums have been increasing in recent years. In 1999, the annual premium for one type of Medigap policy with a $1,250 annual limit on drug coverage ranged from approximately $1,000 to $6,000. All beneficiaries who have full Medicaid benefits receive drug coverage that is subject to few limits and low cost-sharing requirements. For beneficiaries whose incomes are slightly higher than Medicaid standards, 14 states currently offer pharmacy assistance programs, which provided drug coverage to approximately 750,000 beneficiaries in 1997. The three largest state programs accounted for 77 percent of all state pharmacy assistance program beneficiaries. Most state pharmacy assistance programs, like Medicaid, have few coverage limitations. The burden of prescription drug costs falls most heavily on the Medicare beneficiaries who lack drug coverage or who have substantial health care needs. Drug coverage is less prevalent among beneficiaries with lower incomes. In 1995, 38 percent of beneficiaries with incomes below $20,000 were without drug coverage, compared with 30 percent of beneficiaries with higher incomes. Additionally, the 1995 data show that drug coverage is slightly higher among those with poorer self-reported health status. At the same time, however, beneficiaries without drug coverage and in poor health had drug expenditures that were $400 lower than the expenditures of beneficiaries with drug coverage and in poor health. This might indicate access problems for this segment of the population. Cost sharing is a common feature of employer-sponsored benefits, Medigap policies, and, most recently, Medicare+Choice plans. Although reasonable cost sharing serves to make the consumer a more prudent purchaser, copayments, deductibles, and annual coverage limits can reduce the value of drug coverage to the beneficiary. Harder to measure is the effect on beneficiaries of drug benefit restrictions brought about through formularies designed to limit or influence the choice of drugs.
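To show how these cost-sharing features limit the value of Medigap drug coverage, here is a minimal sketch of the standard benefit described above (a $250 deductible, 50 percent coinsurance, and a $1,250 or $3,000 annual benefit limit); the function and the sample spending level are illustrative only.

```python
def medigap_drug_payment(total_drug_costs, annual_limit=1_250,
                         deductible=250, coinsurance=0.50):
    """Plan payment under a standard Medigap drug benefit (illustrative)."""
    covered = max(0.0, total_drug_costs - deductible)
    return min(coinsurance * covered, annual_limit)

# A beneficiary with $2,000 in annual drug costs collects $875 from the
# plan and pays the remaining $1,125 out of pocket, before counting the
# policy premium itself.
costs = 2_000
plan_pays = medigap_drug_payment(costs)
print(plan_pays, costs - plan_pays)  # -> 875.0 1125.0
```

With annual premiums for such policies running from roughly $1,000 to $6,000, as noted above, a beneficiary's total outlay can easily exceed what the plan pays.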
During this period of rising prescription drug expenditures, third-party payers have pursued various approaches to control spending. These efforts have initiated a transformation of the pharmaceutical market. Whereas insured individuals formerly purchased drugs at retail prices at pharmacies and then sought reimbursement, third-party payers now influence which drug is purchased, how much is paid for it, and where it is purchased. A common technique to manage pharmacy care and control costs is to use a formulary. A formulary is a list of prescription drugs, grouped by therapeutic class, that a health plan or insurer prefers and may encourage doctors to prescribe. Decisions about which drugs to include in a formulary are based on the drugs’ medical value and price. The inclusion of a drug in a formulary and its cost can affect how frequently it is prescribed and purchased and, therefore, can affect its market share. Formularies can be open, incentive-based, or closed. Open formularies are often referred to as “voluntary” because enrollees are not penalized if their physicians prescribe nonformulary drugs. Incentive-based formularies generally offer enrollees lower copayments for the preferred formulary or generic drugs. Incentive-based, or managed, formularies are becoming more popular because they combine flexibility with stronger cost-control features than open formularies offer. A closed formulary limits insurance coverage to the formulary drugs and requires enrollees to pay the full cost of nonformulary drugs prescribed by their physicians. Another way in which the market has been transformed is through the use of pharmacy benefit managers (PBM) by health plans and insurers to administer and manage prescription drug benefits. PBMs offer a range of services, including prescription claims processing, mail-service pharmacy, formulary development and management, pharmacy network development, generic substitution incentives, and drug utilization review. PBMs also negotiate discounts and rebates on prescription drugs with manufacturers. Expanding access to more affordable prescription drugs could involve either subsidizing prescription drug coverage or allowing beneficiaries access to discounted pharmaceutical prices. The design of a drug coverage option (that is, the scope of the benefit, the covered population, and the mechanisms used to contain costs), as well as its implementation, will determine the effect of the option on beneficiaries, Medicare or federal spending, and the pharmaceutical market. A new benefit would need to be crafted to balance competing concerns about the sustainability of Medicare, federal obligations, and the hardship faced by some beneficiaries. Similarly, the effect of granting some beneficiaries access to discounted prices will hinge on details such as the price of the drugs after the discount, how discounts are determined and secured, and which beneficiaries are eligible. The relative merits of any approach should be carefully assessed. We suggest that the following five criteria be considered in evaluating any option. (1) Affordability: an option should be evaluated in terms of its effect on public outlays for the long term. (2) Equity: an option should provide equitable access across groups of beneficiaries and be fair to affected providers. (3) Adequacy: an option should provide appropriate beneficiary incentives for prudent utilization, support standard treatment options for beneficiaries, and not impede effective and clinically meaningful innovations. 
(4) Feasibility: an option should incorporate such administrative essentials as implementation and cost and quality monitoring techniques. (5) Acceptance: an option should account for the need to educate the beneficiary and provider communities about its costs and the realities of trade-offs required by significant policy changes. One proposal before the Congress would couple a new benefit with protection against catastrophic drug costs, which is yet to be designed. Under the Breaux-Frist approach, competing health plans could design their own copayment structure, with requirements on the benefit’s actuarial value but no provision to limit beneficiary catastrophic drug costs. Benefit cost-control provisions for the traditional Medicare program may present some of the thorniest drug benefit design decisions. Recent experience provides two general approaches. One would involve the Medicare program obtaining price discounts from manufacturers. Such an arrangement could be modeled after Medicaid’s drug rebate program. While the discounts in aggregate would likely be substantial, this approach lacks the flexibility to achieve the greatest control over spending. It could not effectively influence or steer utilization because it does not include incentives that would encourage beneficiaries to make cost-conscious decisions. The second approach would draw from private sector experience in negotiating price discounts from manufacturers in exchange for shifting market share. Some plans and insurers employ PBMs to manage their drug benefits, including claims processing, negotiating with manufacturers, establishing lists of drug products that are preferred because of efficacy or price, and developing beneficiary incentive approaches to control spending and use. Applying these techniques to the entire Medicare program, however, would be difficult because of its size, the need for transparency in its actions, and the imperative for equity for its beneficiaries. Medicaid is the largest government payer for prescription drugs; its drug expenditures account for about 17 percent of the domestic pharmaceutical market. Before the enactment of the Medicaid drug rebate program under the Omnibus Budget Reconciliation Act of 1990 (OBRA), state Medicaid programs paid close to retail prices for outpatient drugs. Other large purchasers, such as HMOs and hospitals, negotiated discounts with manufacturers and paid considerably less. The rebate program required drug manufacturers to rebate to state Medicaid programs a percentage off the average price wholesalers pay manufacturers. The rebates were based on a percentage reduction reflecting the lowest or “best” prices the manufacturer charged other purchasers and on the volume of purchases by Medicaid recipients. In return for the rebates, state Medicaid programs must cover all drugs manufactured by pharmaceutical companies that entered into rebate agreements with HCFA. After the rebate program’s enactment, a number of market changes affected other purchasers of prescription drugs and the amount of the rebates that Medicaid programs received. Drug manufacturers substantially reduced the price discounts they offered to many large private purchasers, such as HMOs. The market, in effect, quickly adjusted by increasing drug prices to compensate for rebates obtained by the Medicaid program. Although the states have received billions of dollars in rebates from drug manufacturers since OBRA’s enactment, state Medicaid directors have expressed concerns about the rebate program. 
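The basic rebate arithmetic just described can be sketched as follows. This is an illustrative reading of OBRA's "greater of" structure; it assumes a flat minimum rebate share of roughly 15.1 percent of the average manufacturer price, the basic percentage in effect in the late 1990s, and the exact parameters have varied over time:

```python
def basic_medicaid_rebate(amp, best_price, min_rebate_share=0.151):
    """Per-unit basic rebate: the greater of a flat share of the average
    manufacturer price (AMP, what wholesalers pay manufacturers) or the gap
    between AMP and the 'best' price charged to any other purchaser."""
    return max(min_rebate_share * amp, amp - best_price)

# A deep discount to an HMO triggers the best-price prong:
print(f"{basic_medicaid_rebate(amp=100.0, best_price=70.0):.2f}")  # 30.00 per unit
# With little discounting elsewhere, the flat share binds instead:
print(f"{basic_medicaid_rebate(amp=100.0, best_price=95.0):.2f}")  # 15.10 per unit
```

This structure helps explain the market response described above: any discount to a private purchaser deeper than the flat percentage raises the rebate owed on every unit Medicaid buys, so manufacturers had an incentive to scale back the discounts they offered HMOs and other large purchasers.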
The principal concern involves OBRA’s requirement to provide access to all the drugs of every manufacturer that offers rebates, which limits the utilization controls Medicaid programs can use at a time when prescription drug expenditures are rapidly increasing. Although the programs can require recipients to obtain prior authorization for particular drugs and can impose monthly limits on the number of covered prescriptions, they cannot take advantage of other techniques, such as incentive-based formularies, to steer recipients to less expensive drugs. The few cost-control strategies that are available can also add to the administrative burden on state Medicaid programs. Other payers, such as private and federal employer health plans and Medicare+Choice plans, have taken a different approach to managing their prescription drug benefits. They typically use beneficiary copayments to control prescription drug use, and they use formularies to both control use and obtain better prices by concentrating purchases on selected drugs. In many cases, these plans and insurers retain a PBM’s services to manage their pharmacy benefit and control spending. Beneficiary cost-sharing plays a central role in attempting to influence drug utilization. Copayments are frequently structured to influence both the choice of drugs and the purchasing arrangements. While formulary restrictions can channel purchases to preferred drugs, closed formularies, which provide reimbursement only for preferred drugs, have generated substantial dissatisfaction among consumers. As a result, many plans link their cost-sharing requirements and formulary lists. The fastest growing trend today is the use of a formulary that covers all drugs but that includes beneficiary cost-sharing that varies for different drugs—typically a smaller copayment for generic drugs, a larger one for preferred drugs, and an even larger one for all other drugs. Reduced copayments have also been used to encourage enrollees using maintenance drugs for chronic conditions to obtain them from particular suppliers, like a mail-order pharmacy. Plans and insurers have turned to PBMs for assistance in establishing formularies, negotiating prices with manufacturers and pharmacies, processing beneficiaries’ claims, and reviewing drug utilization. Because PBMs manage drug benefits for multiple purchasers, they often have more leverage than individual plans in negotiating prices because of their greater purchasing power. Traditional fee-for-service Medicare has generally established reimbursement rates for services like those provided by physicians and hospitals and then processed and paid claims with few utilization controls. Adopting some of the techniques used by private plans and insurers might help better control costs. However, adapting those techniques to the characteristics and size of the Medicare program raises questions. Negotiated or competitively determined prices would be superior to administered prices only if Medicare could employ some of the utilization controls that come from having a formulary and differential beneficiary cost-sharing. In this manner, Medicare would be able to negotiate significantly discounted prices by promising to deliver a larger market share for a manufacturer’s product. Manufacturers would have no incentive to offer a deep discount if all drugs in a therapeutic class were covered on the same terms. 
Without a promised share of the Medicare market, these manufacturers might reap greater returns from charging higher prices and concentrating marketing efforts on physicians and consumers to influence prescribing patterns. Implementing a formulary and other utilization controls could prove difficult for Medicare. Developing a formulary involves determining which drugs are therapeutically equivalent so that several from each class can be included. Plans and PBMs currently make those determinations privately—something that would not be possible for Medicare, which must have transparent policies that are determined openly. Given the stakes involved in selecting drugs, one can imagine the intensive efforts to offer input to and scrutinize the selection process. If several PBMs operated in each area, beneficiaries could choose one to administer their drug benefit. This raises questions about how to inform beneficiaries of the differences in each PBM’s policies and whether and how to risk-adjust payments to PBMs for differences in the health status of the beneficiaries using them. Another option before the Congress would allow Medicare beneficiaries to purchase prescription drugs at the lowest price paid by the federal government. Because of their large purchasing power, federal agencies, such as the Departments of Veterans Affairs (VA) and Defense (DOD), have access to prescription drug prices that often are considerably lower than retail prices. Extending these discounts to Medicare beneficiaries, or some groups of beneficiaries, could have a measurable effect on lowering their out-of-pocket spending, although whether this would adequately increase access or raise prices paid by other purchasers that negotiate drug discounts is unknown. Typically, federal agencies obtain prescription drugs at prices listed in the federal supply schedule (FSS) for pharmaceuticals. FSS prices represent a significant discount off the prices drug manufacturers charge wholesalers. Under the Veterans Health Care Act of 1992, drug manufacturers must make their brand-name drugs available to federal agencies at the FSS price in order to participate in the Medicaid program. The act requires that the FSS price for VA, DOD, the Public Health Service, and the Coast Guard be at least 24 percent below the price that the manufacturers charge wholesalers. In addition, VA awards contracts on a competitive basis for specific drugs considered therapeutically interchangeable. These contracts enable VA to obtain larger discounts from manufacturers by channeling greater volume to certain pharmaceutical products. Providing Medicare beneficiaries access to the lowest federal prices could result in important out-of-pocket savings to those without coverage who are paying close to retail prices. However, concerns exist that extending federal discounts to Medicare beneficiaries could lead to price increases for federal agencies and other purchasers, since the discount is based on prices determined by manufacturers. Federal efforts to lower Medicaid drug prices demonstrate the potential for this to occur. While it is not possible to predict how federal drug prices would change if Medicare beneficiaries were given access to them, the larger the market that seeks to take advantage of these prices, the greater the economic incentive would be for drug manufacturers to raise federal prices to limit the impact of giving lower prices to more purchasers. The current Medicare program, without improvements, is ill suited to serve future generations of seniors and eligible disabled Americans. 
On the one hand, the program is fiscally unsustainable in its present form, as the disparity between program expenditures and program revenues is expected to widen dramatically in the coming years. On the other hand, Medicare’s benefit package contains gaps in desired coverage, most notably the lack of outpatient prescription drug coverage, compared with private employer coverage. Any option to modernize the benefits runs the risk of exacerbating the program’s fiscal imbalance. That is why we believe that expansions should be made in the context of overall program reforms that are designed to make the program more sustainable over the long term. Any discussions about expanding beneficiary access to prescription drugs should carefully consider targeting financial help to those most in need and minimizing the substitution of public funds for private funds. Employers that offer drug coverage through a retiree health plan may choose to adapt their health coverage if a Medicare drug benefit is available. A key characteristic of America’s voluntary, employer-based system of health insurance is an employer’s freedom to modify the conditions of coverage or to terminate benefits. The Medicare Hospital Insurance (HI) trust fund is essentially an accounting device. It allows the government to track the extent to which earmarked payroll taxes cover Medicare’s HI outlays. Serving this tracking purpose, the Trustees’ 1999 annual report showed that Medicare’s HI component has been, on a cash basis, in the red since 1992; in fiscal year 1998, earmarked payroll taxes covered only 89 percent of HI spending. The Trustees’ report, issued in March 1999, projected continued cash deficits for the HI trust fund. (See fig. 3.) When the program has a cash deficit, as it did from 1992 through 1998, Medicare is a net claimant on the Treasury—a threshold that Social Security is not currently expected to reach until 2014. To finance these cash deficits, Medicare drew on its special issue Treasury securities acquired during the years when the program generated a cash surplus. In essence, for Medicare to “redeem” its securities, the government must raise taxes, cut spending for other programs, or reduce the projected surplus. Outlays for Medicare services covered under Supplementary Medical Insurance (SMI)—physician and outpatient hospital services, diagnostic tests, and certain other medical services and supplies—are already funded largely through general revenues. Although the Office of Management and Budget (OMB) has recently reported a $12 billion cash surplus for the HI program in fiscal year 1999 due to lower than expected program outlays, the long-term financial outlook for Medicare is expected to deteriorate. Medicare’s rolls are expanding and are projected to increase rapidly with the retirement of the baby boomers. Today’s elderly make up about 13 percent of the total population; by 2030, they will comprise 20 percent as the baby boom generation ages and the ratio of workers to retirees declines from 3.4 to 1 today to roughly 2 to 1. Without meaningful reform, the long-term financial outlook for Medicare is bleak. Together, Medicare’s HI and SMI expenditures are expected to increase dramatically, rising from about 12 percent of all federal revenues in 1999 to about a quarter by mid-century. Over the same time frame, Medicare’s expenditures are expected to double as a share of the economy, from 2.5 to 5.3 percent, as shown in figure 4. 
The progressive absorption of a greater share of the nation’s resources by health care, like that by Social Security, is in part a reflection of the rising share of the population that is elderly, but Medicare growth rates also reflect the escalation of health care costs at rates well exceeding general rates of inflation. Increases in the number and quality of health care services have been fueled by the explosive growth of medical technology. Moreover, the actual costs of health care consumption are not transparent. Third-party payers generally insulate consumers from the cost of health care decisions. In traditional Medicare, for example, the impact of the cost-sharing provisions designed to curb the use of services is muted because about 80 percent of beneficiaries have some form of supplemental health care coverage (such as Medigap insurance) that pays these costs. For these reasons, among others, Medicare represents a much greater and more complex fiscal challenge than even Social Security over the longer term. When viewed from the perspective of the entire budget and the economy, the growth in Medicare spending will become progressively unsustainable over the longer term. Our updated budget simulations show that to move into the future without making changes in the Social Security, Medicare, and Medicaid programs is to envision a very different role for the federal government. Assuming, for example, that the Congress and the President adhere to the often-stated goal of saving the Social Security surpluses, our long-term model shows a world by 2030 in which Social Security, Medicare, and Medicaid increasingly absorb available revenues within the federal budget. Under this scenario, these programs would absorb more than three-quarters of total federal revenue. (See fig. 5.) Budgetary flexibility would be drastically constrained and little room would be left for programs for national defense, the young, infrastructure, and law enforcement. [Note to fig. 5: The “eliminate non-Social Security surpluses” simulation can only be run through 2066 due to the elimination of the capital stock. Revenue as a share of GDP during the simulation period is lower than the 1999 level due to unspecified permanent policy actions that reduce revenue and increase spending to eliminate the non-Social Security surpluses. Medicare expenditure projections follow the Trustees’ 1999 intermediate assumptions. The projections reflect the current benefit and financing structure.] Assuming no other changes, these programs would constitute an unimaginable drain on the earnings of our future workers. This analysis, moreover, does not incorporate the financing challenges associated with the SMI and Medicaid programs. Early action to address the structural imbalances in Medicare is critical. First, ample time is required to phase in the reforms needed to put this program on a more sustainable footing before the baby boomers retire. Second, timely action to bring costs down pays large fiscal dividends for the program and the budget. The high projected growth of Medicare in the coming years means that the earlier reform begins, the greater the savings will be as a result of the effects of compounding. The actions necessary to bring about a more sustainable program will no doubt call for some hard choices. 
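The compounding point can be made concrete with a stylized calculation. The growth rates and horizon below are illustrative assumptions, not GAO or Trustees' projections; the point is only that trimming the growth rate early compounds into much larger savings than the same trim applied a decade later:

```python
# Illustrative only: cumulative program spending when annual cost growth is
# trimmed by one percentage point starting now versus a decade from now.
BASE_SPENDING = 100.0       # index value in year 0, not dollars
UNREFORMED_GROWTH = 0.07    # assumed growth rate without reform
TRIMMED_GROWTH = 0.06       # assumed growth rate after reform takes hold
YEARS = 30

def cumulative_spending(reform_start_year):
    total, level = 0.0, BASE_SPENDING
    for year in range(1, YEARS + 1):
        rate = TRIMMED_GROWTH if year > reform_start_year else UNREFORMED_GROWTH
        level *= 1 + rate
        total += level
    return total

early = cumulative_spending(0)    # reform begins immediately
late = cumulative_spending(10)    # identical reform, begun 10 years later
print(f"Early reform spends {100 * (1 - early / late):.1f}% less over "
      f"{YEARS} years than the same reform delayed a decade.")
```

Under these assumptions the early path costs roughly 8 percent less over 30 years than the delayed path, even though the two differ only in when the identical reform begins, which is the sense in which early action "pays large fiscal dividends."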
Some suggest that the size of the imbalances between Medicare’s outlays and payroll tax revenues for the HI program may well justify the need for additional resources. One possible source could be general revenues. Although this may eventually prove necessary, such additional financing should be considered as part of a broader initiative to ensure the program’s long-range financial integrity and sustainability. What concerns us most is that devoting general funds to the HI trust fund may be used to extend HI’s solvency without addressing the hard choices needed to make the whole Medicare program more sustainable in economic or budgetary terms. Increasing the HI trust fund balance alone, without underlying program reform, does nothing to make the Medicare program more sustainable—that is, it does not reduce the program’s projected share of GDP or the federal budget. From a macroeconomic perspective, the critical question is not how much a trust fund has in assets but whether the government as a whole has the economic capacity to finance all Medicare’s promised benefits—both now and in the future. We must keep in mind the unprecedented challenge facing future generations in our aging society. Relieving them of some of the financial burden of today’s commitments would help preserve some budgetary flexibility for future generations to make their own choices. General revenues already finance most of the SMI portion of Medicare, which is projected to grow even faster than HI in coming decades, assuming no additional SMI benefits. The issue of the extent to which general funds are an appropriate financing mechanism for the Medicare program would remain important under financing arrangements that differed from those in place in the current HI and SMI structures. For example, under approaches that would combine the two trust funds, a continued need would exist for measures of program sustainability that would signal potential future fiscal imbalance. Such measures might include the percentage of program funding provided by general revenues, the percentage of total federal revenues or gross domestic product devoted to Medicare, or program spending per enrollee. As such measures were developed, questions would need to be asked about the appropriate level of general revenue funding. Regardless of the measure chosen, the real question would be what actions should be taken when and if a chosen cap were reached. Beyond reforming the Medicare program itself, maintaining an overall sustainable fiscal policy and strong economy is vital to enhancing our nation’s future capacity to afford paying benefits in the face of an aging society. Decisions on how we use today’s surpluses can have wide-ranging impacts on our ability to afford tomorrow’s commitments. As we know, there have been a variety of proposals to use the surpluses for purposes other than debt reduction. Although these proposals have various pros and cons, we need to be mindful of the risk associated with using projected surpluses to finance permanent future claims on the budget, whether they are on the spending or the tax side. Commitments often prove to be permanent, while projected surpluses can be fleeting. For instance, current projections assume full compliance with tight discretionary spending caps. Moreover, relatively small changes in economic assumptions can lead to very large changes in the fiscal outlook, especially when carried out over a decade. 
In its January 2000 report, CBO compared the actual deficits or surpluses for 1986 through 1999 with the first projection it had produced 5 years before the start of each fiscal year. Excluding the estimated impact of legislation, CBO stated that its errors in projecting the federal surplus or deficit averaged about 2.4 percent of GDP in the fifth year beyond the current year. For example, such a shift in 2005 would mean a potential swing of about $285 billion in the projected surplus for that year. Although most would not argue for devoting 100 percent of the surplus to debt reduction over the next 10 years, saving a good portion of our surpluses would yield fiscal and economic dividends as the nation faces the challenges of financing an aging society. Our work on the long-term budget outlook illustrates the benefits of maintaining surpluses for debt reduction. Reducing the publicly held debt reduces interest costs, freeing up budgetary resources for other programmatic priorities. For the economy, running surpluses and reducing debt increase national saving and free up resources for private investment. These results, in turn, lead to stronger economic growth and higher incomes over the long term. Over the last several years, our simulations have illustrated the long-term economic consequences flowing from different fiscal policy paths. Our models consistently show that saving all or a major share of projected budget surpluses ultimately leads to demonstrable gains in GDP per capita. Over a 50-year period, GDP per capita is estimated to more than double from present levels by saving all or most of projected surpluses, while incomes would eventually fall if we failed to sustain any of the surplus. Although rising productivity and living standards are always important, they are especially critical for the 21st century, for they will increase the economic capacity of the projected smaller workforce to finance future government programs along with the obligations and commitments for the baby boomers’ retirement. Updating the Medicare benefit package may be a necessary part of any realistic reform program to address the legitimate expectations of an aging society for health care, both now and in the future. Expanding access to prescription drugs could ease the significant financial burden some Medicare beneficiaries face because of outpatient drug costs. Such changes, however, need to be considered as part of a broader initiative to address Medicare’s current fiscal imbalance and promote the program’s longer-term sustainability. Balancing these competing concerns may require the best from government-run programs and private sector efforts to modernize Medicare for the future. Further, the Congress should consider adequate fiscal incentives to control costs and a targeting strategy in connection with any proposal to provide new benefits such as prescription drugs. Given these expectations and the projected future growth of the program, some additional revenue sources may in fact be a necessary component of Medicare reform. However, it is essential that we not take our eye off the ball. The most critical issue facing Medicare is the need to ensure the program’s long-range financial integrity and sustainability. The 1999 annual reports of the Medicare Trustees project that program costs will continue to grow faster than the rest of the economy. 
Care must be taken to ensure that any potential expansion of the program be balanced with other programmatic reforms so that we do not worsen Medicare’s existing financial imbalances. Current budget surpluses represent both an opportunity and an obligation. We have an opportunity to use our unprecedented economic wealth and fiscal good fortune to address today’s needs but an obligation to do so in a way that improves the prospects for future generations. This generation has a stewardship responsibility to future generations to reduce the debt burden they will inherit, to provide a strong foundation for future economic growth, and to ensure that future commitments are both adequate and affordable. Prudence requires making the tough choices today while the economy is healthy and the workforce is relatively large. National saving pays future dividends over the long term, but only if meaningful reform begins soon. Entitlement reform is best done with considerable lead time to phase in changes and before the changes that are needed become dramatic and disruptive. The prudent use of the nation’s current and projected budget surpluses combined with meaningful Medicare and Social Security program reforms can help achieve both of these goals. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other Subcommittee Members may have. For future contacts regarding this testimony, please call Paul L. Posner, Director, Budget Issues, at (202) 512-9573 or William J. Scanlon, Director, Health Financing and Public Health Issues, at (202) 512-7114. Other individuals who made key contributions include Linda F. Baker, Laura A. Dummit, John C. Hansen, Tricia A. Spellman, and James R. McTigue. (201033/935352)
Several actions—both by the Service and the Congress—led us to remove the Service’s transformation efforts and long-term outlook from our high-risk list. In 2001, we made this designation because the Service’s financial outlook had deteriorated significantly. The Service had a projected deficit of $2 billion to $3 billion, severe cash flow pressures, debt approaching the statutory borrowing limit, cost growth outpacing revenue increases, and limited productivity gains. Other challenges the Service faced included liabilities that exceeded assets by $3 billion at the end of fiscal year 2002; major liabilities and obligations estimated at close to $100 billion; a restructuring of the workforce due to impending retirements and operational changes; and long-standing labor-management relations problems. We raised concerns that the Service had no comprehensive plan to address its financial, operational, or human capital challenges, including its plans for reducing debt, and that it did not have adequate financial reporting and transparency that would allow the public to understand changes in its financial situation. Thus, we recommended that the Service develop a comprehensive plan, in conjunction with other stakeholders, that would identify the actions needed to address its challenges, and that it provide publicly available quarterly financial reports with sufficient information to understand the Service’s current and projected financial condition. As the Service’s financial difficulties continued in 2002, we concluded that the need for a comprehensive transformation of the Service was more urgent than ever and called for Congress to act on comprehensive postal reform legislation. The Service’s basic business model, which assumed that rising mail volume would cover rising costs and mitigate rate increases, was outmoded as First-Class Mail volumes stagnated or deteriorated in an increasingly competitive environment. Since 2001, the Service’s financial condition has improved, and it has reported positive net income in each of the last 4 years (see fig. 1). The Service has made significant progress in addressing some of the challenges that led to its high-risk designation. For example, the Service’s management developed a Transformation Plan and has demonstrated a commitment to implementing this plan. Since our designation in 2001, the Service has:

Reduced workhours and improved productivity: The Service has reported productivity gains in each year. According to the Service, its productivity increased by a cumulative 8.3 percent over that period, which generated $5.4 billion in cost savings. The Service reported eliminating over 170 million workhours over this period, with a 4.5 million workhour reduction in fiscal year 2006.

Downsized its workforce: The Service has made progress in addressing some of the human capital challenges associated with its vast workforce by managing retirements, downsizing, and expanding the use of automation. At the end of fiscal year 2006, the Service reported that it had 696,138 career employees, the lowest count since fiscal year 1993. Attrition and automation have allowed the Service to downsize its workforce by more than 95,000, or about 10 percent, since fiscal year 2001. 
Enhanced the reporting of its financial condition and outlook: The Service responded to recommendations we made regarding the lack of sufficient and timely periodic information on its financial condition and outlook that is publicly available between publications of its audited year-end financial statements by enhancing its financial reporting and providing regular updates to the financial statements on its Web site. The Service instituted quarterly financial reports, expanded the discussion of financial matters in its annual report, and upgraded its Web site to include these and other reports in readily accessible file formats.

The 2003 pension act provided another key reason why we removed the high-risk designation. Much of the Service’s recent financial improvement was due to the change made by this law, which reduced the Service’s annual pension expenses. Between fiscal years 2003 and 2005, the Service had a total of $9 billion in decreased pension expenses when compared to the annual expenses that would have been paid without the statutory change. This change enabled the Service to significantly cut its costs, achieve record net incomes, repay over $11 billion of outstanding debt, and delay rate increases until January 2006. The Service’s improved financial performance and condition during this time were also aided by increased revenue generated from growing volumes of Standard Mail (primarily advertising) and rate increases in June 2002 and January 2006. Standard Mail volumes grew by almost 14 percent from fiscal year 2001 to 2006, and Standard Mail revenues, when adjusted for inflation, increased by over 11 percent during the same time period. In June 2002, the Service implemented a rate increase (the price of a First-Class stamp increased from 34 cents to 37 cents) to offset rising costs. In January 2006, the Service implemented another rate increase (the price of a First-Class stamp increased from 37 cents to 39 cents) to generate the additional revenue needed to set aside $3.0 billion in an escrow account in fiscal year 2006, as required by the 2003 pension law. Revenues in fiscal year 2006 increased by about 4 percent from the previous year, due largely to the January 2006 rate increase. The passage of the recent postal reform legislation was another reason why we removed this high-risk designation. Although noticeable improvements were being made to the Service’s financial, operational, and human capital challenges, we had continued to advocate the need for comprehensive postal reform legislation. After years of thorough discussion, Congress passed a comprehensive postal reform law in late December 2006 that provides tools and mechanisms that can be used to establish an efficient, flexible, fair, transparent, and financially sound Postal Service. Later in this statement, I will discuss how some specific tools and mechanisms can be used to address the continuing challenges facing the Service. The Service’s financial condition for fiscal year 2007 has been affected by the reform act, which, along with the May change in postal rates, will continue to affect its near- and long-term financial outlook. The Service will benefit financially from an increase in postal rates in May averaging 7.6 percent. Key steps in the rate process are provided in appendix I. The Service is estimating that it will gain an additional $2.2 billion in net income in fiscal year 2007 as a result of the new rates. 
The recent rate case, in addition to generating additional revenues, took significant strides in aligning postal rates with the respective mail handling costs. Some rate increases are particularly large—for example, some catalog rates may increase by 20 to 40 percent. The new rate structure is aimed at providing the necessary incentives to encourage efficient mailing practices (e.g., shape, weight, handling, preparation, and transportation) and thereby encourage smaller rate increases and steady mail volumes in the longer run. At the beginning of fiscal year 2007 (before the enactment of the reform law), the Service expected to earn $1.7 billion in net income, which reflected the additional revenue the Service estimated it would receive from the May increase in postal rates. The Service, however, planned to increase its outstanding debt of $2.1 billion at the end of fiscal year 2006 by an additional $1.2 billion in fiscal year 2007 in order to help fund the expected $3.3 billion escrow requirement for 2007. Since enactment of the reform law, the Service has updated its expense projections. While the Service’s total expenses for fiscal year 2007 have been affected by passage of the act, those expenses not directly related to the act, as well as total revenues, have tracked closely to plan. The Service currently is estimating an overall fiscal year 2007 net loss of $5.2 billion, largely due to changes in projected or actual Postal Service payments resulting from the act, including:

Accelerating funding of the Service’s retiree health benefit obligations: Beginning this fiscal year, the Postal Service must make the first of 10 annual payments into a newly created Postal Service Retiree Health Benefits Fund (PSRHBF) to help fund the Service’s significant unfunded retiree health obligations. The 2007 payment of $5.4 billion is due to be paid by September 30. The Service has accrued half of this expense—$2.7 billion—during the first 6 months of the fiscal year and will accrue $1.35 billion in each of the remaining 2 quarters.

One-time expensing of funds previously set aside in escrow and eliminating future escrow payments: The act requires the Service to transfer the $3.0 billion it escrowed in fiscal year 2006 to the PSRHBF, which the Service recognized as a one-time expense in the first quarter of fiscal year 2007. The reform act also eliminated future escrow payments required under the 2003 pension law, including the $3.3 billion payment scheduled for fiscal year 2007.

Transferring funding for selected military service benefits back to the Treasury: The act significantly reduced the Service’s civil service pension obligations by transferring responsibility for funding civilian pension benefits attributable to the military service of the Service’s retirees back to the Treasury Department, where it had been prior to enactment of the 2003 pension law. The reform act requires that any overfunding attributable to the military benefits as of September 30, 2006, be transferred to the PSRHBF by June 30, 2007.

Eliminating certain annual Civil Service Retirement System (CSRS) pension funding requirements: The act eliminated the requirement that the Service fund the annual normal cost of its civil service employees and the amortization of the unfunded pension obligation that existed prior to transferring the military service obligations to the Treasury Department. 
The Service estimates that it will save $1.5 billion in fiscal year 2007 from eliminating the annual pension funding requirements and amortization payments. The result of these payments is a net increase in retirement-related expenses of $3.9 billion, which is $600 million higher than the expected $3.3 billion escrow payment for 2007 that was eliminated. Thus, the Service is planning to borrow $600 million more than initially budgeted to cover this shortfall. This increase is anticipated to result in the Service’s borrowing $1.8 billion in fiscal year 2007, which would bring its total outstanding debt to $3.9 billion by the end of the fiscal year. (These figures are reconciled in the sketch below.) The Service has identified other factors and uncertainties that, depending on how results vary from budgeted estimates, could have a favorable or unfavorable impact on the Service’s projected net loss for fiscal year 2007. For example, volumes and revenues may be affected by a continued slowdown in the U.S. economy or unanticipated consequences of the recent rate decision. The Service anticipates that economic growth will pick up in the third and fourth quarters of this year, but a slowdown may depress volume growth below projected levels for the rest of the year. Furthermore, the unusual nature of the rate case creates uncertainties for the Service that may affect its financial results. These uncertainties include how the Service and its customers will respond to the:

limited implementation times—the 2-month implementation period (the Postal Service Board of Governors decision on March 19, 2007, stated that most new rates would become effective on May 14, 2007) leaves little time for the Service to educate the public and business mailers on the new rate changes and to allow mailers sufficient time to adjust their mailing practices and operations accordingly;

delayed implementation times—how mailers and the Service will be affected by the delay in implementing new Periodicals rates until mid-July;

magnitude of certain restructured rates, particularly for those specific types of mail that will experience rather significant increases, and the related impact on volumes and revenues; and

unfamiliarity with restructured rates—the prices for many popular products, such as certain types of First-Class Mail, will shift significantly based on the shape of the mail. For example, figure 2 shows how the cost of First-Class Mail will differ based on its shape.

Moreover, the Service’s expense projections may be upset by rising fuel prices, an area in which the Service is vulnerable, or by outcomes of the outstanding contract negotiations for two of its major labor unions that vary from projected levels. Although the extent to which these factors and uncertainties will affect the Service’s financial condition for fiscal year 2007 is not known, they may affect its subsequent financial outlook. For example, if the Service finds that its financial performance and condition are weakening—either through revenue shortfalls or expense increases—it may decide to file another rate increase later this year. The new postal reform law provides new opportunities to address challenges facing the Service as it continues its transformation in a more competitive environment with a variety of electronic alternatives for communications and payments. Specifically, it provides tools and mechanisms to address the challenges of generating sufficient revenues, controlling costs, maintaining service, providing reliable performance information, and managing its workforce. 
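As a check on the moving pieces above, the Service's projected fiscal year 2007 net loss and borrowing can be reconciled directly from the figures reported in this statement. A minimal sketch (all amounts in billions of dollars; the arithmetic uses only numbers given above):

```python
# Income statement side: all figures appear in the statement above.
planned_net_income = 1.7   # pre-act fiscal year 2007 plan
psrhbf_payment     = 5.4   # first annual retiree health prefunding payment
escrow_expense     = 3.0   # one-time expensing of funds escrowed in fiscal year 2006
csrs_savings       = 1.5   # eliminated annual CSRS pension funding requirements

net_income = planned_net_income - psrhbf_payment - escrow_expense + csrs_savings
print(f"Projected fiscal year 2007 net income: {net_income:+.1f}")  # -5.2

# Cash flow side: the eliminated $3.3 billion escrow was a cash set-aside rather
# than a new expense, so borrowing rises only by the gap between the $3.9 billion
# net new retirement-related expense (5.4 - 1.5) and that eliminated payment.
extra_borrowing = (psrhbf_payment - csrs_savings) - 3.3   # 0.6
total_borrowing = 1.2 + extra_borrowing                   # planned 1.2 plus 0.6
print(f"Borrowing: {total_borrowing:.1f}; year-end debt: {2.1 + total_borrowing:.1f}")
```

Note that the one-time $3.0 billion escrow transfer reduces reported net income but not cash, since those funds were already set aside in fiscal year 2006; that is why the projected $5.2 billion net loss is so much larger than the increase in borrowing.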
Effectively using the tools and mechanisms the act provides will be key to successfully implementing it and addressing these challenges. The Service continues to face challenges in generating sufficient revenues as First-Class Mail volume continues to decline and the mail mix changes. First-Class Mail, historically the class of mail with the largest volumes and revenues, saw volume shrink by almost 6 percent from fiscal year 2001 to 2006. The trends for First-Class Mail and Standard Mail, which together account for about 95 percent of mail volumes and 80 percent of revenues, experienced a historical shift in fiscal year 2005. For the first time, Standard Mail volumes exceeded those for First-Class Mail (see fig. 3). This shift has major revenue implications because:

First-Class Mail generates the most revenue and is used to finance most of the Service’s institutional (overhead) costs (see fig. 4).

Standard Mail generates less revenue per piece than First-Class Mail, and it takes about two pieces of Standard Mail to make the same contribution to the Service’s overhead costs as one piece of First-Class Mail.

Standard Mail is a more price-sensitive product than First-Class Mail because it competes with other advertising media. Also, because advertising, including Standard Mail, tends to be affected by economic cycles to a greater extent than First-Class Mail, a larger portion of the Service’s mail volumes is more susceptible to economic fluctuations.

The act provides tools and mechanisms that can help address these revenue challenges by promoting revenue generation and retention of revenues. The act established flexible pricing mechanisms for the Service’s competitive and market-dominant products. For example, it allows the Service to raise rates for its market-dominant products, such as First-Class Mail letters, Standard Mail, and Periodicals, up to a defined price cap; to exceed the price cap should extraordinary or exceptional circumstances arise; and to use any unused rate authority within 5 years. For its competitive products, such as Priority Mail or Expedited Mail, the Service may raise rates as it sees fit, as long as each competitive product covers its costs and competitive products as a whole cover their attributable costs and make a contribution to overhead. The act also allows the Service to retain any earnings, which may promote increased financial stability. First, to the extent the Service can generate net income to retain earnings, this could enhance its ability to weather economic downturns. For example, a slow economic cycle or sudden increase in fuel prices might not necessitate an immediate rate increase if sufficient retained earnings exist to cover related shortfalls. Furthermore, to the extent the Service can retain earnings as liquid assets, it may reduce its reliance on borrowing to offset cash shortfalls. The Service has stated that it will take out debt to cover cash shortfalls in fiscal year 2007 and projects that this increase will result in $3.9 billion of outstanding debt at the end of the year (see fig. 5). Controlling debt will be important because the Service needs to operate within its statutorily set borrowing limits ($3 billion in new debt each year and $15 billion in total debt outstanding). Reducing debt was one of the key factors we cited in removing the Service’s high-risk designation. Uncertainties related to the recent rate decision and reform law may affect the extent to which the Service is able to address its revenue-related challenges. 
The uncertainties include:

How will mailers and volume respond to the new rate decision’s pricing signals?

What types of innovative pricing methods will be allowed?

How will the Service set rates under the new price cap system, and how will mailers respond to this additional flexibility?

How will the Service and mailers be able to modify their information systems to accommodate more frequent rate increases?

How will customer behavior change as prices change under the new system?

To what extent will customers’ desire for mail be affected by privacy concerns, environmental concerns, preference for electronic alternatives, or efforts to establish Do Not Mail lists?

How will the Service be able to enhance the value of its market-dominant and competitive products (e.g., predictable and consistent service, tracking and tracing capabilities)?

What will the Service do with any retained earnings (e.g., improve its capital program, save to weather downturns in the economy)?

The Service faces multiple cost pressures in the near and long term associated with the required multibillion-dollar payments into the PSRHBF, with key cost categories experiencing above-inflation growth while the Service operates under an inflation-based price cap, and with other costs associated with providing universal postal services to a growing network—one now expanding by about 2 million new addresses each year. While the reform act takes actions that increase current costs by improving the balance of retiree health benefit cost burdens between current and future ratepayers, it also eliminates other payments and provides opportunities to offset some of these cost pressures through efficiency gains that could restrain future rate increases. It will be crucial for the Service, however, to take advantage of this opportunity and achieve sustainable, realizable cost reductions and productivity improvements throughout its networks. Personnel expenses (which include wages, employee and retiree benefits, and workers’ compensation) have consistently accounted for nearly 80 percent of annual operating expenses, even though the Service has downsized its workforce by over 95,000 employees since fiscal year 2001. The Service’s personnel expenses have grown at rates exceeding inflation since fiscal year 2003 and are expected to continue dominating the Service’s financial condition. In particular, growth in retiree health benefit costs has, on average over the last 5 years, exceeded inflation by almost 13 percent each year. This growth is expected to continue because of (1) rising premiums, growth in the number of covered retirees and survivors, and increases in the Service’s share of the premiums and (2) the Service’s continuing payment of the employer’s share of its retirees’ health insurance premiums, along with the required payments, ranging from $5.4 billion to $5.8 billion, into the PSRHBF in each of the following 9 years. While we recognize the cost pressures that will be placed on the Service as it begins prefunding its retiree health benefit obligations, we continue to believe that such action is appropriate to improve the fairness and balance of cost burdens for current and future ratepayers. Furthermore, beginning in fiscal year 2017, the Service might enjoy a significant reduction in its retiree health costs if its obligations are fully funded. In addition to these personnel expenses, the Service has also experienced growth in its transportation costs that exceeded the rate of inflation in fiscal years 2005 and 2006. 
Transportation costs represent the second largest cost category behind compensation and benefits. These costs grew by about 11 percent from fiscal year 2005 to 2006, largely due to rising fuel costs. In a February 2007 report, we stated that the Service is vulnerable to fuel price fluctuations and will be challenged to control fuel costs due to its expanding delivery network and inability to use surcharges. The Service has made some progress in containing cost growth and has pledged to cut another $5 billion of costs out of its system between fiscal years 2006 and 2010 through productivity increases and operational improvements. The Service has reported productivity increases for the last 7 years, but the reported increase in fiscal year 2006 was its smallest during this period. The Service has recently had trouble absorbing gains in mail volumes while achieving targeted workhour reductions. Although the Service has reduced its workhours in 6 of the last 7 years, it fell well short of its fiscal year 2006 goal: it aimed to reduce workhours by 42 million but reported a decrease of only 5 million. While both the recent rate decision and the reform act seek to improve efficiencies in the postal networks, these developments will pose challenges to the Service. In terms of the rate case, the Service will be challenged to modify its mail processing and transportation networks to respond to changes in mailer behavior (e.g., in the quantity and types of mail sent and how mail is prepared) as mailers adjust their practices to minimize their rates. Furthermore, the reform act provides an opportunity to address the Service’s cost challenges because it requires the Service to develop a plan that, among other things, includes a strategy for how the Service intends to rationalize the postal facilities network and remove excess processing capacity and space from the network, as well as an identification of the cost savings and other benefits associated with the network rationalization alternatives discussed in the plan. This plan provides an opportunity to address some concerns we have raised in previous work, in which we stated that it was not clear how the Service intended to realign its processing and distribution network and workforce, and that its strategy lacked sufficient transparency and accountability, excluded stakeholder input, and lacked performance measures for results. We are currently conducting work on the Service’s progress in this area over the past 2 years and will be issuing a report this summer with updated findings. Taking advantage of the opportunities available will have a direct impact on the Service’s ability to operate under an inflation-based rate cap, achieve positive results, and limit the growth in its debt. If the Service is unable to achieve significant cost savings, it may have to take other actions, such as borrowing an increasing amount each year to make year-end property and equipment purchases and fund its retiree health obligations. The following uncertainties may have a significant impact on the Service’s ability to achieve real cost savings and productivity in the future:

How will operating under a rate cap provide an incentive to control costs?

How will the Service operate under a rate cap if certain key costs continue to increase at levels above inflation (e.g., health benefit costs)?

How will the new rate designs and structure lead to efficiency improvements throughout the mail stream?

Will the Service’s implementation of its network realignment result in greater cost savings and improved efficiency? 
Will the Service achieve its expected return on investment and operational performance when it deploys the next phase of automated flat sorting equipment?

How will the Service’s financial situation be affected when the 10 years of scheduled payments into the PSRHBF are completed?

Will the balance of the PSRHBF—which is a function of the PSRHBF’s investment returns and the growth in the Service’s retiree health obligations—be sufficient to cover the Service’s retiree health obligations by the end of fiscal year 2016?

The Service will be challenged to continue carrying out its mission of providing high-quality delivery and retail services to the American people. Maintaining these services while establishing reliable mechanisms for measuring and reporting performance will be critical to the Service’s ability to function effectively in a competitive market and meet the needs of various postal stakeholders, including:

The Service—so that it can effectively manage its nationwide service and respond to changes and/or problems in its network.

The Service’s customers (who may choose other alternatives to the mail)—so that they are aware of the Service’s expected performance, can tailor their operations to those expectations, and understand the Service’s actual performance against those targets.

Oversight bodies—so that they are aware of the Service’s ability to carry out its mission while effectively balancing costs, service needs, and the rate cap; can hold the Service accountable for its performance; and understand service performance (whether reported problems are widespread or service is getting better or worse).

The Service’s delivery performance standards and results have been a long-standing concern for mailers and Congress. We found that inadequate information is collected and made available to both the Service and others to understand and address delivery service issues. Specifically, the Service does not measure and report its delivery performance for most types of mail (representative measures of delivery performance cover less than one-fifth of mail volume and do not include key types of mail such as Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services); certain performance standards are outdated; and progress has been hindered by a lack of management commitment and collaboration by the Service and mailers. Based on these findings, we recommended the Service take actions to modernize its delivery service standards, develop a complete set of delivery service measures, collaborate more effectively with mailers, and improve transparency by publicly disclosing delivery performance information. The Service has recently reported positive delivery results for the limited segment of mail for which it does track performance. It has reported that on-time delivery performance improved in the first quarter of fiscal year 2007 for some single-piece First-Class Mail. However, problems such as late deliveries have been reported in places such as Chicago, Los Angeles, and El Paso and for types of mail such as Standard Mail and Periodicals. Figure 6 shows that delivery performance in Chicago for this type of mail was worse than the national average at the end of the first quarter of this fiscal year. The reform act provides an opportunity for the Service to address this challenge by establishing requirements for maintaining, measuring, and reporting on service performance. 
Specifically, the act identified four key objectives for modern service standards:
- enhance the value of postal services to both senders and recipients;
- preserve regular and effective access to postal services in all communities, including those in rural areas or where post offices are not self-sustaining;
- reasonably assure Postal Service customers delivery reliability, speed, and frequency consistent with reasonable rates and best business practices; and
- provide a system of objective external performance measurements for each market-dominant product as a basis for measurement of Postal Service performance.

The act also required the Service to implement modern delivery performance standards, set goals for meeting these standards, and annually report on its delivery speed and reliability for each market-dominant product. Key steps specified in the act include that, within 12 months of enactment (by December 2007), the Service must issue modern service standards, and within 6 months of issuing service standards the Service must, in consultation with the PRC, develop and submit to Congress a plan for meeting those standards. Furthermore, within 90 days after the end of each fiscal year, the Service must report to the PRC on the quality of service for each market-dominant product in terms of speed of delivery and reliability, as well as the degree of customer satisfaction with the service provided. These requirements provide opportunities to resolve long-standing deficiencies in this area. As the Service transitions to the new law, the following uncertainties may affect its ability to address challenges in maintaining, measuring, and reporting service performance in the future:
- How will the Service implement representative measures of delivery speed and reliability within the time frames of the reform act, while taking cost and technological limitations into account?
- How much transparency will be provided to the PRC, Congress, mailers, and the American people, including the frequency, detail, and methods of reporting?

Another challenge facing the Service is to provide reliable data to management, regulators, and oversight entities to assess financial performance. Accurate and timely data on Service costs, revenues, and mail volumes help provide appropriate transparency and accountability so that all postal stakeholders can have a comprehensive understanding of the Service's financial condition and outlook and of how postal rates are aligned with costs. Earlier I discussed the past issues we have raised related to the Service's financial reporting and the improvements that the Service has recently made. We have also reported on long-standing ratemaking data quality issues that persist. The act establishes new reporting and accounting requirements that should help to address this challenge. The major change is the establishment of, and authority provided to, the new PRC to help enhance the collection and reporting of information on postal rates and financial performance (see table 2). Service officials have acknowledged the importance of financial reporting but stated that there are cost implications associated with these improvements. The Service has recognized that it will incur costs in complying with the Securities and Exchange Commission's (SEC) internal control reporting rules and in making the changes needed to provide separate information for competitive and market-dominant products.
We have reported that significant costs have been associated with complying with the SEC's implementing regulations for section 404 of the Sarbanes-Oxley Act, but we have also reported that these costs are expected to decline in subsequent years, given the first-year investment in documenting internal controls. As the Service transitions to these new reporting and accounting requirements, its ability to address future challenges in this area will be affected by uncertainties including:
- How will the PRC use its discretion to define and implement the new statutory structure?
- What criteria will the PRC use for evaluating the quality, completeness, and accuracy of ratemaking data, including the underlying accounting data and additional data used to attribute costs and revenues to specific types of mail?
- How will the PRC balance the need for high-quality ratemaking data with the time and expense involved in obtaining the data?
- How will the PRC structure any proceedings to improve the quality of ratemaking data and enable the Service and others to participate in such proceedings?
- What proceedings might the PRC initiate to address data quality deficiencies and issues that it has raised in its recent decision on the rate case?
- How will the Service be affected by the costs associated with complying with the SEC rules for implementing section 404 of the Sarbanes-Oxley Act, as well as with the requirement to provide separate information for competitive and market-dominant products?

The Service will be challenged to manage its workforce as it transitions to operating in a new postal environment. The Service is one of the nation's largest employers, with almost 800,000 full- and part-time workers. Personnel-related costs, which include compensation and benefits, are the Service's major cost category and are expected to increase due to the reform act's requirement to begin prefunding retiree health benefit costs. We have reported on the human capital challenges facing the Service but have found that the Service has made progress in addressing some of these challenges by managing retirements, downsizing, and expanding the use of automation. Provisions in the reform act related to workforce management can build on these successes. As part of the Postal Service Plan mandated by the act, the Service must describe its long-term vision for realigning its workforce and how it intends to implement that vision. This plan is to include a discussion of what impact any facility changes may have on the postal workforce and whether the Postal Service has sufficient flexibility to make needed workforce changes. The Service, however, faces human capital challenges that will continue to affect its financial condition and outlook:

Outstanding labor agreements: Labor agreements with the Service's four major unions expired late in calendar year 2006. In January 2007, the Service reached agreements with two of these unions that include semiannual cost-of-living adjustments (COLAs) and scheduled salary increases. Labor agreements, however, remain outstanding for the other two unions, which cover over 42 percent of the Service's career employees.

Workforce realignment: As the Service continues to make significant changes to its operations (i.e., rationalize its facilities, increase automation, improve retail access, and streamline its transportation network), it will be challenged to realign its workforce based on these changes. This challenge may become more significant as mailers alter their behavior in response to the new rate structure.
These actions will require a different mix in the number, skills, and deployment of the Service's employees and may involve repositioning, retraining, outsourcing, and/or reducing the workforce.

Retirements: The Service expects a significant portion of its career workforce—over 113,000 employees—to retire within the next 5 years. In particular, it expects nearly half of its executives to retire during this time. The Service's decisions regarding these retirements (that is, whether or not to fill these positions, and if so, when, with whom, and where) may have significant financial and operational effects.

The following uncertainties will affect the Service's ability to address workforce-related challenges in the future:
- How will the Service be able to respond to operational changes?
- How will the Service balance the varying needs of diverse customers when realigning its delivery and processing networks?
- How will employees and employee organizations be affected by, and informed of, network changes, and how will the Service monitor the workplace environment?
- How will the resolutions of the outstanding labor agreements affect the Service's financial condition?
- How will the Service take advantage of flexibilities, including allowing more casual workers to deal with peak operating periods?

The Postal Service, the PRC, and mailers face a challenging environment with significant changes to make in the coming months related to implementing the recent rate decision and the new postal reform law. We have identified several major issues considered significant by various postal stakeholders, as well as areas related to implementation of the law that will warrant continued oversight. Focusing attention on these issues during this important transition period will help to ensure that the new statutory and regulatory requirements are carried out according to the intent of the reform act and that the Service's future financial condition is sound. These key issues and areas for continued oversight include:
- the effect of the upcoming rate increases and statutory changes on the Postal Service's financial condition;
- the decision by the Service whether or not to submit a rate filing under the old rate structure;
- actions by the PRC to establish a new price-setting and regulatory framework;
- the Service's ability to operate under an inflationary price cap while some of its cost segments are increasing above the rate of inflation;
- actions by the Service, in consultation with the PRC, to establish modern service standards and performance measures, and the Postal Service's plan for meeting those standards;
- the Service's ability to maintain high-quality delivery service as it takes actions to reduce costs and realign its infrastructure and workforce; and
- the PRC's development of appropriate accounting and reporting requirements aimed at enhancing transparency and accountability of the Service's internal data and performance results.

One of the most important decisions to monitor in the short term is whether the Service decides to file another rate increase before the new rate structure takes effect. In deciding whether to file under the new or old system, the Service must weigh the respective costs, benefits, and possible unintended consequences, including its need for new rates and the time and resources that the Service, the PRC, and the mailing industry would require to proceed under either system.
For example, the Service may benefit from filing under the old system because doing so would allow it to further align prices with costs prior to moving into price-cap restrictions. Under the old rules, the Service would have to satisfy the "break-even" requirement that postal revenues equal, as nearly as practicable, its total estimated costs. Under the new rules, the Service would have to ensure that rate increases for its market-dominant products do not exceed a cap based on the growth in the Consumer Price Index. Filing under the old system, however, could put additional strain on mailers and the PRC. In particular, the PRC would be reviewing the Service's rate submission while transitioning to its new roles and responsibilities under the legislation: establishing a new organization structure and a new regulatory framework with new rules and reporting requirements, which must include time for public input, as well as meeting a multitude of additional requirements. Recognizing these challenges, the Chairman of the PRC has suggested, and asked for public comments on, the possibility that rather than expending resources on extending the application of the old system, the PRC would work with the Service and mailers to implement the new regulatory system even sooner than the 18 months allotted by the new law. This action could allow the Service to implement new rates sooner under the new regulatory system, depending upon when the PRC completes its work and when the Service chooses to file new rates. The Service's decision will affect not only its financial performance and condition, but also the mailing industry and the focus of the PRC.

Another key provision of the law that warrants close oversight is the requirement for the Service to develop modern service standards. We are encouraged by the Service's actions to date to establish a workgroup that includes participants from the mailing industry to review and provide recommendations on service standards and measures. This workgroup is expected to complete its work in September of this year, and the Service is to make its decisions on the new service standards by December 20, 2007. The Service then has 6 months to provide Congress with a plan on how it intends to meet these standards, as well as its strategy for realigning and removing excess capacity from the postal network. We believe this plan is a particularly important opportunity to increase transparency in these areas, especially given the changes to the Service's plans for network realignment and the limited information available to the public. We will report this summer on the status and results of the Service's efforts to realign its mail processing network.

Finally, the PRC's role in developing reporting requirements is critical to enhancing the Service's transparency regarding its performance results. Congress was particularly mindful in crafting the reform act to ensure that the provisions for additional pricing flexibility were balanced with increased transparency, oversight, and accountability. The new law provides the regulator with increased authority to establish reporting rules and to monitor the Service's compliance with service standards on an annual basis. The successful transformation of the Postal Service will depend heavily upon innovative leadership by the Postmaster General and the Chairman of the PRC, and on their ability to work effectively with their employees, employee organizations, the mailing industry, Congress, and the general public.
It will be important for all postal stakeholders to take full advantage of the unique opportunities that are currently available by providing input and working together, particularly as challenges and uncertainties will continue to threaten the Service's financial condition and outlook. Chairman Carper, this concludes my prepared statement. I would be pleased to respond to any questions that you or the Members of the Subcommittee may have.

For further information regarding this statement, please contact Katherine Siggerud, Director, Physical Infrastructure Issues, at (202) 512-2834 or at [email protected]. Individuals making key contributions to this statement included Teresa Anderson, Joshua Bartzen, Kenneth John, John Stradling, Jeanette Franzel, Shirley Abel, Scott McNulty, and Kathy Gilhooly.

Postal Service submits proposal to the Postal Rate Commission (PRC):
- Requests rate increases effective May 2007.
- Establishes a pricing structure based on mail weights and shapes: revises the old structure, which was primarily weight-based; recognizes that different mail shapes have different processing costs; and gives mailers an opportunity to minimize their rates by altering the shape of their mail.

2/26/07: PRC issues recommended decision on the Service's proposal.
- Issued after a detailed administrative proceeding involving mailers, employee organizations, consumer representatives, and competitors.
- Recommends revisions to many of the rates and rate designs submitted by the Service; increases rates substantially for some types of mail. Revised rates are intended to more accurately reflect costs and send proper price signals.
- Concurs with the shape-based pricing structure and, according to the PRC, the change in rates will still meet the Service's revenue needs.
- Anticipated that this would be the last rate case initiated prior to implementation of the new rate structure established under the reform legislation, and explained that its recommended rates are intended to provide a sound foundation for the future.

3/19/07: Postal Service's Board of Governors issues decision to implement the PRC-recommended rates.
- Implements most rates effective May 14, 2007.
- Asks the PRC to reconsider some rates, most notably those for flat-sized Standard Mail, which is generally advertising and direct mail solicitations (this could lead to further changes in these rates).
- Delays rate implementation for Periodicals for over 2 months, citing reactions of magazine mailers and the publishing industry's need to update software.

The Forever Stamp will sell at the First-Class one-ounce letter rate and will continue to be worth the price of a First-Class one-ounce letter even if that price changes.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

When GAO originally placed the U.S. Postal Service's (the Service) transformation efforts and long-term outlook on its high-risk list in early 2001, it was to focus urgent attention on the Service's deteriorating financial situation. Aggressive action was needed, particularly in cutting costs, improving productivity, and enhancing financial transparency.
GAO has testified several times since 2001 that comprehensive postal reform legislation was needed to address the Service's unsustainable business model, which assumed that increasing mail volume would cover rising costs and mitigate rate increases. This outdated model limited the flexibility and incentives the Service needed to realize cost savings sufficient to offset rising costs, declining First-Class Mail volumes, unfunded obligations, and an expanding delivery network. This limitation threatened the Service's ability to achieve its mission of providing affordable, high-quality universal postal services on a self-financing basis. This testimony focuses on (1) why GAO recently removed the Service's transformation efforts and outlook from GAO's high-risk list, (2) the Service's financial condition in fiscal year 2007, (3) the opportunities and challenges facing the Service, and (4) major issues and areas for congressional oversight. This testimony is based on GAO's past work, a review of the postal reform law, and updated information on the Service's financial condition.

Key actions by both the Service and Congress led GAO to remove the Service's transformation efforts and long-term outlook from its high-risk list in January 2007. Specifically, the Service developed a Transformation Plan and achieved billions in cost savings, improved productivity, downsized its workforce, and improved its financial reporting. Congress enacted a law in 2003 that reduced the Service's annual pension expenses, which enabled it to achieve record net incomes, repay debt, and delay rate increases until January 2006. Finally, the postal reform law enacted in December 2006 provides tools and mechanisms that can be used to address key challenges facing the Service as it moves into a new regulatory and increasingly competitive environment.

The two key factors that will affect the Service's financial condition for this fiscal year are the new reform law and the new postal rates that go into effect in May. The reform law increases the costs of funding retiree health benefits but provides opportunities to offset some of these cost pressures through efficiency gains and the elimination of certain pension payments. For the rest of the year, Service officials do not expect significant changes from projected expenses and revenues. Other factors, such as fuel costs or labor agreement resolutions that vary from plan, could affect the Service's projected outcome for this fiscal year.

Congress's continued oversight of the Service's transformation is critical at this time of significant change for the Service, the Postal Regulatory Commission (PRC), and the mailing industry. Also key to a successful transformation is innovative leadership by the Postmaster General and the PRC Chairman and their ability to work effectively with stakeholders to realize the new opportunities provided under the postal reform law. GAO has identified key issues and areas for oversight related to implementing the reform law and the new rate-setting structure, as well as other challenges, to help ensure that the Service remains financially sound.
The Social Security Act of 1935 authorized the Social Security Administration (SSA) to establish a record-keeping system to manage the Social Security program, which resulted in the creation of the SSN. Through a process known as "enumeration," SSA creates a unique number for each individual to serve as a record of his or her work history and retirement benefits. Today, SSA issues SSNs to most U.S. citizens, as well as to noncitizens lawfully admitted to the United States with permission to work. Because the SSN is unique for every individual, both the public and private sectors increasingly use it as a universal identifier. This increased use, as well as increased electronic record keeping by both sectors, has eased access to SSNs and potentially made this information more vulnerable to misuse, including identity theft. Specifically, SSNs are a key piece of information used to create false identities for financial misuse or to assume another individual's identity. Most often, identity thieves use SSNs belonging to real people. However, the Federal Trade Commission's (FTC) identity theft victim complaint data have shown that only 30 percent of identity theft victims know how thieves obtained their personal information. The FTC estimated that over a 1-year period, nearly 10 million people discovered they were victims of identity theft, translating into estimated losses of billions of dollars.

There is no one law that regulates the overall use of SSNs by all levels and branches of government. However, the use and disclosure of SSNs by the federal government is generally restricted under the Privacy Act of 1974. Broadly speaking, this act seeks to balance the government's need to maintain information about individuals with the rights of individuals to be protected against unwarranted invasions of their privacy. Section 7 of the act requires that any federal, state, or local government agency, when requesting an SSN from an individual, tell the individual whether disclosing the SSN is mandatory or voluntary, cite the statutory or other authority under which the request is being made, and state what uses it will make of the individual's SSN. Additional federal laws also place restrictions on public and private sector entities' use and disclosure of consumers' personal information, including SSNs, in specific instances. As shown in table 1, some of these laws require certain industries, such as the financial services industry, to protect individuals' personal information to a greater degree than entities in other industries. In 1998, Congress also enacted a federal statute that criminalizes fraud in connection with the unlawful theft and misuse of personally identifiable information, including SSNs. The Identity Theft and Assumption Deterrence Act made it a criminal offense for a person to "knowingly transfer, possess, or use without lawful authority" another person's means of identification "with the intent to commit, or to aid or abet, or in connection with, any unlawful activity that constitutes a violation of Federal law, or that constitutes a felony under any applicable state or local law." Under the act, an individual's name or Social Security number is considered a "means of identification." In addition, in 2004, the Identity Theft Penalty Enhancement Act established the offense of aggravated identity theft in federal criminal law, which is punishable by a mandatory 2-year prison term. Many states have also enacted laws to restrict the use and display of SSNs.
For example, in 2001, California enacted a law that generally prohibited companies and persons from engaging in certain activities with SSNs, such as posting or publicly displaying SSNs or requiring people to transmit an SSN over the Internet unless the connection is secure or the number is encrypted. In our prior work, we identified 13 states—Arizona, Arkansas, Connecticut, Georgia, Illinois, Maryland, Michigan, Minnesota, Missouri, Oklahoma, Texas, Utah, and Virginia—that have passed laws similar to California's. While some states, such as Arizona, have enacted virtually identical restrictions on the use and display of SSNs, other states have modified the restrictions in various ways. For example, unlike the California law, which prohibits the use of the full SSN, the Michigan statute prohibits the use of more than four sequential digits of the SSN. Some states have also enacted other types of restrictions on the use of SSNs. For example, Arkansas, Colorado, and Wisconsin prohibit the use of a student's SSN as an identification number. Other recent state legislation places restrictions on state and local government agencies, such as Indiana's law that generally prohibits state agencies from releasing SSNs unless otherwise required by law.

A number of federal laws and regulations require agencies at all levels of government to frequently collect and use SSNs for various purposes. Beginning with a 1943 Executive Order issued by President Franklin D. Roosevelt, federal agencies were required to use the SSN exclusively in identification systems for individuals, rather than setting up new identification systems. In later years, the number of federal agencies and others relying on the SSN as a primary identifier escalated dramatically, in part because a number of federal laws were passed that authorized or required its use for specific activities. For example, agencies use SSNs for internal administrative purposes, including identifying, retrieving, and updating records, and for employee-related activities such as payroll, wage reporting, and providing employee benefits; to collect debts owed to the government; to ensure program integrity, for example, by matching records with state and local correctional facilities to identify individuals for whom the agency should terminate benefit payments; and to conduct or support statistics, research, and evaluation. Table 2 provides an overview of federal statutes that address government collection and use of SSNs. In some cases, these statutes require that state and local government entities collect SSNs.

Some government agencies also collect SSNs because of their responsibility for maintaining public records, that is, records the government generally makes available to the public for inspection. Because these records are open to the public, such government agencies, primarily at the state and local levels, provide access to the SSNs sometimes contained in those records. Based on a survey of federal, state, and local governments, we reported in 2004 that state agencies in 41 states and the District of Columbia displayed SSNs in public records; this was also true in 75 percent of U.S. counties. We also found that while the number and type of records in which SSNs were displayed varied greatly across states and counties, SSNs were most often found in court and property records.
Public records displaying SSNs are stored in multiple formats, such as electronic, microfiche and microfilm, or paper copy. While our prior work found that public access to such records was often limited to inspection of individual paper copies in public reading rooms or clerks' offices, or to requests by mail, some agencies also made public records available on the Internet. In recent years, some agencies have begun to take measures to change the ways in which they display or provide access to SSNs in public records. For example, some state agencies have reported removing SSNs from electronic versions of records, replacing SSNs with alternative identifiers in records, restricting record access to individuals identified in the records, or allowing such individuals to request the removal of their SSNs from these records.

Certain private sector entities, such as information resellers, consumer reporting agencies (CRAs), and health care organizations, collect SSNs from public and private sources, as well as from their customers, and primarily use SSNs for identity verification purposes. In addition, banks, securities firms, telecommunication firms, and tax preparers engage in third-party contracting and sometimes share SSNs with their contractors for limited purposes, generally when it is necessary and unavoidable. Information resellers are businesses that specialize in amassing personal information, including SSNs, and offering informational services. They provide their services to a variety of customers, such as specific business clients, or through the Internet to the general public. Large or well-known information resellers reported that they obtain SSNs from various public records, such as records of bankruptcies, tax liens, civil judgments, criminal histories, deaths, and real estate transactions. However, some of these resellers said they are more likely to rely on SSNs obtained directly from their clients, who may voluntarily provide such information, than on those found in public records. In addition, in our prior review of information resellers that offer their services through the Internet, we found that their Web sites most frequently identified public sources, nonpublic sources, or both as their sources of information. For example, a few Internet resellers offered to conduct background investigations on individuals by compiling information from court records and using a credit bureau to obtain consumer credit data.

CRAs, also known as credit bureaus, are agencies that collect and sell information about the creditworthiness of individuals. Like information resellers, CRAs obtain SSNs from public and private sources. For example, CRA officials reported that they obtain SSNs from public sources, such as bankruptcy records. We also found that these companies obtain SSNs from other information resellers, especially those that specialize in collecting information from public records. However, CRAs are more likely to obtain SSNs from businesses that subscribe to their services, such as banks, insurance companies, mortgage companies, debt collection agencies, child support enforcement agencies, credit grantors, and employment screening companies. Organizations that provide health care services, including health care insurance plans and providers, are less likely to obtain SSNs from public sources. These organizations typically obtain SSNs either from individuals themselves or from companies that offer health care plans.
For example, individuals enrolling in a health care plan provide their SSNs as part of their plan applications. In addition, health care providers, such as hospitals, often collect SSNs as part of the process of obtaining information on insured people. We found that the primary use of SSNs by information resellers, CRAs, and health care organizations is to help verify the identity of individuals. Large information resellers reported that they generally use the SSN as an identity verification tool, though they also use it for matching internal databases, identifying individuals for their product reports, or conducting resident or employment screening investigations for their clients. CRAs use SSNs as the primary identifier of individuals in order to match information they receive from their business clients with information on individuals already stored in their databases. Finally, health care organizations also use the SSN, together with information such as name, address, and date of birth, for identity verification.

In addition to their own direct use of customers' SSNs, private sector entities also share this information with their contractors. According to experts, approximately 90 percent of businesses contract out some activity, because they find it more economical to do so or because other companies are better able to perform these activities. Banks, investment firms, telecommunication companies, and tax preparation companies we interviewed for our prior work routinely obtain SSNs from their customers for authentication and identification purposes and contract with other companies for various services, such as data processing, administrative, and customer service functions. Company officials reported that customer information, such as SSNs, is shared with contractors for limited purposes, generally when it is necessary or unavoidable. Further, these companies included certain provisions in their standard contract forms aimed at safeguarding customers' personal information. For example, the forms included provisions on electronic and physical data protections, audit rights, data breach notifications, subcontractor restrictions, and data handling and disposal requirements.

Although federal and state laws have helped to restrict SSN use and display, and public and private sector entities have taken some steps to further protect this information, our prior work identified several remaining vulnerabilities. While government agencies have since taken actions to address some of the identified SSN protection vulnerabilities in the public sector, the private sector vulnerabilities that we previously identified have not yet been addressed. Consequently, in both sectors, SSNs remain vulnerable to potential misuse by identity thieves and others.

In our prior work, we identified several vulnerabilities in protecting SSNs in the public sector, some of which agencies have since addressed. For example, in our review of government uses of SSNs, we found that some federal, state, and local agencies, when requesting SSNs from individuals, do not consistently fulfill the Privacy Act requirements that they inform individuals whether SSN disclosure is mandatory or voluntary, provide the statutory or other authority under which the SSN request is made, or indicate how the SSN will be used.
To help address this inconsistency, we recommended that the Office of Management and Budget (OMB) direct federal agencies to review their practices for providing the required information, and OMB has since implemented this recommendation. Some federal agencies have also taken action in response to our previous finding that millions of SSNs are subject to exposure on individual identity cards issued under federal auspices. Specifically, in 2004, we reported that an estimated 42 million Medicare cards, 8 million Department of Defense (DOD) insurance cards, and 7 million Department of Veterans Affairs (VA) beneficiary cards displayed entire 9-digit SSNs. While the Centers for Medicare and Medicaid Services, which has the largest number of cards displaying the entire 9-digit SSN, does not plan to remove the SSN from Medicare identification cards, VA and DOD have begun taking action to remove SSNs from their cards. For example, between 2004 and 2009, VA is eliminating SSNs from 7 million VA identification cards by replacing cards that display SSNs or issuing new cards without SSNs, until all such cards have been replaced.

However, some of the vulnerabilities we identified in public sector SSN protection have not been addressed. For example, while the Privacy Act and other federal laws prescribe actions agencies must take to assure the security of SSNs and other personal information, we found that these requirements may not be uniformly observed by agencies at all levels of government. In addition, in our review of SSNs in government agency-maintained public records, we found that SSNs are widely exposed to view in a variety of these records. While some agencies reported taking actions such as removing SSNs from electronic versions of records, without a uniform and comprehensive policy, SSNs in these records remain vulnerable to potential misuse by identity thieves. Consequently, in both instances, we suggested that Congress consider convening a representative group of federal, state, and local officials to develop a unified approach to safeguarding SSNs used in all levels of government.

Some steps have since been taken at the federal level to promote interagency discussion of SSN protection, such as the creation of the President's Identity Theft Task Force in 2006 to increase the safeguards on personal data held by the federal government. In April 2007, the Task Force completed its work, which resulted in a strategic plan aimed at making the federal government's efforts more effective and efficient in the areas of identity theft awareness, prevention, detection, and prosecution. The plan's recommendations focus in part on increasing the safeguards employed by federal agencies and the private sector with respect to the personal data they maintain, including decreasing the unnecessary use of SSNs in the public sector. To that end, last month, OMB issued a memorandum requiring federal agencies to examine their use of SSNs in systems and programs in order to identify and eliminate instances in which collection or use of the SSN is unnecessary. In addition, the memo requires federal agencies to participate in governmentwide efforts to explore alternatives to agency use of SSNs as personal identifiers for both federal employees and in federal programs.

In our reviews of private sector entities' collection and use of SSNs, we found variation in how different industries are covered by federal laws protecting individuals' personal information.
For example, although federal laws place restrictions on reselling some personal information, these laws apply only to certain types of private sector entities, such as financial institutions. Consequently, information resellers are not covered by these laws, and there are few restrictions placed on these entities' ability to obtain, use, and resell SSNs. However, recently proposed federal legislation, if enacted, may help to address this vulnerability. For example, the SSN Protection Act of 2007, as introduced by Representative Edward Markey, would give the Federal Trade Commission (FTC) rulemaking authority to restrict the sale and purchase of SSNs and determine appropriate exemptions. The proposed legislation would therefore improve SSN protection while also permitting limited exceptions to the purchase and sale of SSNs for certain purposes, such as law enforcement or national security. Vulnerabilities also exist in federal law and agency oversight for different industries that share SSNs with their contractors. For example, while federal law and oversight of the sharing of personal information in the financial services industry are very extensive, federal law and oversight of the sharing of personal information in the tax preparation and telecommunications industries are somewhat lacking. Specific actions to address these vulnerabilities in federal laws have not yet been taken, leaving SSNs maintained by information resellers and by contractors in the tax preparation and telecommunications industries potentially exposed to misuse, including identity theft.

We also found a gap in federal law addressing SSN truncation, a practice that would improve SSN protection if standardized. Specifically, in our Internet resellers report, several resellers provided us with truncated SSNs showing the first five digits, though other entities truncate SSNs by showing the last four digits. Therefore, because of the lack of SSN truncation standards, even truncated SSNs remain vulnerable to potential misuse by identity thieves and others. While we suggested that the Congress consider enacting standards for truncating SSNs or delegating authority to SSA or some other governmental entity to do so, SSN truncation standards have yet to be addressed at the federal level.

The use of SSNs as a key identifier in both the public and private sectors will likely continue, as there is currently no other widely accepted alternative. However, because of this widespread use of SSNs and the vulnerabilities that remain in protecting this identifier in both sectors, SSNs continue to be susceptible to misuse by identity thieves and others. Given the significance of the SSN in committing fraud or stealing an individual's identity, it would be helpful to take additional steps to protect this number. As the Congress moves forward in pursuing legislation to address SSN protection and identity theft, focusing the debate on vulnerabilities that have already been documented may help target efforts and policy directly toward immediate improvements in SSN protection. To this end, we look forward to supporting the Subcommittee and the Congress however we can to further ensure the integrity of SSNs. Related to this, we have issued a report on the federal government's provision of SSNs to state and local public record keepers, and we have also recently begun a review of the bulk sale of public records containing SSNs, including how federal law protects SSNs in these records when they are sold to entities both here and overseas.
Mr. Chairman, this concludes my prepared testimony. I would be pleased to respond to any questions you or other members of the subcommittee may have. For further information regarding this testimony, please contact me at [email protected] or (202) 512-7215. In addition, contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this statement. Individuals making key contributions to this testimony include Jeremy Cox, Rachel Frisk, Ayeke Messam, and Dan Schwimer.

Social Security Numbers: Internet Resellers Provide Few Full SSNs, but Congress Should Consider Enacting Standards for Truncating SSNs. GAO-06-495. Washington, D.C.: May 17, 2006.
Social Security Numbers: More Could Be Done to Protect SSNs. GAO-06-586T. Washington, D.C.: March 30, 2006.
Social Security Numbers: Stronger Protections Needed When Contractors Have Access to SSNs. GAO-06-238. Washington, D.C.: January 23, 2006.
Social Security Numbers: Federal and State Laws Restrict Use of SSNs, yet Gaps Remain. GAO-05-1016T. Washington, D.C.: September 15, 2005.
Social Security Numbers: Governments Could Do More to Reduce Display in Public Records and on Identity Cards. GAO-05-59. Washington, D.C.: November 9, 2004.
Social Security Numbers: Use Is Widespread and Protections Vary in Private and Public Sectors. GAO-04-1099T. Washington, D.C.: September 28, 2004.
Social Security Numbers: Use Is Widespread and Protections Vary. GAO-04-768T. Washington, D.C.: June 15, 2004.
Social Security Numbers: Private Sector Entities Routinely Obtain and Use SSNs, and Laws Limit the Disclosure of This Information. GAO-04-11. Washington, D.C.: January 22, 2004.
Social Security Numbers: Ensuring the Integrity of the SSN. GAO-03-941T. Washington, D.C.: July 10, 2003.
Social Security Numbers: Government Benefits from SSN Use but Could Provide Better Safeguards. GAO-02-352. Washington, D.C.: May 31, 2002.

Since its creation, the Social Security number (SSN) has evolved beyond its intended purpose to become the identifier of choice for public and private sector entities, and it is now used for myriad non-Social Security purposes. This is significant because a person's SSN, name, and date of birth are the key pieces of personal information used to perpetrate identity theft. Consequently, the potential for misuse of the SSN has raised questions about how private and public sector entities obtain, use, and protect SSNs. Accordingly, this testimony focuses on describing the (1) use of SSNs by government agencies, (2) use of SSNs by the private sector, and (3) vulnerabilities that remain in protecting SSNs. For this testimony, we primarily relied on information from our prior reports and testimonies that address public and private sector use and protection of SSNs. These products were issued between 2002 and 2006 and are listed in the Related GAO Products section at the end of this statement. We conducted our reviews in accordance with generally accepted government auditing standards.

A number of federal laws and regulations require agencies at all levels of government to frequently collect and use SSNs for various purposes.
For example, agencies frequently collect and use SSNs to administer their programs, link data for verifying applicants' eligibility for services and benefits, and conduct program evaluations. In the private sector, certain entities, such as information resellers, collect SSNs from public sources, private sources, and their customers and use this information for identity verification purposes. In addition, banks, securities firms, telecommunication firms, and tax preparers engage in third-party contracting and consequently sometimes share SSNs with their contractors for limited purposes.

Vulnerabilities persist in federal laws addressing SSN collection and use by private sector entities. In particular, we found variation in how different industries are covered by federal laws protecting individuals' personal information. For example, although federal laws place restrictions on reselling some personal information, these laws apply only to certain types of private sector entities, such as financial institutions. Consequently, information resellers are not covered by these laws, and there are few restrictions placed on these entities' ability to obtain, use, and resell SSNs for their businesses. Vulnerabilities also exist in federal law and agency oversight for different industries that share SSNs with their contractors. For example, while federal law and oversight of the sharing of personal information in the financial services industry are very extensive, federal law and oversight of the sharing of personal information in the tax preparation and telecommunications industries are somewhat lacking. Moreover, in our Internet resellers report, several resellers provided us with truncated SSNs showing the first five digits, though other information resellers and consumer reporting agencies truncate SSNs to show the last four digits. Therefore, because of the lack of SSN truncation standards, even truncated SSNs remain vulnerable to potential misuse by identity thieves and others. While we suggested that the Congress consider enacting standards for truncating SSNs or delegating authority to the Social Security Administration or some other governmental entity to do so, SSN truncation standards have yet to be addressed at the federal level.
For many years, federal agencies have been encouraged to use information technology to improve their communications with the public and to increase participation in the rulemaking process. One of the recommendations of the National Performance Review in September 1993 was to "use information technology and other techniques to increase opportunities for early, frequent, and interactive public participation during the rulemaking process ..." The Paperwork Reduction Act of 1995 states that the Director of OMB should promote the use of IT "to improve the productivity, efficiency, and effectiveness of Federal programs ..." Similarly, a December 17, 1999, presidential memorandum on "Electronic Government" noted that "as public awareness and Internet usage increase, the demand for online Government interaction and simplified, standardized ways to access Government information and services becomes increasingly important."

Between June 2000 and February 2001, as information technology began to play an increasingly important role in agencies' regulatory management and in facilitating public participation in federal rulemaking, we reported on a number of associated opportunities and challenges. We pointed out that the use of information technology could reduce regulatory burden, improve the transparency of regulatory processes, and, ultimately, facilitate the accomplishment of regulatory objectives. We reported on innovative uses of information technology that representatives of federal or state agencies and nongovernmental organizations believed could be more widely used by federal regulatory agencies, and we recommended actions that could facilitate innovation, avoid duplication of effort, and potentially result in a broader and more consistent approach across federal agencies.

The President, through the President's Management Agenda, has advanced greater use of information technology across a range of government functions, including rulemaking. In July 2001, the President identified the expansion of e-government as one of the five priorities of his management agenda. To support this, OMB has developed an implementation strategy and identified 25 e-government initiatives, one of which is e-Rulemaking. In May 2002, the Director of OMB issued a memorandum to the heads of executive departments and agencies advising them of "our intention to consolidate redundant IT systems relating to the President's on-line rulemaking initiative." Citing OMB's authority under the Clinger-Cohen Act of 1996, the Director said OMB had identified "several potentially redundant systems across the federal government that relate to the rulemaking process," and indicated that consolidation of those systems could save millions of dollars. The administration said e-Rulemaking would "democratize an often closed process."

In late 2002, the President signed into law the E-Government Act of 2002. The act requires agencies, to the extent practicable, to accept public comments on proposed rules "by electronic means" and to ensure that a publicly accessible federal Web site contains "electronic dockets" for their proposed rules. The act also established an Office of Electronic Government within OMB, headed by an Administrator who is appointed by the President and is required to work with the Administrator of OMB's Office of Information and Regulatory Affairs to establish the strategic direction of the e-government program and to oversee its implementation.
Initially, OMB named the Department of Transportation (DOT) as the lead agency, or managing partner, for the e-Rulemaking effort, entrusting DOT with the day-to-day management of the project as well as collaboration with other e-Rulemaking agencies. In a previous report, we noted that "DOT had the most developed electronic docket system of the agencies that we contacted, covering every rulemaking action in the department and including all public comments received regardless of medium." DOT served as managing partner until an independent consultant's assessment concluded in August 2002 that the system operated by EPA was the optimal platform to serve as the consolidated electronic docket platform for the entire federal government. As a result of this assessment, OMB transferred managing partner responsibilities from DOT to EPA in late 2002.

The e-Rulemaking initiative provides a single portal for businesses and citizens to access the federal rulemaking process and comment on proposed rules. It is composed of three modules. The first module was completed in January 2003 with the launch of the www.regulations.gov Web site. The second module, which is the focus of this report, will move beyond rule identification and a comment mechanism by establishing a governmentwide electronic docket management system into which all relevant regulatory supporting materials and public comments will be placed. The third and final module will create an electronic regulatory desktop to facilitate the rule development process.

E-Rulemaking officials and the e-Rulemaking Initiative Executive Committee considered three alternative designs for the federal e-Rulemaking system and ultimately chose to develop and implement a single, centralized design. This decision was based upon studies that examined the costs, deployment risks, and security aspects of the three designs. Throughout the management of the e-Rulemaking initiative, e-Rulemaking officials have estimated that the centralized design would save money; the latest estimate is that it would save approximately $94 million over 3 years. Officials said they used their best professional judgment and information about the costs to develop and operate paper and electronic rulemaking systems.

E-Rulemaking officials and the e-Rulemaking Initiative Executive Committee considered three designs for the system architecture of the governmentwide e-Rulemaking system: a centralized, a tiered, and a distributed design. The key characteristics of the governmentwide e-Rulemaking system architecture would:
- enable agencies to manage content and workflow processes using variable access controls and role definitions;
- provide a robust and scalable Web-based solution that supports the capture, conversion, and dissemination of high volumes of information;
- provide the public and stakeholders with the ability to perform electronic docket searching, viewing, and commenting across multiple agencies; and
- minimize the total cost of system ownership and management while delivering responsive service to agencies/subagencies, businesses, and citizens.
Anyone able to connect to the Internet with a standard industry browser would be able to access the e-Rulemaking system.

Centralized design. The centralized design uses a minimal number of governmentwide e-Rulemaking system components in the same location to provide consistent access and services for citizens and agency staff across all dockets.
Under this design, the features and functions of the governmentwide e-Rulemaking system are provided by a component-based application architecture, and all standard components are centrally located. These application components would reside on servers in a single location for delivery of services to the public and the agencies. Unlike the tiered and distributed designs, which are discussed in the next sections, the centralized design will not support existing agency docket systems, nor will it distribute components of the governmentwide e-Rulemaking system to agency facilities.

Tiered design. The tiered design utilizes a centralized governmentwide e-Rulemaking system to deliver all agency and citizen services, but it differs from the centralized design because the common hardware and software components installed in the governmentwide e-Rulemaking system are also installed at different agency sites to enhance system performance. For example, components like the database management or document management system may be placed on separate servers in the same or different locations to process information more efficiently and boost system performance. Responsibility for the data is dispersed across multiple entities or agency sites, and the data would be maintained locally at each site. Because the tiered design is based on the use of common hardware and software components, the governmentwide e-Rulemaking system is not linked to any existing agency-specific docket system with its unique hardware and software. As a result, no customized software interfaces are needed.

Distributed design. The distributed design integrates a centralized governmentwide e-Rulemaking system with existing agency-specific electronic docket systems while satisfying all governmentwide e-Rulemaking system requirements. This design links agencies that have existing docket systems to the governmentwide e-Rulemaking system using customized software (middleware) to allow interconnectivity between the agencies' systems and the governmentwide e-Rulemaking system. This design provides the public with a means of searching across agency dockets and establishes the governmentwide e-Rulemaking system as an access point for the public and agencies for searching, reviewing, and commenting on dockets that reside on the agencies' existing docket systems. In addition, agencies with docket systems would continue to perform their own workflow/business/docket life-cycle processing. (A simple sketch contrasting the three designs follows this background discussion.)

In 2002, DOT, which was then the managing partner for the e-Rulemaking initiative, contracted with a consulting firm to assess the capabilities of seven existing agency e-Rulemaking systems and prepare a business case describing alternative designs for the system and the recommended design. The firm was to advise DOT on the best technical approach for the initiative, along with a full analysis of alternatives that leverage the use of existing technology to meet the vision, goals, and objectives of the initiative. Based on its assessment, the consulting firm's August 2002 report recommended EPA's eDocket system as the optimal platform for a governmentwide centralized e-Rulemaking system. The business case discussed the two other alternative designs as well. DOT submitted this business case to OMB in September 2002. Based on the recommendation to use EPA's e-Rulemaking system as the platform for a centralized system, OMB transferred managing partner responsibilities for the e-Rulemaking initiative from DOT to EPA.
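The following minimal code sketch, written in Python purely for illustration, summarizes where docket data and search logic would reside under each of the three designs described above. All class names, fields, and functions are hypothetical; they are not part of the actual e-Rulemaking system or its business cases.

```python
# Illustrative-only sketch of the three candidate architectures.
# All names are hypothetical; this is not the actual e-Rulemaking design.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Docket:
    docket_id: str
    agency: str
    comments: List[str] = field(default_factory=list)


class CentralizedDesign:
    """All components and all docket data live in one central location."""
    def __init__(self) -> None:
        self.store: Dict[str, Docket] = {}  # single repository for every agency

    def search(self, term: str) -> List[Docket]:
        # One query against one store serves all agencies consistently.
        return [d for d in self.store.values() if term in d.docket_id]


class TieredDesign:
    """Common components are replicated at agency sites; data stays local,
    so a governmentwide search must visit each site's copy."""
    def __init__(self, sites: List[str]) -> None:
        self.sites: Dict[str, Dict[str, Docket]] = {s: {} for s in sites}

    def search(self, term: str) -> List[Docket]:
        hits: List[Docket] = []
        for store in self.sites.values():  # same software, dispersed data
            hits.extend(d for d in store.values() if term in d.docket_id)
        return hits


class DistributedDesign:
    """Existing, dissimilar agency systems are linked through custom
    middleware adapters; the central portal fans each query out."""
    def __init__(self, adapters: List[Callable[[str], List[Docket]]]) -> None:
        self.adapters = adapters  # one custom adapter per legacy system

    def search(self, term: str) -> List[Docket]:
        hits: List[Docket] = []
        for adapter in self.adapters:  # each adapter translates the query
            hits.extend(adapter(term))
        return hits
```

The sketch reflects the pattern the assessments reported: the centralized and tiered designs share a single common code base, while the distributed design's per-agency adapters represent the kind of customized middleware that drove its far higher estimated cost and deployment risk.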
After EPA became the managing partner in late 2002, it submitted three additional business cases to OMB—in December 2002, September 2003, and September 2004—and recommended the centralized design. During this time period, e-Rulemaking officials also hired a contractor to refine the three alternative designs, summarize the costs and risks for each alternative, and recommend one alternative for implementation. E-Rulemaking officials used the contractor’s January 2004 report recommending the centralized alternative when preparing the last of the three business cases that was submitted to OMB. E-Rulemaking officials and the e-Rulemaking Initiative Executive Committee based the decision to select the centralized design over the tiered and distributed designs on three assessments that generally found that the centralized design was the most cost-effective, had the lowest risk of deployment and support instability, was the most secure, and was the most likely to deliver the breadth and functionality sought by agencies and the public. E-Rulemaking officials hired a contractor to conduct two analyses based on cost and risk models, and the contractor obtained from a consulting firm a third assessment that was based on industry best practices and experiences in assessing similar architectures. The first assessment used a cost-analysis modeling tool used by the private sector and government agencies to analyze and estimate the costs associated with software application development. Using this model, the cost estimates to deliver and operate the three designs were $18.7 million for the centralized design, $21.1 million for the tiered design, and $87.2 million for the distributed design (assuming 10 agencies were integrated with the centralized governmentwide e-Rulemaking system). The model predicted it would take 1 year to deliver the centralized design and almost 3 years to deliver the distributed design. Also, according to the model, the complexity of the distributed design would present a high risk of instability to the overall operation and maintenance of the systems. The second assessment used a model that estimates the total cost of implementing and supporting a specific application within a commercial enterprise and assesses the expected risk of successful deployment and the benefit to the organization of adopting the new capability. Using this model, the costs to deliver and operate the three designs were $20.1 million for the centralized design, $22.8 million for the tiered design, and $94.9 million for the distributed design (assuming 10 agencies were integrated with the centralized governmentwide e-Rulemaking system). The model estimated that there was a 50 to 60 percent greater likelihood of unsuccessful deployment of the distributed design when compared to the other designs. The most significant risk factor accounting for this increased likelihood of failure is system complexity, resulting in major integration, infrastructure, and change management issues that tend to be difficult to resolve. The third assessment was done by a leading information technology firm with the intent of predicting the cost, risk, security, and supportability of the three designs based on industry best practices and the firm’s experience in assessing similar architectures. The firm recommended the centralized design because it provided for a consistent implementation across the government and was the lowest-cost and least risky design.
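As a rough cross-check of the two model-based assessments, the sketch below tabulates their reported cost estimates and computes the relative cost of the distributed design. The dollar figures come directly from the report; the code itself is only illustrative arithmetic.

```python
# Cost estimates (in millions of dollars) to deliver and operate each design,
# assuming 10 agencies integrated with the centralized system.
estimates = {
    "first assessment":  {"centralized": 18.7, "tiered": 21.1, "distributed": 87.2},
    "second assessment": {"centralized": 20.1, "tiered": 22.8, "distributed": 94.9},
}

for model, costs in estimates.items():
    ratio = costs["distributed"] / costs["centralized"]
    print(f"{model}: distributed is {ratio:.1f}x the cost of centralized")
# first assessment: distributed is 4.7x the cost of centralized
# second assessment: distributed is 4.7x the cost of centralized
```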
The firm rated the tiered design as “feasible but not recommended” for various reasons, including the following. First, the firm said it would be difficult to maintain consistent data quality in a tiered design because the responsibility for data would be spread across multiple entities. Second, consistent security and business continuity policies could not be implemented because each agency would most likely manage its data according to its own security policies and business continuity models. Third, data retrieval could be complex because data would be maintained locally at each agency. The firm did not recommend the distributed design for several reasons, including its highly complex implementation, high delivery risk, and higher cost for maintenance and support due to the large number of disparate systems involved in the design. E-Rulemaking officials estimated that the federal government would save approximately $94 million over 3 years by deploying the e-Rulemaking system—$56 million in savings from eliminating duplication of systems and $38 million in savings from annual maintenance fees. Officials said that they developed this estimate prior to completing the three assessments previously described and that at the time they developed this estimate, there was a lack of published information about how much it cost to develop or operate paper or electronic rulemaking systems. They primarily used their professional judgment, information about costs for developing and operating EPA’s paper and electronic rulemaking systems, and information provided by OMB about the number of rulemaking entities to develop the estimate. The estimate assumes, among other things, that all rulemaking entities would either develop an electronic rulemaking system or, if they already had one, would continue to operate it. E-Rulemaking officials extensively collaborated with rulemaking agencies and used several methods to solicit the participation of those agencies during the e-Rulemaking effort, notably the use of interagency working groups. Most officials at these other rulemaking agencies said they had adequate opportunity to collaborate on the initiative and described the process as effective. They provided us with examples that supported their opinions. E-Rulemaking officials have successfully used a variety of collaboration tools to encourage participation and arrive at consensus among the participating agencies, particularly the use of interagency working groups. Two core working groups facilitate discussion and decision making for the project: (1) the e-Rulemaking Advisory Board, composed primarily of technical and policy staff from the participating agencies; and (2) the e-Rulemaking Initiative Executive Committee, composed of upper-level agency managers such as Chief Information Officers and individuals at the Assistant Secretary level, which was added after the project had begun in order to obtain upper management support from participating agencies—especially those that already had an electronic system in place. Additionally, e-Rulemaking officials have convened specialized working groups to tackle unique areas of concern, such as system design, project funding, or legal issues. These specialized working groups are open forums designed to foster consensus on decisions made on the e-Rulemaking initiative.
Under e-Rulemaking officials’ guidance, specialized working groups are charged with developing specific proposals for the project and transmitting these proposals to the Advisory Board for discussion. The Advisory Board may consider revisions to the proposal before reaching a decision that represents a consensus of the participants. This decision is then sent to the Executive Committee, which makes a final decision on the proposals and recommends a course of action to OMB. Agency officials we spoke with noted that representatives of all agencies are free to participate on the Executive Committee, the Advisory Board, or any working group. The three levels of panels are supplemented by a variety of other collaborative tools and methods, including one-on-one meetings with agency officials, surveys, e-mail communication, teleconferencing, and an online library of documents related to the initiative. EPA also provided public notice of its work in the Federal Register and held public forums on the East and West coasts to obtain the views of businesses, citizens, and interest groups regarding the design of the e-Rulemaking system. The tenor of our discussions with officials of 14 of the 27 agencies serving on the Advisory Board was that they were satisfied with the level of collaboration. Participating agencies indicated that they had adequate opportunity to provide input and described the collaboration of e-Rulemaking officials as effective. Officials from a few agencies even said that in terms of the e-government initiatives, the e-Rulemaking initiative was one of the better collaborative efforts in which they have participated. Agencies often praised e-Rulemaking officials’ concern for the unique needs of their agency. For example, one agency communicated its concern that it was not clear how the e-Rulemaking system would accommodate interim rules. While most rules are not effective until after the rulemaking process is concluded, some interim rules are effective immediately, for example, due to emergency conditions or other concerns related to health and safety. After receiving this comment, e-Rulemaking officials recognized the need to be clear about how interim rules would be processed and how comments on such rules would be addressed. The system was revised in accordance with the agency’s suggestions. In another instance, one agency disagreed with the implementation schedule it was given for its transition to the e-Rulemaking system. The agency’s officials anticipated that the transition date would overlap with a critical stage in their annual rulemaking cycle and expressed concern that any errors in processing and adopting rules during that time would be particularly harmful. E-Rulemaking officials readily agreed to change the schedule to account for this timing problem. Even when an agency’s suggestion was not incorporated into the system design, agency officials acknowledged that e-Rulemaking officials treated their concerns fairly and completely and that they understood why the suggestion was rejected. Agencies frequently deferred to the needs of the group after expressing their individual preferences during the collaborative process. The opinions expressed by these officials are consistent with our prior work. We noted in a recent report that e-Rulemaking initiative officials have successfully used collaboration strategies to achieve consensus with partner agencies on funding contributions to the e-Rulemaking initiative based on agency-specific characteristics.
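The proposal path described above (specialized working group, then Advisory Board, then Executive Committee, then a recommendation to OMB) behaves like a simple review pipeline. The sketch below models it under our own assumptions; the stage descriptions are paraphrased from the report, and the function itself is not an official specification of the governance process.

```python
def route_proposal(proposal: str) -> list[str]:
    """Trace a proposal through the e-Rulemaking decision-making bodies.

    Illustrative only: each stage is reduced to a log entry. In practice the
    Advisory Board may revise a proposal before reaching consensus, and the
    Executive Committee makes the final decision and recommends action to OMB.
    """
    return [
        f"specialized working group develops proposal: {proposal}",
        "Advisory Board discusses, may revise, and reaches a consensus decision",
        "Executive Committee makes the final decision",
        "recommended course of action is sent to OMB",
    ]

# Example: trace one of the decisions the report mentions.
for step in route_proposal("funding contributions based on agency-specific characteristics"):
    print(step)
```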
While managing the development of the centralized e-Rulemaking system, e-Rulemaking officials have, for the most part, followed key practices for successfully managing a project. However, there are a few practices that officials did not incorporate into the management of the e-Rulemaking initiative, such as including system performance measures in written agreements with agencies. The first agencies began migrating to the centralized e-Rulemaking system in May 2005, and the public is scheduled to have access to the system in September 2005. While all agencies will eventually migrate to the centralized system, the schedule for doing so may change, due in part to funding issues. E-Rulemaking officials have also created a process for approving changes to the system based on concerns or issues identified by agencies or the public. E-Rulemaking officials, for the most part, followed key practices for successfully managing a project while developing the centralized e-Rulemaking system; however, there are a few practices that they did not follow. Table 1 summarizes the key practices they followed and those that they did not follow when developing the centralized e-Rulemaking system. This table does not indicate how well they followed each practice, but rather indicates if the steps they took to manage the e-Rulemaking initiative reflected the presence of the practice. E-Rulemaking officials followed all of the key practices except two. They did not completely document decisions related to the approach they used to identify alternative designs. Also, EPA, as managing partner, does not have written agreements with agencies participating in the e-Rulemaking initiative that address system performance measures. Although the written agreements do not include performance measures, e-Rulemaking officials do have performance goals that they are measuring. For example, one goal is to have the centralized e-Rulemaking system available to the public and agency rule writers and managers 99.99 percent of the time (see the downtime sketch below). Officials also said they are developing a postimplementation review plan. EPA, as managing partner, signed written agreements with 15 agencies during fiscal years 2003 or 2004 that indicated EPA would establish performance measures, but the agreements did not include any details about them. These agreements did include expected outcomes, roles and responsibilities, and resource commitments. These 15 agencies—which committed to providing financial assistance or in-kind resources to the initiative—included the 5 agencies, or parts of agencies, that are migrating to the centralized e-Rulemaking system during the first migration phase. During fiscal year 2005, EPA plans to sign similar agreements with 20 additional agencies that include these three items. The additional agencies are those to whom OMB has issued budget guidance for 2005 regarding providing funding to EPA for the e-Rulemaking initiative. As additional agencies become involved in funding the initiative, e-Rulemaking officials said they plan to sign similar agreements with them. In May 2005, the agencies included in phase I of the migration schedule—EPA, the Department of Housing and Urban Development, the Animal and Plant Health Inspection Service within the Department of Agriculture, a portion of the Department of Homeland Security, and the National Archives and Records Administration—began migrating to the centralized e-Rulemaking system.
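To put the 99.99 percent availability goal mentioned above in concrete terms, the sketch below converts it into allowable downtime. The conversion is standard arithmetic on the stated goal, not a figure from the report.

```python
availability_goal = 0.9999              # system available 99.99% of the time
minutes_per_year = 365.25 * 24 * 60     # average year, leap years included

allowed_downtime = (1 - availability_goal) * minutes_per_year
print(f"allowed downtime at 99.99%: about {allowed_downtime:.0f} minutes per year")
# allowed downtime at 99.99%: about 53 minutes per year
```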
Current plans are for 18 additional departments or agencies to begin using the system in fiscal year 2006, resources permitting. The remaining rulemaking departments or agencies are scheduled to begin using the system at different times during fiscal year 2007 and beyond. According to e-Rulemaking officials, the schedule may change depending on funding. They originally planned to migrate 10 agencies to the new system during phase I, but had to cut back due to funding shortfalls and late contributions, specifically in fiscal year 2004. When an agency is scheduled to migrate to the new system, it will go through a multistep process tailored to meet its needs. E-Rulemaking officials will sign an agreement that outlines the steps the agency will go through to migrate to the centralized system. Upon request, prior to initiating the implementation process, e-Rulemaking officials will meet with a department or agency to brief them on the initiative and demonstrate the system. The implementation steps, in chronological order, are: (1) kick-off meeting; (2) site survey (i.e., identify and analyze agency-specific technical and functional needs); (3) data preparation (i.e., develop agency-specific implementation); (4) agency configuration (i.e., develop technical and functional engineering requirements to support agency-unique implementation); (5) training (i.e., develop plans and conduct training); (6) usability testing (i.e., allow agency users to become familiar with the e-Rulemaking system); (7) data migration (i.e., migrate data to the system); (8) moving to production (i.e., the agency goes “live” using the system); and (9) postproduction support. Five of the nine steps are high-level checkpoints at which the agency and e-Rulemaking officials agree that conditions are acceptable to move forward. These steps are the kick-off meeting, data preparation, training, usability testing, and moving to production (a checklist sketch appears at the end of this section). The centralized e-Rulemaking system will not remain static as additional agencies migrate to it, according to e-Rulemaking officials. They said that changes to the system will be made when valid concerns or issues are identified. Such concerns or issues could be raised by agencies that have already migrated to the system, agencies that will be migrating to the system, or public users of the system. E-Rulemaking officials are already planning changes to the system before phase II of the migration schedule begins. These changes are based on issues identified during beta testing by the agencies and usability testing by the public. According to e-Rulemaking officials, they have created a Change Control Board that will review and determine which requests for changes to the system will be granted. This board will also prioritize the changes that are to be made and obtain an estimate of the cost to implement them from the contractor assisting in the design and implementation of the system. A couple of agencies said they had concerns about whether the centralized system would include all the capabilities of their current electronic systems. For example, one agency said that its system tracks hearings as well as regulatory dockets. E-Rulemaking officials told these agencies that any such capabilities will be incorporated into future versions of the centralized e-Rulemaking system, provided adequate funding is available.
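The checklist sketch below models the nine migration steps and five checkpoints described above. The step names come from the report; the data structure, the checkpoint handling, and the example sign-off function are our own illustration of a simple sequential process, not the program's actual tooling.

```python
# The nine migration steps, with the five high-level checkpoints flagged.
# A checkpoint requires the agency and e-Rulemaking officials to agree that
# conditions are acceptable to move forward.
MIGRATION_STEPS = [
    ("kick-off meeting", True),
    ("site survey", False),
    ("data preparation", True),
    ("agency configuration", False),
    ("training", True),
    ("usability testing", True),
    ("data migration", False),
    ("moving to production", True),
    ("postproduction support", False),
]

def run_migration(checkpoint_ok) -> None:
    """Walk the steps in order, pausing at each checkpoint for sign-off."""
    for number, (step, is_checkpoint) in enumerate(MIGRATION_STEPS, start=1):
        if is_checkpoint and not checkpoint_ok(step):
            print(f"halted at step {number}: {step} (sign-off withheld)")
            return
        print(f"step {number} complete: {step}")

# Example: approve every checkpoint.
run_migration(lambda step: True)
```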
Moreover, e-Rulemaking officials noted that because each agency signs an agreement with the e-Rulemaking Program Office before migrating to the centralized system, the agency can assure itself that the system has the capabilities it needs before signing the agreement and moving forward with migration. The process e-Rulemaking officials and the e-Rulemaking Initiative Executive Committee used to decide which of the three designs—centralized, tiered, or distributed—to develop and implement, as well as the basis for that decision, was reasonable and adequately supported. Using two cost and risk models and comparing the three designs to industry best practices was a sound approach for selecting which design should be implemented. In addition, because there was a lack of published information about the costs to develop or operate either a paper or electronic rulemaking system, e-Rulemaking officials used their professional judgment and own experiences to estimate how much the centralized e-Rulemaking system could save the federal government. E-Rulemaking officials established a governance structure to collaborate with other agencies to obtain input on developing and implementing the centralized e-Rulemaking system, and their collaboration efforts with other agencies have been extensive and well received. Most of the agencies we contacted were very satisfied with the collaboration efforts and thought that e-Rulemaking officials listened to their ideas and used them when developing and implementing the system. Officials from a few agencies we interviewed even said that in terms of the e-government initiatives, the e-Rulemaking initiative was one of the better collaborative efforts in which they have participated. While managing the development of the centralized e-Rulemaking system, e-Rulemaking officials followed most of the key practices for successfully managing a project. E-Rulemaking officials could, however, improve their management of the e-Rulemaking initiative by including system performance measures in written agreements with agencies. Having such agreements would provide criteria for determining if the e-Rulemaking initiative is operating in the most effective, efficient, and economic manner possible. E-Rulemaking officials established a governance structure that allowed them to successfully collaborate with other agencies about how to develop and implement the centralized e-Rulemaking system and used most of the key practices included in this report for managing an initiative. To learn from how EPA managed this initiative and to build on its success, we recommend that the Administrator of EPA, as managing partner of this initiative, take steps to ensure that the written agreements between EPA and the participating agencies include performance measures that address issues such as system performance, maintenance, and cost savings. These measures are necessary to provide criteria for evaluating the effectiveness of the e-Rulemaking initiative as well as for determining if the initiative is operating in the most efficient and economical manner. The Administrator of EPA was provided a draft of this report for his review and comment. The EPA Assistant Administrator and Chief Information Officer provided written comments on the draft in an August 17, 2005, letter, which is reprinted in appendix III. The Assistant Administrator agreed that the report accurately describes the e-Rulemaking Initiative.
She added that e-Rulemaking officials appreciated GAO’s recommendation to ensure the effectiveness of the initiative and that they look forward to continuing to work with GAO for the continued success of the project. E-Rulemaking officials said they agree with GAO’s recommendation and plan to implement it. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from the date of this letter. At that time we will send copies of this report to the Chair and Ranking Minority Member of the House Committee on Government Reform, the Administrator of EPA, and the Director of OMB. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-5837 or at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Thomas Beall, Boris Kachura, Steven Law, Joseph Santiago, Shellee Soliday, and Grant Turner. While the Office of Management and Budget (OMB) oversees the e-Rulemaking initiative, it named the Environmental Protection Agency (EPA) managing partner for the initiative. As managing partner, EPA heads the Program Management Office (PMO), which is staffed by officials from several rulemaking agencies participating in this initiative. The PMO staff manages the day-to-day activities of the e-Rulemaking initiative while the e-Rulemaking Initiative Executive Committee makes the final decisions about the initiative’s strategy, resources, and timetable. Hereafter, we refer to PMO officials as e-Rulemaking officials. Our first objective was to describe the approach EPA used to identify alternative designs for the e-Rulemaking system and how the decision to proceed with a single, centralized system design was made. To address this objective, we reviewed the Capital Asset Plan and Business Cases submitted by EPA and the Department of Transportation (DOT). We also reviewed a contractor’s analysis of the capabilities of current e-Rulemaking systems, a contractor’s summary of assessments of the three identified alternative designs including analyses of operational risks and security vulnerabilities, and documentation related to the basis for e-Rulemaking officials’ and the e-Rulemaking Initiative Executive Committee’s recommendation about which design alternative to implement. We also interviewed OMB, e-Rulemaking, and DOT officials involved with developing information about the design alternatives, the selection of the centralized system design, and the development of the estimated cost savings associated with the centralized design alternative. Our second objective was to describe how EPA collaborated with other rulemaking agencies to obtain input about the e-Rulemaking system and those agencies’ views regarding the collaboration. To address this objective, we interviewed e-Rulemaking officials and reviewed documentation related to collaboration efforts such as those describing the purpose and composition of various committees and work groups and written agreements between agencies and EPA regarding funding of the system.
To acquire agency views on the collaboration that has occurred, we randomly selected 12 of the 27 agencies that serve on the e-Rulemaking Initiative Advisory Board and interviewed officials from those agencies about their experiences with the collaboration efforts. Included in this set of selected agencies were (1) agencies that serve on the e-Rulemaking Executive Committee, (2) agencies that currently have an e-Rulemaking system, and (3) agencies that are heavily involved in rulemaking. We analyzed the agency officials’ responses to our questions on collaboration and identified those views that were most frequently expressed by officials from multiple agencies as a means of gauging the overall quality of the collaboration. In addition to the agencies included in the sample, we discussed collaboration with two additional agencies that serve on the Advisory Board. The first was DOT. Since we contacted DOT officials in relation to the first objective, we also discussed collaboration issues with them. We contacted the second agency, the National Archives and Records Administration (NARA), because many officials included in the original sample suggested that we talk to that agency since an official from NARA was heavily involved in meeting with all agencies to discuss the development of the e-Rulemaking system. We also discussed with PMO officials the methods used to obtain information from the public about the development of the e-Rulemaking system and reviewed related documents. Our third objective was to (1) determine whether EPA used key practices for successfully managing a project when managing the e-Rulemaking initiative and (2) describe EPA’s future plans for developing and implementing the centralized e-Rulemaking system. To determine whether e-Rulemaking officials followed key practices, we identified key practices for successfully managing a project by reviewing previous GAO work related to the e-government initiatives. The key practices included in prior GAO reports were based in part on studies done by federal, state, and local agencies, international agencies, and the private sector and on guidance provided by the federal government; state, local, and international governments; private research groups; national associations; and educational institutions. GAO information technology staff agreed with the list of key practices we identified. After developing the list of key practices, we compared the list to the information we had gathered about e-Rulemaking officials’ management of the e-Rulemaking initiative. When making the comparison, we did not assess how well each key practice was followed, but rather we determined if the steps they took to manage the e-Rulemaking initiative reflected the presence of the practice. We did not attempt to assess the quality or extent of each practice’s implementation. In order to describe future plans for the initiative, we reviewed documents that addressed future plans, such as the business cases and implementation schedules, and interviewed e-Rulemaking officials. We did not assess the quality of these plans. Although we report cost and related data from contractor assessments of the three governmentwide e-Rulemaking system design alternatives, we did not examine the reliability of that data since our first objective was to describe how e-Rulemaking officials selected one of the three alternatives rather than to determine if the appropriate alternative was selected.
We performed our work from December 2003 through June 2005 in accordance with generally accepted government auditing standards. Federal Rulemaking: Agencies’ Use of Information Technology to Facilitate Public Participation. GAO/GGD-00-135R. Washington, D.C.: June 30, 2000. Electronic Government: Government Paperwork Elimination Act Presents Challenges for Agencies. GAO/AIMD-00-282. Washington, D.C.: September 15, 2000. Electronic Government: Better Information Needed on Agencies’ Implementation of the Government Paperwork Elimination Act. GAO-01-1100. Washington, D.C.: September 28, 2001. Regulatory Management: Communication About Technology-Based Innovations Can Be Improved. GAO-01-232. Washington, D.C.: February 12, 2001. Information Technology: OMB Leadership Critical to Making Needed Enterprise Architecture and E-government Progress. GAO-02-389T. Washington, D.C.: March 21, 2002. Electronic Government: Selection and Implementation of the Office of Management and Budget’s 24 Initiatives. GAO-03-229. Washington, D.C.: November 22, 2002. Electronic Government: Success of the Office of Management and Budget’s 25 Initiatives Depends on Effective Management and Oversight. GAO-03-495T. Washington, D.C.: March 13, 2003. Electronic Rulemaking: Efforts to Facilitate Public Participation Can Be Improved. GAO-03-901. Washington, D.C.: September 17, 2003. Electronic Government: Potential Exists for Enhancing Collaboration on Four Initiatives. GAO-04-6. Washington, D.C.: October 10, 2003. Electronic Government: Initiatives Sponsored by the Office of Management and Budget Have Made Mixed Progress. GAO-04-561T. Washington, D.C.: March 24, 2004. Electronic Government: Federal Agencies Have Made Progress Implementing the E-Government Act of 2002. GAO-05-12. Washington, D.C.: December 10, 2004. Electronic Government: Funding of the Office of Management and Budget’s Initiatives. GAO-05-420. Washington, D.C.: April 25, 2005. The E-Government Act of 2002 requires regulatory agencies, to the extent practicable, to ensure there is a Web site the public can use to comment on the numerous proposed regulations that affect them. To accomplish this, the Office of Management and Budget named the Environmental Protection Agency (EPA) as the managing partner for developing a governmentwide e-Rulemaking system that the public can use for these purposes. Issues GAO was asked to address include: (1) EPA's basis for selecting a centralized system, (2) how EPA collaborated with other agencies and agency views of that collaboration, and (3) whether EPA used key management practices when developing the system. E-Rulemaking officials and the e-Rulemaking Initiative Executive Committee considered three alternative designs and chose to implement a centralized e-Rulemaking system based on cost savings, risks, and security. Officials relied on an analysis of the three alternatives using two cost and risk assessment models and a comparison of the alternatives to industry best practices. Prior to completing this analysis, officials estimated the centralized approach would save about $94 million over 3 years. They said when they developed this estimate, there was a lack of published information about costs related to paper or electronic rulemaking systems. They used their professional judgment and information about costs for developing and operating EPA's paper and electronic systems, among other things, to develop the estimate.
E-Rulemaking officials extensively collaborated with rulemaking agencies and most officials at the agencies we contacted thought the collaboration was effective. E-Rulemaking officials created a governance structure that included an executive committee, advisory board, and individual work groups that discussed how to develop the e-Rulemaking system. We contacted 14 of the 27 agencies serving on the advisory board and most felt their suggestions affected the system development process. Agency officials offered several examples to support their views, such as how their recommendations for changes to the system's design were incorporated. While managing the development of the centralized system, e-Rulemaking officials followed all but a few of the key practices for successfully managing an initiative. For example, officials did not have written agreements with participating agencies that included system performance measures. The first agencies began migrating to the centralized system in May 2005 with the public scheduled to have access in September 2005. Eventually, all rulemaking agencies will migrate to the centralized system; however, the schedule is tentative due in part to funding issues. As agencies migrate, e-Rulemaking officials are planning changes to the system including adding capabilities that exist in electronic systems operated by some agencies.
Beginning in the 1940s, the Soviet Union undertook a massive program to produce nuclear weapons. To support this program, a network of facilities was built, with most of the major ones located in Russia. Ten closed, or “secret,” cities were built to house workers at the major sites. In the quest to produce nuclear weapons, the health and safety of workers—as well as the environmental impact of production—were not adequately considered. As the threat of nuclear confrontation has receded, the long-term consequences of the Soviet Union’s nuclear program are being examined more closely by international environmental and health experts. Since the breakup of the Soviet Union, information about many of the facilities, including levels of safety and environmental contamination, is becoming publicly available. At least 221 nuclear facilities—other than civil nuclear power reactors—operate in the former Soviet Union. (App. I lists the types of major facilities we identified and their locations.) These facilities cover a range of activities, such as (1) mining, milling, and processing uranium ore; (2) producing enriched uranium; (3) producing and processing nuclear materials and nuclear fuel; (4) assembling nuclear weapons; and (5) disposing of and storing nuclear waste. The largest number of operating nuclear facilities is in Russia. Of the 221 facilities identified, 99 (or about 45 percent) are in Russia, including all of the Soviet Union’s facilities to produce or reprocess plutonium. In addition, Russia maintains all of the facilities of the former Soviet Union that were used to design or assemble nuclear weapons. Russia also operates 31 of the 48 research, training, and experimental reactors. (See app. II for a list of research reactors in the former Soviet Union.) Most of the other countries of the former Soviet Union have nuclear facilities. For example, Kazakhstan operates a significant number of facilities, including five research reactors, one fuel fabrication plant, and at least 22 mining sites. It also contains what was a major nuclear testing area, Semipalatinsk, which closed in 1991. Ukraine has a large concentration of nuclear facilities, including research reactors and waste storage and disposal facilities. Uranium mining, milling, and ore processing are concentrated around the central Asian republics of Kazakhstan, Kyrgystan, Uzbekistan, and Tajikistan. These four republics and Ukraine have about 87 percent of the former Soviet Union’s 78 mining, milling, and ore-processing sites. In addition to the 221 operating nuclear facilities, Russia also has a fleet of nuclear-powered vessels, including 228 submarines, 7 icebreakers, and 1 transport ship. According to DOD, between 10,000 and 20,000 organizations in the former Soviet Union use different types of radiation sources in medicine, industry, and research. Figure 1 shows the distribution of the nuclear facilities discussed in this report.
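As a quick check of the facility shares cited above, the sketch below recomputes them from the report's counts. The code is illustrative arithmetic only; the 68-site figure is our own derivation from the reported 87 percent, not a number in the report.

```python
total_facilities = 221
in_russia = 99
print(f"share of facilities in Russia: {in_russia / total_facilities:.1%}")
# share of facilities in Russia: 44.8%  (the report rounds to "about 45 percent")

mining_sites = 78
# The four central Asian republics and Ukraine hold about 87 percent of these:
print(f"about {round(0.87 * mining_sites)} of {mining_sites} "
      "mining, milling, and ore-processing sites")
# about 68 of 78 mining, milling, and ore-processing sites
```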
During our discussions with these experts, the following five factors emerged as the main contributors to unsafe conditions: (1) lack of technology as well as aging facilities and equipment, (2) the lack of awareness and commitment to the importance of safety, (3) the long-standing emphasis on production over safety, (4) the absence of independent and effective nuclear regulatory bodies, and (5) the lack of funds to improve safety. Several officials from DOE’s national laboratories and nuclear weapons facilities noted similarities between aging U.S. and former Soviet Union plutonium production and reprocessing facilities. In 1988, we reported that aging and deteriorating U.S. facilities resulted in safety and/or operational problems. DOE officials noted that while all of the U.S. plutonium production and reprocessing facilities have been closed, some of the former Soviet Union’s aging facilities continue to operate. DOE, IAEA, and European Union officials—as well as Russian officials—expressed concern about the safety of plutonium production reactors and associated reprocessing facilities at Krasnoyarsk, Tomsk, and Chelyabinsk. Two operating production reactors are located at Tomsk, and one is at Krasnoyarsk. Prior to 1987, 13 plutonium production reactors operated at these three sites. Ten of the reactors have been shut down. In 1994, Russia announced that it was no longer fully processing weapons grade plutonium at these sites and the plutonium was being placed in storage. The three remaining reactors continue to operate, however, and supply heat and electricity to nearby cities. Although Chelyabinsk’s production reactors were shut down several years ago, the site remains a major reprocessing center for spent fuel from civil nuclear power reactors and nuclear-powered submarines. While Russia plans to significantly expand its reprocessing capabilities at Krasnoyarsk, the project has stalled because of a lack of funding. Russia’s three operating plutonium production reactors are over 30 years old and share design characteristics with Chernobyl-style reactors, including the lack of a containment structure. However, the Krasnoyarsk reactor is located underground thereby reducing the potential release of radioactive material to the environment. Russia has denied DOE officials permission to visit the operating reactors at Tomsk and Krasnoyarsk because of their military sensitivity. Although detailed safety analyses are not available to DOE officials, they believe the reactors have safety problems because of their design and age. According to a 1994 study conducted by Pacific Northwest Laboratory (PNL), the reactors were designed and operated without the benefit of safety improvements made at other nuclear facilities. An official from Russia’s Gosatomnadzor (GAN), the agency responsible for safety at nuclear fuel cycle facilities, including plutonium production reactors, told us that the reactors need extensive upgrades to continue long-term operations and are “unreliable.” Furthermore, he noted that a small incident at one of these reactors could have “disastrous consequences.” In June 1994, Russia agreed with the United States to shut down the three remaining production reactors not later than the year 2000. Because the reactors will not be closed until an alternative source of energy is available, the United States has agreed to help Russia evaluate various alternatives. DOE, IAEA, and European Union officials told us that Russia’s reprocessing facilities present safety concerns. 
Reprocessing involves the use of chemical processes to separate uranium and plutonium from spent nuclear fuel. Under certain conditions, the chemical solutions can cause an explosion. DOE officials obtained first-hand information about the conditions at Russia’s reprocessing facilities after an accident at the Tomsk plant, which occurred in April 1993. In June 1993, DOE officials visited Tomsk to investigate the accident. Although they were not permitted to view the chemical tank that had exploded, they did see other parts of the facility. Several operational errors, such as improper mixing of chemicals in the reprocessing tank, and possible design flaws, such as inadequate tank ventilation, were identified as contributors to the accident. According to DOE officials, inadequate safety awareness at nuclear facilities in the former Soviet Union affects operational safety levels and increases the risk of accidents. DOE officials who visited Tomsk and Krasnoyarsk within the past 2 years in conjunction with a U.S.-Russian exchange program on reprocessing observed that the Russian safety practices were generally not comparable to U.S. practices. Despite their recent visits to Russian facilities, DOE officials said that they needed increased access to them—as well as more opportunities to discuss safety issues with their counterparts—to obtain a better understanding of the overall safety environment. A PNL official noted that access to and information about Russian facilities are improving. For example, he said that a U.S. team planned to visit the operating reactors at Tomsk and Krasnoyarsk in September 1995. According to an official from the Russian Ministry of Atomic Energy (MINATOM), Russia’s reprocessing facilities have many safety problems. MINATOM is responsible for most nuclear-related activities in Russia, including the weapons production complex and electricity generated by nuclear power. This official noted that since the breakup of the Soviet Union, the discipline of operators at these facilities has significantly deteriorated. He also said that the Soviet-era emphasis on meeting production goals rather than maintaining safety had hampered efforts to improve safety; he noted that safety was better at other nuclear facilities, such as research institutes and design laboratories. An official from Russia’s nuclear regulatory body told us that although safety is becoming more important at Russian facilities, it is difficult to undo problems created many years ago. According to NRC, although GAN is Russia’s nuclear regulatory agency, it does not have the legal authority—backed by national legislation—to exercise strong and independent oversight; nor has it been adequately funded to carry out its mission. According to information furnished by DOE, although a 1992 Russian presidential decree gave GAN the overall responsibility for inspecting and licensing activities that involve handling radioactive material, its inspectors are not empowered to enforce compliance. The head of GAN’s nuclear fuel cycle enterprises, which are responsible for the safety of production reactors and reprocessing plants, told us that his agency’s regulatory authority is limited. He noted that although some safety changes were made, many recommendations GAN made after the Tomsk accident have been ignored. A 1994 Russian report noted that GAN had a skeletal staff supervising safety—only 22 percent of the authorized slots were filled—at nuclear weapons facilities.
Furthermore, this report said that GAN was unable to carry out its responsibilities because the Russian Ministry of Defense had created obstacles to prevent inspections at nuclear defense facilities. DOE officials who have visited Russian nuclear facilities told us that accidents at nuclear facilities in the former Soviet Union—other than civil nuclear power reactors—would not be of the magnitude of the Chernobyl accident. Most of the accidents that have been reported at these facilities did not have widespread radiological consequences. For example, while the 1993 accident at the Tomsk reprocessing facility caused substantial damage to the facility, it contaminated a largely unpopulated area of about 123 square kilometers. The accident released a relatively small amount of contamination—about 40 curies—compared to approximately 50 million curies released after the Chernobyl accident. The Tomsk accident could have had more serious local consequences if the wind had carried the contamination to two large nearby cities. According to available information, most accidents and incidents—at facilities other than civil nuclear power reactors—have occurred at reprocessing plants in Russia. More than one-half of these accidents occurred from the 1950s through the 1970s. (See app. IV for more details about accidents at facilities in the former Soviet Union.) The environmental contamination caused by past and current practices at nuclear facilities in the former Soviet Union, especially Russia, is a more immediate concern than potential accidents. These facilities have generated massive amounts of nuclear waste and contamination that have created environmental problems. The possible migration of this contamination may also pose some risks to neighboring countries. For example, within the past few years there has been scientific and congressional concern that Alaska could be affected by this contamination. The majority of nuclear waste contamination is concentrated in Russia. Three plutonium production and reprocessing sites have been identified as the major sources of nuclear waste contamination from years of improper disposal practices. According to a June 1995 analysis prepared by a PNL scientist, the current level of discharge of radioactive material to the environment at these three sites is approximately 600 times greater than the remaining contamination from various other nuclear sources in Russia combined. This analysis also notes that the current radioactive inventory released from the nuclear weapons complex in Russia is approximately 1.7 billion curies, compared to about 2.6 million curies released by the U.S. nuclear weapons complex. Soviet-era nuclear waste practices have left a lasting imprint on Russia’s environment. For example, starting in the late 1940s, radioactive waste from the Chelyabinsk facilities was released directly into the Techa River and nearby lakes, buried at the site, and stored in tanks. According to DOE, although the direct discharge of radioactive waste into rivers and lakes was curtailed many years ago, the cumulative effect has left some areas uninhabitable. As the contamination migrates, it threatens the groundwater supplies and waterways that flow into the Arctic Ocean. As a result of releases from Chelyabinsk, about 18,000 people were relocated and more than 440,000 people received an elevated dose of radiation. 
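The curie figures cited in this section span many orders of magnitude, and their scale differences are easier to see side by side. The sketch below recomputes the ratios from the reported numbers; the ratios themselves are our own arithmetic, not figures from the report.

```python
# Reported release figures, in curies.
tomsk_accident = 40              # 1993 Tomsk reprocessing accident
chernobyl = 50_000_000           # 1986 Chernobyl accident
russia_weapons_complex = 1.7e9   # cumulative inventory released, Russian complex
us_weapons_complex = 2.6e6       # cumulative inventory released, U.S. complex

print(f"Chernobyl released roughly {chernobyl / tomsk_accident:,.0f} times "
      "as much as the Tomsk accident")
print(f"Russian complex releases are roughly "
      f"{russia_weapons_complex / us_weapons_complex:,.0f} times the U.S. complex")
# Chernobyl released roughly 1,250,000 times as much as the Tomsk accident
# Russian complex releases are roughly 654 times the U.S. complex
```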
Beginning in the 1960s, the Soviet Union began to inject liquid radioactive waste into deep underground wells, a practice that has been used extensively at both Tomsk and Krasnoyarsk. Radioactive waste from other facilities and activities throughout the former Soviet Union has caused contamination problems. For example, Kazakhstan’s Semipalatinsk and Russia’s Novaya Zemlya test sites for nuclear weapons were used by the Soviets for approximately 40 years. Estonia has radioactivity problems resulting from Soviet nuclear submarine training reactors that operated at Paldiski. Uranium tailings—radioactive particles and other hazardous materials—resulting from mining, milling, and ore processing have caused contamination in several republics of the former Soviet Union. Environmental concerns resulting from Russia’s nuclear fleets have received increased international attention in recent years. The primary source of concern is radioactive contamination from Russia’s nuclear submarines and nuclear-powered civilian icebreakers. Most of the concerns stem from four main sources: (1) the dumping of damaged submarine and icebreaker reactors into the Kara Sea, (2) submarine accidents, (3) the dumping of liquid and solid radioactive waste from the Russian fleets into the Kara and Barents seas and the Sea of Japan, and (4) the inadequate treatment of and storage capacity for fuel from nuclear-powered vessels. In 1993, the Russian government released a report describing over three decades of Soviet-era dumping of radioactive material in the ocean. The report noted that during this time, the former Soviet Union dumped 2 reactor compartments without spent nuclear fuel into the Sea of Japan and 16 reactors into the Kara Sea, 6 of which contained spent or damaged fuel. The report also cited submarine accidents as a source of radioactive contamination. In August 1985, a submarine accident at a shipyard near Vladivostok released significant amounts of radioactive material. In 1989, the submarine Komsomolets sank approximately 300 miles from Norway after a fire disabled the vessel. Although the submarine had nuclear fuel in its reactor and nuclear warheads on board when it sank, Russian and international expeditions have not found evidence of substantial contamination around the sunken vessel. Because Russia does not have adequate treatment and storage facilities for radioactive waste, it has not signed a 1993 amendment to Annex I, section 6, of the London Convention. This amendment prohibits the dumping of all radioactive waste or other radioactive matter, including low-level liquid waste, into the seas. In September 1994, Russia announced that it intended to continue to voluntarily comply with the ban on low-level liquid waste dumping. However, according to several U.S., international, and Russian reports, Russia has a severe shortage of adequate waste storage and disposal facilities for liquid waste as well as for spent fuel assemblies and decommissioned nuclear-powered submarines. An EPA official who recently visited Russia told us that Russian naval officials believe the decommissioned submarines pose an increasingly significant safety hazard. The international community, including the United States, has conducted several studies to assess the impact of Russia’s nuclear waste disposal practices on neighboring waterways, including the Arctic seas. Although these studies have not indicated significant contamination around the dump sites, they have not ruled out future problems.
In a January 1995 report, the Office of Naval Research stated that nuclear waste in the Arctic and North Pacific regions poses no immediate threat to Alaskan citizens or Alaska’s resources. According to an EPA official, there is reason to believe that the high-level radioactive material associated with the dumped reactors has yet to be released. Because this radioactive waste may start to enter the marine environment within a few years, the effects of this future contamination are uncertain. Some officials, including the Nuclear Safety Attache to the U.S. Mission (in Vienna, Austria) and IAEA’s Deputy Director, Division of Nuclear Safety, have expressed concerns to us about the inadequate control of radiation sources used in medicine, agriculture, research, and industry throughout the former Soviet Union. In May 1993, similar concerns were noted by several representatives from the former Soviet Union who were attending an IAEA forum on strengthening radiation protection and nuclear safety. The small size, portability, and value of these sources make them susceptible to misuse, improper disposal, or theft. Countries of the former Soviet Union have not established adequate systems to register, control, monitor, or account for radiation sources. These sources had been loosely controlled under the Soviet Union, but with its dissolution, the loss of centralized authority has left the new republics without adequate legal and regulatory structures. Representatives of some former Soviet Union republics have voiced concerns about the need to bring radiation sources under control, and some have admitted they do not know how many are still in use within their countries. Without an adequate control system, radiation sources may be lost, abandoned, stolen, or improperly disposed of, thereby creating the potential for human radiation exposure and localized environmental contamination. Numerous incidents involving the exposure of persons and contamination of areas have occurred over the past several years. For example, in 1994 a stolen source of radiation caused the death of a man and serious injury to his son in Estonia. In addition, the lack of control creates the potential for illicit trafficking of radiation sources to other countries. Several U.S. and international efforts focus on radioactive waste, radiation protection, and other related activities in the former Soviet Union. Collectively, these efforts are smaller in number and resources than programs aimed at improving the safety of Soviet-designed civil nuclear power reactors. Several U.S. and international officials told us that these reactors pose the most serious safety risk and require immediate attention. About a dozen countries and international organizations are providing assistance for projects related to, among other things, radiation protection and radioactive waste management in countries of the former Soviet Union. Among the countries providing assistance are Norway, Sweden, and Japan, which are all in close proximity to the former Soviet Union. These countries are concerned about the migration of contamination from nuclear facilities and other sources of radioactivity. According to an official from Norway’s Ministry of Foreign Affairs, Norway plans to spend about $20 million in 1995 on radiation protection and waste management projects. A Swedish official has estimated that Sweden has already spent about $10 million for similar projects.
Japan plans to assist in underwriting the establishment of a joint venture between a Russian firm and a Japanese firm to construct and operate a storage and processing facility for liquid radioactive waste from the Russian Pacific Fleet. IAEA has initiated a program broadly aimed at strengthening radiation protection in the former Soviet Union. (See app. V for additional information about international assistance efforts.) As of August 1995, the United States had committed about $55 million to support various programs that primarily focus on the environmental and health effects of the long-term operation of the former Soviet Union’s nuclear weapons production complex, including activities associated with the production and processing of plutonium. The objective is to channel a modest amount of funds primarily to study issues of concern, such as the effects of radioactive waste contamination, because of their potential impact on Alaska. The United States is not providing direct assistance to help remediate the nuclear waste contamination in the former Soviet Union. DOE is not authorized to provide such assistance, and both DOE and State Department officials said that such aid could be very costly because of the magnitude of the contamination problems. DOE, which is responsible for managing the cleanup of the U.S. nuclear weapons complex, faces a major challenge to clean up the radioactive waste generated by more than four decades of nuclear weapons production. As a result, DOE is interested in acquiring innovative nuclear waste cleanup technologies from foreign countries through technology exchanges and other cooperative programs. DOE believes that its environmental programs with countries of the former Soviet Union should provide some tangible benefits to accelerate the cleanup of the U.S. nuclear weapons complex. For example, DOE hopes to identify new cleanup technologies that could improve remediation at U.S. facilities through a $2 million technical cooperation program with Estonia. Additionally, DOE is contracting with various Russian and Ukrainian research institutes to identify cleanup technologies. Figure 2 summarizes the planned distribution of U.S. funding as of August 1995: Department of Defense, $30,077,000; Trade and Development Agency, $1,840,000 (3.3 percent); Nuclear Regulatory Commission, $1,355,000 (2.5 percent); Department of State, $330,000 (0.6 percent); and Environmental Protection Agency, $260,000 (0.5 percent). (Note 1: Assistance from the Department of State includes $300,000 for the IAEA’s program of radiation protection in the former Soviet Union. Note 2: Percentages are based on an amount equal to $55 million.) As of March 31, 1995, about half of the $55 million had been disbursed by DOD, DOE, NRC, and the State Department. Of that amount, about $10 million has been spent for studying radioactive waste contamination, including $9 million to study Russian nuclear contamination of the Arctic region. (App. VI lists the expenditures by agency.) Specifically, U.S.
programs focus on studying the disposal of nuclear waste by the former Soviet Union in the Arctic region (DOD/Office of Naval Research); assessing the radioactive waste contamination at a naval nuclear training facility in Estonia (DOE); developing technology on a cooperative basis with Russia to clean up radioactive waste (DOE); studying the health consequences of radiation contamination at Chelyabinsk and other locations in the former Soviet Union (DOE and DOD); upgrading and expanding a Russian facility that processes low-level liquid radioactive waste to prevent its continued dumping in the Arctic seas (EPA and Department of State); helping Russian and Ukrainian regulatory authorities establish regulatory control over radioactive materials, including the fuel cycle, the industrial and the medical uses of radioisotopes, and the disposal of radioactive materials (NRC); and studying options to replace power and steam lost as a result of the shutdown of the plutonium production reactors at Tomsk and Krasnoyarsk (TDA). (See app. VII for additional details about the status of these U.S. programs.) Information about the conditions at nuclear weapons facilities in the former Soviet Union is still emerging. With the exception of the plutonium production plants in operation, experts do not believe the other facilities present as broad a safety risk as Soviet-designed civil nuclear power reactors. The most immediate problem posed by these facilities is the extensive radioactive pollution that is the by-product of almost 50 years of nuclear weapons production. Recognizing that the costs associated with remediation are potentially enormous, the United States is committing modest resources for various environmental and health-related programs in some countries of the former Soviet Union. Sharing common problems associated with the cleanup of their respective nuclear weapons complexes, the United States and the countries of the former Soviet Union can benefit from mutual cooperation on both safety and environmental issues. The U.S. government has recognized the potential benefits of this cooperation and is undertaking some efforts with various Russian institutes to identify new cleanup technologies for potential use in the United States. Ultimately, the countries of the former Soviet Union are responsible for the safety of their nuclear facilities. Without independent and effective regulatory oversight, sustaining any safety improvements will be very difficult. For example, the strengthening of Gosatomnadzor as the regulatory body responsible for inspecting these facilities in Russia may be one of the most effective ways to improve safety at weapons complex facilities that do not meet safety requirements. The absence of nuclear laws in Russia, however, limits its effectiveness in carrying out its regulatory duties. We provided copies of a draft of this report to the Departments of Defense, Energy, and State; EPA; and NRC for their review and comment. DOE and State had no comments. We met with DOD officials, including the Senior Nuclear Weapon Safety Specialist, Office of the Assistant to the Secretary of Defense, Atomic Energy. We also met with EPA officials, including the Acting Science Adviser to the Assistant Administrator, Office of International Activities. Both DOD and EPA generally agreed with the report’s findings and provided clarifying information that we have incorporated in the text, as appropriate. 
NRC, while generally agreeing with our report, noted that we should have included the issue of safeguarding nuclear material in our discussion about nuclear safety and also indicated that Russia's nuclear regulatory authority may have been diminished. Regarding the first point, we recognize that safeguarding nuclear material is an important issue, but our report focused primarily on the operational safety of nuclear facilities in the former Soviet Union. A forthcoming GAO report will address U.S. assistance to improve methods of safeguarding nuclear material at facilities in the former Soviet Union. Regarding the second point, in September 1995 the Acting Deputy Chairman of GAN, Russia's nuclear regulatory body, informed us that some of its functions were limited by a recent presidential decree. He noted, however, that GAN is responsible for inspecting plutonium production reactors and reprocessing facilities. (See app. IX for NRC's comments and our response to them.)

We also discussed information presented in the draft of this report with TDA's Country Manager, New Independent States, who provided some clarifying information that we have incorporated, where appropriate. We also provided copies of the draft report to the European Union and the IAEA. The European Union noted that the most urgent issue is to establish appropriate local organizations in the former Soviet Union to develop a complete inventory of all radiation sources.

To address our objectives, we interviewed officials and reviewed documentation from the Department of State, DOD, DOE and several of its national laboratories, NRC, and EPA. We also met with Russian officials who are knowledgeable about nuclear facilities in their country, as well as officials from international organizations, including the IAEA. Collectively, these experts provided their insights concerning the safety of these facilities and the environmental impact of their operation. Appendix VIII explains our scope and methodology. We performed our work between September 1994 and August 1995 in accordance with generally accepted government auditing standards.

Copies of this report are being sent to the Secretaries of State, Defense, and Energy; the Chairman of NRC; the Administrator of EPA; the Director of the Office of Management and Budget; the Director of the Trade and Development Agency; and interested congressional committees. We will also make copies available to others on request. Please contact me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix X.

[Appendix table of operating nuclear facilities by country; only the legend and notes survive.]
Legend: ARM = Armenia, AZR = Azerbaijan, BEL = Belarus, EST = Estonia, GEO = Georgia, KAZ = Kazakhstan, KYR = Kyrgyzstan, LAT = Latvia, LIT = Lithuania, MLD = Moldova, RUS = Russia, TJK = Tajikistan, UKR = Ukraine, UZB = Uzbekistan
Note 1: This table may not list all operating nuclear facilities and does not include nuclear-powered submarines, icebreakers, and support ships in the Russian military and civilian fleets. It also does not include the nuclear test sites at Novaya Zemlya (Russia) and Semipalatinsk (Kazakhstan) because they were closed down in October 1990 and August 1991, respectively.
Note 2: Empty cells in the table indicate that no known facilities are located at these locations.

According to Russian nuclear experts, there are 41 research reactors in the former Soviet Union, 31 of which are in Russia. Of the 41 research reactors, 5 have suspended operation, 1 is under reconstruction, and 1 is under construction.
Table II.1 shows the name, location, and operating information for these research reactors. [Table II.1: the data columns (power, fuel enrichment as a percent of uranium-235, and kilograms of uranium-235 in fuel) did not survive; the recoverable reactor entries are:]
WWR-K, tank type (Institute of Nuclear Physics, Alma Ata)
IGR, graphite impulse type (Semipalatinsk Test Site)
IVG-1M, water-cooled impulse type (Semipalatinsk Test Site)
RA, experimental gas-cooled (Semipalatinsk Test Site)
IRT-M, pond type (Institute of Nuclear Physics, Riga)
Hydra (IIN), solution impulse type (Kurchatov Institute)
F-1, uranium-graphite type without forced cooling (Kurchatov Institute)
GAMMA, vessel type (Kurchatov Institute)
TWR, heavy water vessel type (Institute for Theoretical and Experimental Physics, Moscow)
IRT, pond type (Engineering and Physics Institute, Moscow)
WWR-C, tank type (Branch of Scientific and Research Physics-Chemistry Institute, Obninsk)
AM, uranium-graphite type (Institute of Physics and Power Engineering, Obninsk)
BR-10, fast breeder sodium-cooled type (Institute of Physics and Power Engineering, Obninsk)
BFS-1, fast reactor without forced cooling (Institute of Physics and Power Engineering, Obninsk)
BFS-2, fast reactor without forced cooling (Institute of Physics and Power Engineering, Obninsk)
IBR-30, fast reactor of pulse type (Institute of Nuclear Research, Dubna)
IBR-2, fast reactor of pulse type (Institute of Nuclear Research, Dubna)
WWR-M, pond type (St. Petersburg's Institute of Nuclear Physics)
PIK, tank-vessel type (St. Petersburg's Institute of Nuclear Physics)
IVV-2M, pond type (Ural Nuclear Center, Ekaterinburg, Branch of Research and Construction Institute for Energy Technique, Moscow)
MIR, vessel type (Institute of Atomic Reactors, Dimitrovgrad)
SM-2, vessel type (Institute of Atomic Reactors, Dimitrovgrad)
RBT-10/1, pond type, 10,000 kilowatts (Institute of Atomic Reactors, Dimitrovgrad)
RBT-10/2, pond type, 10,000 kilowatts (Institute of Atomic Reactors, Dimitrovgrad)
RBT-6, pond type (Institute of Atomic Reactors, Dimitrovgrad)
BIGR, uranium-graphite impulse type with air cooling (Institute of Experimental Physics, Arzamas-16)
BR-1, uranium-metal impulse type (Institute of Experimental Physics, Arzamas-16)
BIR-2M, uranium-metal impulse type (Institute of Experimental Physics, Arzamas-16)
VIR-2M, solution impulse type (Institute of Experimental Physics, Arzamas-16)
IRT-T, pond type (Institute of Nuclear Physics of Tomsk Polytechnics Institute, Tomsk)
WWR-T, tank type (Norilsk Mining Combine)
WWR-M, tank type (Institute for Nuclear Research, Kiev)
IR-100, pond type (Navy Institute of the Ministry of Defense, Sevastopol, Crimea)
IRT-M, pond type (Institute of Nuclear Power, Minsk)
IRT-M, pond type (Institute of Nuclear Physics, Tbilisi)
WWR-CM, tank type (Institute of Nuclear Physics, Tashkent)
[One table note, "Under reconstruction," survives but cannot be attributed to a specific reactor.]

[A table of accidents at nuclear weapons complex facilities follows; the recoverable entries describe events and their consequences:]
Waste disposal (Lake Karachay): off-site contamination when the lake dried; winds blew radioactive silt over a tract 75 km long and 1,800-2,700 square km; over 63 settlements with 41,500 inhabitants were affected.
One fatality; one case of severe radiation sickness requiring amputation of legs.
An electrode failed in a ceramic melter and the contents spilled onto the building floor; the furnace was decommissioned in February 1987.
Radioactive contamination of a drainage passage.
A leak in a storage site for untreated high-level waste.
An explosion in reprocessing equipment; two men received chemical burns and one died.
A large reprocessing tank exploded, causing extensive plant damage and off-site contamination that spread over a mostly forested area of approximately 123 square km; no worker injuries were reported.
Plutonium gases were released by the plant's ventilation system; no damage to the workshop or worker injuries.
[No information is available for the remaining entries.]

Although the majority of international nuclear assistance to the countries of the former Soviet Union is focused on the safety of Soviet-designed civil nuclear power reactors, the international community is also providing some assistance related to radiation protection, radioactive waste management, and other activities not directly related to the nuclear power reactors. About a dozen countries and international organizations are involved in these other bilateral and multilateral assistance projects. Because there is no comprehensive compilation of this international assistance, estimating exactly how much each country has committed to promote nuclear safety and radiation protection issues other than civil nuclear power activities is difficult. Appendix VII describes U.S. efforts in this area.

According to an official of Norway's Ministry of Foreign Affairs, Norway's assistance focuses largely on Russia, Estonia, Lithuania, and Ukraine, with a primary concern for environmental health. About two-thirds of Norway's nuclear assistance is focused on radiation protection and radioactive waste management, and assistance for these areas is expected to be about $20 million for 1995. One of Norway's greatest concerns has been the dumping of nuclear waste in the Arctic seas. As a result, the Norwegian Parliament has approved a plan to address this problem, and Norway has participated in several marine expeditions to assess radioactive contamination in the Kara and Barents seas.

Sweden's assistance in radiation protection and waste management has focused primarily on the Baltic countries (i.e., Estonia, Latvia, and Lithuania), Belarus, and Russia. Assistance projects have varied greatly and included studying radioactive contamination in the Arctic Ocean, providing equipment to detect radiation, environmental monitoring, installing emergency warning systems, and assessing nuclear waste management problems. A Swedish official estimated that Sweden has already spent around $10 million for these projects.

Japan's assistance includes efforts to avoid further dumping of radioactive waste in the Sea of Japan. As recently as 1993, Russia dumped a large volume of low-level liquid radioactive waste into the Sea of Japan from its fleet of nuclear-powered submarines based near Vladivostok. In response to the dumping of this waste, the Japanese government agreed to assist in underwriting a joint venture between a Russian firm and a Japanese firm to construct and operate a facility to store and process low-level liquid radioactive waste. As of August 1995, construction had not begun on this facility.
In 1993, the International Atomic Energy Agency (IAEA), in conjunction with the United Nations Development Program, initiated a program to strengthen radiation protection and nuclear safety infrastructures as well as identify the types of assistance needed in the former Soviet Union. As of May 1995, IAEA had completed fact-finding missions to nine countries: Armenia, Belarus, Estonia, Latvia, Lithuania, Kazakhstan, Kyrgyzstan, Moldova, and Uzbekistan. IAEA plans to conduct missions to the remaining countries of the former Soviet Union. In October 1994, IAEA identified approximately $19 million to implement the assistance packages developed for these countries. As of May 1995, IAEA had provided some equipment under this program, such as radiation-monitoring devices, to four countries through emergency IAEA funding and some additional assistance through its regular technical cooperation program. IAEA is awaiting funding to implement the proposed assistance packages.

In 1995, the United States agreed to provide $300,000 to support IAEA programs in Moldova and Uzbekistan. The funds will provide (1) a national system to notify, register, and license radiation sources; (2) training to ensure a national capability to track the disposition of radiation sources; and (3) a mechanism to manage radioactive waste through training, technical assistance, and equipment.

The European Union also provides nuclear assistance to the former Soviet Union. Although about 95 percent of the European Union's funding for safety assistance is targeted to nuclear power reactors, the remaining 5 percent, or about $3.9 million, funds a variety of projects for radiation protection and radioactive waste management. These projects include (1) assessing the extent of radioactive waste contamination in the Barents Sea and the Sea of Japan; (2) supporting countries' regulatory authorities; and (3) preparing site remediation plans at uranium mines.

[Appendix table of U.S. program expenditures; the dollar amounts did not survive. The listed programs are:]
Study of radioactive waste contamination in the Arctic region and its effect on Alaska
Studies of long-term radiation releases in Russia and Kazakhstan
Assessment of radioactive waste contamination at a former Soviet naval nuclear submarine training facility at Paldiski in Estonia
Development of technology on a cooperative basis with the Russians for radioactive waste cleanup
Purchase of plutonium-238 isotope, with proceeds to be partly used to rehabilitate the radioactively contaminated areas of the Chelyabinsk plutonium production facility
Research on radiation's effects at the Chelyabinsk production facility
Design study to upgrade and expand a low-level liquid radioactive waste-processing facility
Extra-budgetary contribution to the International Atomic Energy Agency for radiation protection activities in the former Soviet Union
Assistance to Russian and Ukrainian regulating bodies in developing programs to govern the use of radioactive materials
Note 1: Expenditures rounded to thousands of dollars.
Note 2: This table does not include expenditures for U.S. assistance to improve methods of safeguarding nuclear materials at facilities in the former Soviet Union.

Public Law 102-396 directed DOD to spend not less than $10 million to study, assess, and identify the disposal of nuclear waste by the former Soviet Union in the Arctic region. Subsequently, an additional $20 million has been earmarked for this research. DOD and the Office of Naval Research, under the oversight of the Defense Nuclear Agency, are responsible for addressing radioactive waste contamination of the Arctic region.
Most of this effort has been devoted to research projects and expeditions in the Arctic seas to obtain water, sediment, and biological samples and test them for radiological contamination. For example, in 1993, five ships collected samples in the eastern Arctic near nuclear dump sites and the estuaries of major rivers, and an additional five ships operated in the western Arctic near Alaska. According to a Navy official, the preliminary results of the testing do not indicate a radiation risk in the region of Alaska. DOD is continuing to support projects to monitor and evaluate the risks around the Arctic and North Pacific region from the former Soviet Union's disposal and discharge of nuclear waste materials.

Since 1992, DOD's Armed Forces Radiobiology Research Institute has focused on several projects dealing with radioactive contamination in the former Soviet Union. The Institute's mission is to conduct research in the field of radiobiology and related matters. The Institute has, among other things, (1) studied the long-term medical effects of radiation releases into Russia's Techa River, (2) investigated the consequences of nuclear tests at Kazakhstan's Semipalatinsk test site, and (3) developed documentaries on the radiation conditions at Krasnoyarsk and at the area where the Russian nuclear-powered and armed submarine, Komsomolets, sank in 1989.

DOE and countries of the former Soviet Union are jointly conducting activities to develop technology in the areas of environmental restoration and waste management. Among other things, DOE seeks to (1) identify and access former Soviet Union technologies and technical information available at key former Soviet Union institutes that could help accelerate U.S. cleanup of nuclear waste and (2) increase U.S. and former Soviet Union opportunities in the private sector for environmental restoration and waste management. Key areas of interest for the United States are vitrification, waste separation technologies, and migration patterns of radioactive contamination. Program activities are arranged among DOE, its laboratories, and Russian and Ukrainian institutes. According to a DOE official, although the program is still in its early stages, some Russian technologies look promising.

In January 1994, the United States and Russia signed a bilateral agreement to support joint cooperative research and the exchange of information on the health and environmental effects of radiation. A Joint Coordination Committee for Radiation Effects Research was established, and DOE is the lead agency for the U.S. government. The first major research effort focuses on identifying the cumulative effects of radiation on workers and the population around the Chelyabinsk-65 region. To date, joint working groups have been established, and workshops and seminars have been held both in Russia and the United States. The United States plans to send research teams into Russia in the latter part of 1995 to begin joint research activities with Russian scientists.

In July 1994, the President of the United States issued a statement committing DOE to participate in a program of technical cooperation with the Republic of Estonia. The United States, as part of an international effort, is helping Estonia evaluate the environmental impacts of a former Soviet naval training facility at Paldiski. This facility houses two nuclear training reactors, one 70-megawatt and one 90-megawatt. The fuel from both reactors has been removed and transported back to Russia.
DOE is assisting with several projects, including a decommissioning plan, an overall site characterization study, and training. All technological cooperation projects involve Estonian personnel, who will receive training so they can participate in all phases of the projects. In March 1995, a U.S. team of officials from DOE, Sandia National Laboratory, and Los Alamos National Laboratory made a site visit to describe the extent of contamination and prepare a plan for follow-on actions. According to a DOE official, the Paldiski project may benefit the U.S. cleanup program through its evaluation of new remediation technologies.

In December 1992, DOE agreed to purchase up to 40 kilograms of plutonium-238 from Russia for civilian space power applications. As of March 31, 1995, DOE had purchased approximately 9 kilograms at a cost of approximately $11.8 million. Russia agreed to use the hard currency received from the sale to remediate the environment and rehabilitate workers and citizens in the Chelyabinsk region. In August 1994, DOE received a detailed accounting from Russia concerning how the funds from the sale of the first shipment, which totaled about $5.9 million, were distributed. Of this amount, 38 percent (or $2.2 million) was paid as a federal profit tax. Twenty-five percent (or $918,000) of the remainder was transferred to the Chelyabinsk region's budget to cover unspecified legislated social needs. The balance, less a banker's commission fee, went as follows: approximately $2.6 million for improvements to waste storage and about $158,000 to support a health center, medical rehabilitation, and treatment of workers and citizens near Chelyabinsk-65. (This distribution is reconstructed in the sketch below.)

NRC is providing Russian and Ukrainian personnel with assistance to help establish regulatory controls over radioactive wastes, spent fuels, and materials. For example, assistance is being provided to Russia to help strengthen regulatory programs by providing technical expertise and on-the-job training. NRC believes that such technical exchanges and training help promote safety awareness in these countries and make them better able to improve nuclear safety themselves.

EPA, with assistance from the State Department, has assessed the feasibility of the conceptual design to expand the waste-processing facility operated by the Murmansk Shipping Company. This expansion includes handling the waste associated with decommissioning nuclear submarines. EPA is currently developing the engineering design; the expansion and upgrading of the facility is expected to start in the fall of 1995. According to EPA, the expanded and upgraded processing capacity would provide Russia with an environmentally sound alternative to dumping nuclear waste into the Arctic Ocean. In September 1994, Russia announced that it intended to continue its present policy of voluntary commitment to a recent amendment to the London Convention, which bans the dumping of all other radioactive matter, including low-level radioactive waste, into the seas. Russia's waste-processing problems may also contribute to the reduced rate at which it deactivates and decommissions nuclear submarines. Currently, over 100 nuclear-powered submarine hulls await final disposition. The initiative to expand the capacity to store nuclear waste is being coordinated with Norway. According to EPA, the program could cost about $3 million if the facility is constructed. The United States and Norway plan to share the cost equally.
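As a check on the figures DOE reported for the first plutonium-238 shipment, the distribution can be reconstructed arithmetically. A minimal sketch; all inputs are the rounded figures from DOE's accounting, and the banker's commission is computed as the residual, which is an inference rather than a reported number:

```python
# Figures as reported by DOE for the first plutonium-238 shipment
# (all approximate / rounded in the report).
proceeds      = 5_900_000   # total sale proceeds
profit_tax    = 2_200_000   # 38% federal profit tax
regional      =   918_000   # 25% of the remainder, to the Chelyabinsk region budget
waste_storage = 2_600_000   # improvements to waste storage
health        =   158_000   # health center, rehabilitation, and treatment

# Sanity checks against the stated percentages (rounding explains the small gaps).
print(f"38% of proceeds:  ${proceeds * 0.38:,.0f}")                  # ~ $2,242,000
print(f"25% of remainder: ${(proceeds - profit_tax) * 0.25:,.0f}")   # ~ $925,000

# The banker's commission is the residual; an inference, not a reported figure.
commission = proceeds - profit_tax - regional - waste_storage - health
print(f"Implied commission: ${commission:,}")                        # ~ $24,000
```

The reported amounts reconcile to within rounding error, leaving only a small residual consistent with the banker's commission fee mentioned in the accounting.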
In December 1994, the U.S. Trade and Development Agency (TDA) signed two grants for feasibility studies on options to replace the power and steam that will be lost as a result of the shutdown of the three plutonium production reactors at Tomsk-7 and Krasnoyarsk-26. Under the terms of a June 1994 protocol signed by the Vice President of the United States and the Prime Minister of Russia, these three operating reactors should be shut down no later than the year 2000. After initially awarding a grant of $850,000 to Tomsk authorities to evaluate coal and natural gas as alternative fuels, TDA increased the grant to $1,060,000 to ensure a broader assessment. In March 1995, the Tomsk authorities selected a U.S. firm to perform the study. TDA also provided a $780,000 grant to the municipality of Krasnoyarsk-26, primarily to evaluate the feasibility of two options involving coal as the alternative fuel. In June 1995, the same U.S. firm was selected to undertake the study. Both studies began in August 1995.

To determine the number of nuclear facilities in the countries of the former Soviet Union, we developed an inventory from several publicly available documents. We obtained data from the Monterey Institute of International Studies (Monterey, California), the Natural Resources Defense Council, the International Atomic Energy Agency (Vienna, Austria), and various U.S. government agencies. In most instances, the nuclear facilities were listed in more than one source. Additionally, we sought to corroborate the information through discussions with officials from U.S., international, and private organizations.

We met with or obtained information from officials from the former Soviet Union. For example, we had discussions with and obtained information from key Russian representatives of Gosatomnadzor (GAN), the regulatory agency, and the Ministry of Atomic Energy (MINATOM). We also met with an official from Russia's Permanent Mission to the International Organizations in Vienna, Austria. Information pertaining to research reactors in the former Soviet Union was obtained from the Kurchatov Institute of Atomic Energy, which is Russia's leading research and development institution in the field of nuclear energy. We discussed the condition of Kazakhstan's nuclear facilities with the Deputy Director of Kazakhstan's Institute for Strategic Studies. We also reviewed pertinent information about facilities in countries of the former Soviet Union that had been prepared in response to an international forum on nuclear safety sponsored by the International Atomic Energy Agency.

To address facility safety and environmental issues, we reviewed available public information and had discussions with nuclear safety experts, primarily from the Department of Energy, several national laboratories, and nuclear weapons facilities. We met or had discussions with numerous officials who had recently visited facilities at Tomsk and Krasnoyarsk. In addition, many of these same officials had participated in workshops on noncivil nuclear power reactor safety with their Russian counterparts. Specifically, we had discussions with officials from the following DOE national laboratories: Los Alamos (Los Alamos, New Mexico), Sandia (Albuquerque, New Mexico), Idaho National Engineering Laboratory (Idaho Falls, Idaho), and Lawrence Livermore National Laboratory (Livermore, California).
We also had discussions with officials from DOE's Savannah River Site (Aiken, South Carolina) as well as officials from the Pacific Northwest Laboratory (Richland, Washington), who had developed considerable information about Russia's plutonium production reactors and problems with environmental waste contamination. We reviewed available documentation, including trip reports, prepared by DOE and national laboratory officials who had recently visited Russian facilities.

To determine the amount and type of assistance being planned or provided, we obtained pertinent data from various U.S. government agencies that have been providing assistance or are knowledgeable about assistance to the former Soviet Union. Specifically, we obtained data from the following U.S. departments and agencies: the Department of Defense's Office of Naval Research, the Department of Energy, the Department of State, the Environmental Protection Agency, the Trade and Development Agency, and the Nuclear Regulatory Commission. We did not independently verify the accuracy of the data provided by these agencies.

We discussed nuclear safety assistance issues with representatives from several international organizations and foreign governments. We met with officials at IAEA, the European Union (in Brussels, Belgium), and the Organization for Economic Cooperation and Development's (OECD) Nuclear Energy Agency (in Paris, France). Several IAEA officials had recently visited eight former Soviet republics and had been to various facilities in the past 2 years, including Tomsk. We attended a May 1995 workshop at the IAEA on nuclear waste issues in Russia and discussed assistance efforts with representatives from Sweden, Norway, Finland, and Japan. We reviewed various databases to identify international safety assistance, including OECD's Center for Cooperation with Economies in Transition database as well as data from the G-24 Nuclear Safety Assistance Coordination Center in Brussels, Belgium.

The following are GAO's comments on NRC's letter dated August 24, 1995.
1. While we recognize that safeguarding nuclear material is an important issue, our report focused primarily on the operational safety of nuclear facilities in countries of the former Soviet Union. Operational safety of nuclear facilities and safeguarding of materials are generally considered distinct activities. We plan to discuss issues pertaining to U.S. assistance to improve nuclear material controls at facilities in the former Soviet Union in a forthcoming GAO report.
2. NRC commented that we should review a July 1995 Russian presidential decree that changed the responsibilities of the Russian nuclear regulatory agency. In response, we contacted the Acting Deputy Chairman of Gosatomnadzor (GAN), the Russian nuclear regulatory agency, who informed us that the Russian President's decree had limited GAN's "sphere of activity," particularly regarding the manufacturing, testing, and use of nuclear weapons. These activities are within the jurisdiction of Russia's Ministry of Defense. He noted, however, that all Russian Ministry of Atomic Energy installations associated with the production of nuclear material are subject to GAN's regulatory oversight. This includes inspection of the operating plutonium production reactors at Tomsk and Krasnoyarsk as well as associated reprocessing facilities. GAN's Acting Deputy Chairman stressed the value of nuclear legislation as a means to improve nuclear safety in Russia.
3. The reference to enrichment facilities in Ukraine has been deleted from our report.
4. The report has been updated to reflect this new information.
5. The report has been changed to reflect this clarification.

Major contributors to this report: Pamela J. Timmerman, Evaluator-in-Charge; Lauren V.A. Waters, Staff Evaluator.

Pursuant to a congressional request, GAO provided information on U.S. and international efforts to address nuclear safety and environmental problems in the former Soviet Union. GAO found that: (1) the former Soviet Union has at least 221 nuclear facilities operating, 99 of which are located in Russia; (2) as many as 20,000 organizations throughout the former Soviet Union are using various types of radiation for medicine, industry, and research; (3) aging facilities and equipment, inadequate technology, a lack of commitment to safety, the absence of independent nuclear regulatory bodies, and a lack of funding are contributing to unsafe conditions in the former Soviet Union; (4) efforts are under way to study the radiological effects of operating nuclear facilities and nuclear-powered submarines; and (5) the United States has committed $55 million to support programs focusing on the environmental and health effects caused by the production of nuclear weapons in the former Soviet Union.
The central Appalachian coal region plays a large part in supplying the country with its energy needs. Specifically, in 2008, West Virginia and Kentucky were the second- and third-largest coal-producing states in the nation, behind Wyoming, and accounted for more than 76 percent of the coal produced from surface mines in Appalachia. West Virginia produced about 69 million tons of coal from surface mines, while Kentucky produced about 51 million tons. Virginia produced close to 9 million tons and Tennessee less than 2 million tons from surface mines in 2008.

SMCRA requires mine operators to obtain a permit before starting to mine. The permit process requires operators to submit detailed plans describing the extent of proposed mining operations, how reclamation on the mine site will be achieved, and the estimated per-acre cost of reclamation. In reclaiming the mine site, operators must comply with regulatory standards that govern, among other things, how the reclaimed area is regraded, replanting of the site, and the quality of water flowing from the site. (See app. II for selected details about these key reclamation standards.) In general, an operator must reclaim the land to a use it was capable of supporting before mining or to an alternative post-mining land use that the regulatory authority deems higher or better than the pre-mining land use. Additionally, although the operator is generally required to redeposit spoil on the mine site so that it approximates the original contour of the site, the operator may in certain circumstances receive a variance to this general requirement and leave the land flat or gently rolling. In addition, a mountaintop removal operation is one that, by definition, will not restore the area to its approximate original contour. However, only specific types of post-mining land uses (including industrial, commercial, agricultural, residential, or public uses) are allowed for mountaintop removal operations.

SMCRA requires the operator to submit a bond in an amount sufficient to ensure that adequate funds will be available for the regulatory authority (either OSM or a state with primacy) to complete the reclamation if the operator does not do so. The bond provisions of SMCRA apply generally to all types of coal mines and do not include any requirements that are specific to mines with valley fills. However, the bond amount for a particular site cannot be less than $10,000 and must also be sufficient to ensure the completion of the reclamation plan for that particular site if the work had to be completed by the regulatory authority in the event of forfeiture. In this report, we refer to a bond that is equal to the expected cost to reclaim the entire site as a "full-cost bond." OSM has prepared guidance for mine operators on how to calculate their bond amounts to capture the likely costs of reclamation. Bond amounts can be adjusted as the size of the permit area or the projected cost of reclamation changes. When all reclamation standards identified in SMCRA and the operator's permit, including compliance with water quality standards, have been met, the bond is completely "released" to the operator.
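Stated as a rule, a full-cost bond is simply the larger of the statutory $10,000 floor and the estimated cost of completing the site's reclamation plan. A minimal sketch of that calculation follows; the per-acre cost figure is a hypothetical placeholder, since actual estimates come from the permit's reclamation plan and OSM's bond-calculation guidance:

```python
# Full-cost bond under SMCRA: the bond must cover the regulatory authority's
# estimated cost to complete the reclamation plan, and can never be below $10,000.
STATUTORY_FLOOR = 10_000

def full_cost_bond(acres: float, est_cost_per_acre: float) -> float:
    """Required bond for a site, given the permit's reclamation cost estimate.

    est_cost_per_acre is hypothetical here; real estimates are site-specific.
    """
    return max(STATUTORY_FLOOR, acres * est_cost_per_acre)

# Bond amounts are adjusted as the permit area or projected costs change.
print(full_cost_bond(acres=250, est_cost_per_acre=3_500))  # 875000.0
print(full_cost_bond(acres=1, est_cost_per_acre=3_500))    # 10000 (floor applies)
```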
The OSM regulations implementing SMCRA recognize three major types of bonds: corporate surety bonds, collateral bonds, and self-bonds. A surety bond is a bond in which a surety company guarantees the performance of the permittee's obligation to reclaim the mine site. If the mining company does not reclaim the site, the surety company must pay the bond amount to the regulatory authority, or the regulatory authority may allow the surety company to perform the reclamation instead of paying the bond amount. Collateral bonds include cash; certificates of deposit; liens on real estate; letters of credit; federal, state, or municipal bonds; and investment-grade securities deposited directly with the regulatory authority. A self-bond is a bond in which the permittee guarantees its own performance with or without separate surety. Self-bonds are available only to operators who meet certain financial conditions. To remain qualified for self-bonding, operators must, among other requirements, maintain a net worth of at least $10 million, possess fixed assets in the United States of at least $20 million, and have an "A" or higher bond rating.

SMCRA also authorizes states to enact an OSM-approved alternative to a full-cost bonding system as long as the alternative achieves the same objectives. One kind of alternative bonding system is known as a "bond pool." Under this type of system, the operator may post a bond (e.g., a surety bond or collateral bond) for an amount determined by multiplying the number of acres in the permit area by a per-acre assessment. The per-acre assessment may vary depending on the site-specific characteristics of the planned mining operation and the operator's history of compliance with state regulations. However, the per-acre bond amount may be less than the estimated cost of reclamation. To supplement the per-acre bond, the operator generally must pay a fee for each ton of mined coal and may also be required to pay other types of fees. Funds are placed within a pool and can be used to reclaim sites that participants in the alternative bonding system do not reclaim. Under OSM regulations, all alternative bonding systems must provide a substantial economic incentive for the operator to comply with reclamation requirements and must ensure that the regulatory authority has adequate resources to complete the reclamation plan for any sites that may be in default at any time.
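In outline, a bond pool trades a smaller up-front site bond for ongoing contributions to a shared fund. A minimal sketch of that arithmetic; the per-acre assessment, per-ton fee, and operator figures are hypothetical placeholders, since actual rates vary by state and by compliance history (the state-specific rates are described later in this report):

```python
# Sketch of an alternative bonding system ("bond pool"): each operator posts a
# per-acre bond that may be below full reclamation cost, and per-ton fees on
# mined coal accumulate in a shared pool used to reclaim forfeited sites.

def site_bond(acres: float, per_acre_assessment: float) -> float:
    # The assessment may vary with site characteristics and compliance history.
    return acres * per_acre_assessment

def pool_fee(tons_mined: float, fee_per_ton: float) -> float:
    # Supplemental per-ton fee paid into the shared pool.
    return tons_mined * fee_per_ton

# Hypothetical operator: 300 permitted acres, 500,000 tons mined in a year.
bond = site_bond(acres=300, per_acre_assessment=2_000)                # $600,000
contribution = pool_fee(tons_mined=500_000, fee_per_ton=0.10)         # $50,000

print(f"Site bond: ${bond:,.0f}; annual pool contribution: ${contribution:,.0f}")
```

The design trade-off is visible in the numbers: the posted bond can fall short of full reclamation cost, so the pool's solvency, not the individual bond, is what must be actuarially sound.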
Once bonds have been completely released to a mine operator, the relevant regulatory authority may terminate its jurisdiction under SMCRA. However, the regulatory authority may also revoke an operator's permit if the operator fails to comply with the permit's provisions. Under those circumstances, the operator may forfeit the bond to the regulatory authority. The regulatory authority then becomes responsible for reclaiming the land to the reclamation standards found in the operator's permit. If the amount forfeited is insufficient to pay for the full cost of reclamation, the operator remains liable for the remaining costs. The regulatory authority may complete reclamation and may sue the operator to recover additional expenses. Failure to complete reclamation has other serious consequences for mine operators: SMCRA prohibits applicants from obtaining future SMCRA permits if they have unabated violations of law or regulations applicable to surface mining, and state regulations specifically note that bond forfeitures based on violations that are not subsequently corrected disqualify operators from obtaining future permits.

The objective of the Clean Water Act is to restore and maintain the chemical, physical, and biological integrity of the nation's waters. Section 404 of the act allows the Corps to issue permits for the discharge of material, including fill material, into waters of the United States at specified disposal sites. Such permits are needed for the construction of a valley fill. Section 404(c) authorizes EPA to deny or restrict the use of any disposal site where it finds that the discharge will have unacceptable adverse effects.

Mining companies may be able to construct valley fills under one of two types of permits issued by the Corps. First, the mining company may be authorized to construct a valley fill under the Corps' "nationwide permit" for surface coal mining. A nationwide permit provides coverage for substantially similar activities that are expected to cause only minimal adverse environmental effects on an individual and cumulative basis. Second, the Corps may issue an "individual permit." Individual permits are issued on a case-by-case basis for activities that are expected to have more than a minimal impact. Before issuing an individual permit, the Corps must evaluate the operator's proposed activity for several factors, including, but not limited to, its effects on environmental values (such as fish, wildlife, and water quality) and safety issues, as well as any proposed mitigation for the project.

Under guidelines prepared pursuant to section 404 by the EPA Administrator and the Secretary of the Army, acting through the Chief of Engineers, the Corps may issue a permit to discharge fill material only if, at a minimum, compliance with the guidelines is demonstrated. One aspect of compliance is that the discharge does not cause or contribute to "significant degradation" of waters of the United States. Under these guidelines, an operator would not be permitted to discharge fill materials into waters of the United States if there is a practicable alternative to such a discharge and would be required to minimize discharges that cannot be avoided. If such discharges are unavoidable, the Corps can require as a condition of the permit that the operator compensate for the loss or degradation of regulated waters. In the case of valley fills that bury streams, such compensatory mitigation could involve (1) creating a new stream, (2) enhancing a degraded stream, or (3) preserving an existing stream. The mitigation work may be done within the permitted area (on-site) or outside of the permitted area (off-site). Mitigation may be performed by the mine operator or a third party, such as a public or nonprofit entity, under agreement with the Corps.

The Corps' Clean Water Act implementing regulations and related policies authorize the Corps' district engineers to require financial assurances when approving section 404 permits in order to ensure a high level of confidence that compensatory mitigation will be successfully completed. The Corps allows financial assurances to be in the form of bonds, escrow accounts, casualty insurance, letters of credit, legislative appropriations for government-sponsored projects, or other appropriate instruments, subject to the approval of the district engineer. If assurances are required, district engineers are to determine the amount based on factors such as the size and complexity of the compensatory mitigation project, the likelihood of success, the past performance of the project sponsor, and any other factors they deem appropriate.
Also, Corps district engineers must release financial assurances once they determine that the operator has demonstrated that a compensatory mitigation project has successfully met its performance standards. Typically, the monitoring period to assess the success of a compensatory mitigation project is 5 years, but this period may be extended for projects that take longer, such as stream restoration.

The Corps' authority to require financial assurances to ensure compensatory mitigation differs from the authority that mining agencies have under SMCRA to require bonds for mine reclamation. While SMCRA explicitly calls for mining agencies to require all operators to provide bonds, the Corps' Clean Water Act regulations authorize district engineers to decide whether financial assurances are necessary on a permit-by-permit basis. The district engineer may determine that financial assurances are not necessary for a specific project if an alternate mechanism is available to ensure a high level of confidence that the compensatory mitigation will be provided and maintained. While SMCRA authorizes mining agencies to directly hold and use financial assurances to ensure the required reclamation is completed if the operator defaults on its reclamation obligations, the Corps does not have statutory authority under the Clean Water Act to do so. In light of that limitation, the Corps' regulations and policies stipulate that if a district engineer does choose to require financial assurances, those assurances must be payable to a third party (such as a governmental or nongovernmental environmental management organization) that will agree to hold the funds and complete the mitigation in accordance with the Corps' instructions if the operator defaults on its obligations.

In addition to needing a Clean Water Act section 404 permit to construct a valley fill, mine operators need to obtain a National Pollutant Discharge Elimination System, or section 402, permit if they discharge pollutants from industrial point sources. Point sources are discrete conveyances such as pipes. Section 402 permits, generally administered by the states under EPA-approved programs, include limits on the amount of pollutants, such as suspended solids, that mines can directly discharge into bodies of water. Surface coal mines contain sediment ponds and drainage ditches that collect runoff from all disturbed areas, including water from the base or perimeter of valley fills or other locations that may then flow into a stream. These flows may need to comply with point source pollutant limitations specified in a section 402 permit. Section 402 permits also require that mine operators submit periodic discharge monitoring reports to the regulatory authority, which is typically a state agency. A mine operator cannot obtain the release of its SMCRA bond if the land is contributing suspended solids and other pollutants, in excess of applicable state effluent limitations, to stream flow or runoff outside the SMCRA permit approved area.

The regulatory authorities in the four states we reviewed have collectively authorized thousands of valley fills since the enactment of SMCRA in 1977. Although the total number of valley fills approved since 1977 is uncertain, data we collected from OSM, Kentucky, Virginia, and West Virginia show that at least 2,343 valley fills have been authorized since January 2000.
Specifically, Kentucky authorized 1,488 valley fills through July 30, 2008; Tennessee authorized 17 valley fills through December 31, 2008; Virginia authorized 327 valley fills through August 17, 2009; and West Virginia authorized 511 valley fills through July 30, 2008. Notably, approval of a valley fill does not necessarily mean that it will be constructed. For example, according to Virginia state officials, of the 327 valley fills approved between January 2000 and August 2009, 97 were completed, 103 were under construction, 90 were not started, and 37 were "not needed and/or not constructed."

While OSM and state mining agencies have been approving SMCRA permits with valley fills since the late 1970s, the Corps did not begin to consistently require section 404 permits for valley fills until the spring of 2002, when the Corps and EPA jointly issued regulations revising the definition of fill material. Prior to this revision, the Corps interpreted excess spoil to be a "waste" regulated under section 402 of the Clean Water Act rather than a fill material regulated under section 404.

The Corps could not readily provide us with data on the total number of section 404 permits it has issued for valley fills, the number of operators it has required to complete mitigation for valley fills, the types of mitigation called for, or the status of mitigation projects. The Corps did provide us electronic data showing that in the four states we reviewed it approved 378 Nationwide Permit 21 permits from March 2002 through December 2008 and 171 individual permits for surface coal mining operations from March 2002 through September 2009. However, its database does not contain information on how many of those permits were for valley fills. In addition, its electronic database indicated that only 57 of the nationwide permits required compensatory mitigation projects; Corps officials believed that number to be understated because the database is not complete. Although not captured in its electronic database, the information on valley fills and required compensatory mitigation projects is more completely documented in the Corps' paper permit files, according to agency officials.

The four states in our review use different approaches to fulfill SMCRA's requirement that mine operators provide adequate financial assurances for completing reclamation. These states primarily vary in whether they require mine operators to fulfill their financial assurance obligation strictly through a full-cost bond or whether they allow operators to use alternative bonding systems that combine bonds, taxes on coal production, and other sources of funding. The Corps has not used its discretionary authority to require surface coal mine operators in the four states to provide financial assurances for mitigation work required as part of their section 404 permit, according to Corps officials. Furthermore, Corps officials said the Corps has relied on other permit conditions for assurance that mitigation will be satisfactorily completed.

The three states with primacy that we examined (West Virginia, Virginia, and Kentucky) have financial assurance programs that differ from each other and from the federal program that OSM administers in Tennessee. Each of the three states has received approval from OSM to use an alternative bonding system, although they do so to varying degrees. West Virginia requires that all operators participate in a bond pool. Virginia relies primarily on a bond pool but also uses a full-cost bonding system.
Kentucky relies primarily on a full-cost bonding system but also uses a bond pool. Tennessee uses a full-cost bonding system.

In West Virginia, all mine operators must participate in the state's alternative bonding system. The state has limited the site-specific per-acre bond to between $1,000 and $5,000. The state also collects a tax on each ton of coal produced; the current tax is 14.4 cents per ton of clean coal produced. The state deposits those funds into a Special Reclamation Fund and a Special Reclamation Water Trust Fund. As of June 2008, the combined balance of the two funds was $46.9 million. The state can use these funds to reclaim lands that were permitted and abandoned after August 3, 1977, and for which the bond amount is insufficient to cover reclamation.

The West Virginia legislature created an advisory council in 2001 to ensure the effective, efficient, and financially stable operation of the Special Reclamation Fund. The advisory council is required to report to the legislature every year on the financial condition of the fund. Furthermore, the West Virginia Department of Environmental Protection is required to conduct formal actuarial studies every 2 years and informal reviews annually of the Special Reclamation Fund and the Special Reclamation Water Trust Fund. In January 2009, recognizing that the tax rate was scheduled to drop from 14.4 cents per ton to 7 cents later that year, the advisory council recommended that the state legislature adjust the tax rate to 13 cents per ton for at least a 5-year period or provide for the additional funding needed to ensure solvency. While the council concluded that the fund was solvent as of January 2009, it stated that, based upon projections in the 2008 actuarial study and with only the revenue sources known at that time, the fund balance would be negative by 2015. In April 2009, the state legislature set the tax rate at 14.4 cents per ton, effective July 1, 2009; called for a review of the tax every 2 years to determine whether it should be continued; and stipulated that the tax could not be reduced until the funds have sufficient monies to carry out required reclamation.
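The West Virginia rules above reduce to a capped per-acre site bond plus a per-ton reclamation tax. A minimal sketch under those rules; the acreage and tonnage figures are hypothetical:

```python
# West Virginia alternative bonding system, as described in the text:
# a site-specific per-acre bond limited to $1,000-$5,000, plus a tax of
# 14.4 cents per ton of clean coal produced, paid into the Special
# Reclamation Fund and Special Reclamation Water Trust Fund.
PER_ACRE_MIN, PER_ACRE_MAX = 1_000, 5_000
TAX_PER_TON = 0.144  # dollars per ton, the rate effective July 1, 2009

def wv_site_bond(acres: float, per_acre_rate: float) -> float:
    # The state sets the per-acre rate within the statutory band.
    rate = min(max(per_acre_rate, PER_ACRE_MIN), PER_ACRE_MAX)
    return acres * rate

def wv_reclamation_tax(tons_clean_coal: float) -> float:
    return tons_clean_coal * TAX_PER_TON

# Hypothetical 400-acre permit producing 1 million tons of clean coal a year.
print(f"Site bond:       ${wv_site_bond(400, 5_000):,.0f}")       # $2,000,000
print(f"Annual tax paid: ${wv_reclamation_tax(1_000_000):,.0f}")  # $144,000
```

Even at the $5,000-per-acre cap, the posted bond may fall well short of full reclamation cost, which is why the solvency of the two funds is the focus of the advisory council's reviews.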
Virginia offers the option of a bond pool to operators who meet eligibility criteria; other operators must post a full-cost bond. As of October 2009, the majority of active surface mine permits were covered by the bond pool. According to officials from the Virginia Department of Mines, Minerals and Energy, as of October 13, 2009, there were 148 active surface mine permits in the bond pool and 18 surface mines covered by full-cost bonding. The total bonded amount in the bond pool was about $143 million, while the total for full-cost bonding was about $14 million. An operator must be able to demonstrate at least 3 consecutive years of compliance under Virginia's Coal Surface Mining and Coal Reclamation Act or any other comparable state or federal act to participate in the bond pool. Once in the pool, an operator cannot opt out. Operators in the pool must pay an entrance fee of $1,000 when the total balance of the pool is determined to be greater than $2 million; the entrance fee increases to $5,000 if the total fund balance falls below $1.75 million and remains at $5,000 until the balance again exceeds $2 million. A fee of $1,000 is required of all operators in the pool when the permit is renewed. Participants in the bond pool also furnish a bond of $1,500 or $3,000 per acre, depending on when the permit was issued. Regardless of acreage, bonds for operations entering the fund on or after July 1, 1991, must be at least $100,000. If forfeiture occurs, the state may, after using the available bond monies, use the bond pool funds as necessary to complete reclamation liabilities for the permit area.

To oversee the bond pool's general operations, the Virginia legislature created a reclamation fund advisory board that meets at least twice each year to make recommendations to the director of the Department of Mines, Minerals and Energy. The advisory board must also report to the director and to the governor on the pool's financial status and recommend to the director any new or amended regulations for administering or operating the pool. According to the department, the advisory board concluded in August 2009 that the fund was solvent.

Kentucky offers mine operators who meet eligibility criteria the option of participating in a bond pool, but the vast majority of operators provide full-cost bonds. According to the most recently available state data, as of May 2007, only 65 permits were covered by the bond pool. As of June 30, 2009, OSM data showed a total of 893 permits for surface mining in Kentucky. To participate in the bond pool, state regulations require that an operator have an acceptable or better history of compliance with the state's mining regulations, among other criteria. The cost of membership ranges from $1,000 to $2,500 and depends on a member's performance record. In addition, participants must obtain a bond that ranges from $500 to $2,000 per acre, depending on the performance rating of the member. Finally, members pay a 5 cent per-ton fee for surface-mined coal. When the Kentucky Bond Pool Fund reaches $17.4 million, the assessment of tonnage fees is to be suspended for all members who have made 36 or more monthly payments to the fund. If the fund level drops to $12.3 million, the tonnage fee requirement will be reinstated for all members. The funds in the pool are available only for reclamation costs at sites operated by members of the pool. Bond pool members' per-acre bonds are fully released at the completion of the initial phase of reclamation; after the initial phase, a permit is covered only by the bond pool.

In Kentucky, the law requires a review of the actuarial soundness of the bond pool every 3 years. The last Kentucky actuarial study, which evaluated the pool as of May 31, 2007, concluded that the fund, with a balance of $19.7 million, was solvent and that it had been building its assets at a faster pace than the increase in its outstanding liabilities. As an indication of the pool's financial soundness, the study noted, the pool could survive the failure of its two largest members. The study concluded that the fund's soundness had improved because its liability was more evenly spread among its members. The study recommended that the state continue the 5 cent per ton fee for surface coal mines and limit the maximum amount of bond funds held for any member operator to $6 million, or about 30 percent of the total bond pool. According to the state's bond pool administrator, the pool has continued the 5 cent per ton fee as recommended. He also said that no member of the bond pool has ever had bonds in excess of $4 million to $5 million because the program primarily offers bonding assistance to small coal operators.
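Both pools key their fees to fund balances with threshold rules, and Kentucky's tonnage-fee suspension is a hysteresis rule: the fee switches off above one balance and back on below a lower one. The sketch below encodes both rules as just described; eligibility screening and member performance ratings are omitted, and the way the fee state persists between the thresholds is our reading of the text:

```python
# Virginia bond pool entrance fee: $1,000 while the pool balance exceeds
# $2 million; $5,000 once the balance falls below $1.75 million, with the
# higher fee persisting until the balance again exceeds $2 million.
def va_entrance_fee(balance: float, higher_fee_in_effect: bool) -> tuple[int, bool]:
    if balance < 1_750_000:
        return 5_000, True
    if balance > 2_000_000:
        return 1_000, False
    # Between the thresholds, whichever fee was last in effect persists.
    return (5_000, True) if higher_fee_in_effect else (1_000, False)

# Kentucky tonnage fee (5 cents/ton): suspended for members with 36 or more
# monthly payments once the fund reaches $17.4 million; reinstated for all
# members if the fund drops to $12.3 million.
def ky_tonnage_fee_due(fund_balance: float, monthly_payments: int,
                       suspension_active: bool) -> tuple[bool, bool]:
    if fund_balance >= 17_400_000 and monthly_payments >= 36:
        suspension_active = True
    if fund_balance <= 12_300_000:
        suspension_active = False
    return (not suspension_active), suspension_active

fee, higher = va_entrance_fee(1_600_000, higher_fee_in_effect=False)
print(fee)   # 5000: a balance below $1.75 million triggers the higher fee
due, _ = ky_tonnage_fee_due(18_000_000, monthly_payments=40, suspension_active=False)
print(due)   # False: the fee is suspended above the $17.4 million threshold
```

The gap between the two thresholds in each rule prevents fees from flipping on and off as the fund balance hovers near a single cutoff.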
Tennessee is the only one of the four states we reviewed to use a full-cost bond system exclusively. As of September 30, 2008, the state had 15 active surface coal mines, and OSM held bonds totaling about $17.8 million for those 15 mines. In 2007, OSM revised its regulations for Tennessee to address concerns that full-cost bonds were not adequate to handle the problem of post-mining acid or toxic mine drainage. Specifically, the new regulations provide a mechanism in Tennessee to allow operators to establish a trust fund or annuity to cover the cost of post-mining pollution discharges in lieu of a performance bond. OSM's policy in Tennessee is to assume that post-mining pollution discharges will need to be treated for at least 75 years, barring evidence to the contrary. When OSM established the trust fund and annuity options in Tennessee, it stated that a system that provides an income stream may be better suited than full-cost bonds to ensure the long-term treatment of post-mining pollution discharges. According to OSM, surety bonds, the most common form of full-cost bond, are especially ill-suited for this purpose because surety companies normally do not underwrite a bond when there is no expectation of release of liability. The addition of this authority in Tennessee builds upon the experience of Pennsylvania, which had already established a process for accepting trust funds or annuities to pay for post-mining discharges.
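OSM's preference for income-stream instruments can be made concrete by sizing a trust fund to cover an assumed 75-year treatment obligation with the standard present-value-of-an-annuity formula. The annual treatment cost and rates of return below are hypothetical illustrations; OSM's stated policy supplies only the 75-year default horizon:

```python
# Size a trust fund whose income covers an annual water-treatment cost for
# 75 years (OSM's default assumption in Tennessee). Present value of an
# annuity: PV = C * (1 - (1 + r) ** -n) / r.
def trust_fund_size(annual_cost: float, real_return: float, years: int = 75) -> float:
    if real_return == 0:
        return annual_cost * years
    return annual_cost * (1 - (1 + real_return) ** -years) / real_return

# Hypothetical discharge requiring $50,000 per year of treatment:
print(f"{trust_fund_size(50_000, 0.03):,.0f}")  # ~ $1,485,000 at a 3% real return
print(f"{trust_fund_size(50_000, 0.00):,.0f}")  # $3,750,000 with no investment return
```

Because the obligation may never end, a fund sized this way can keep paying indefinitely from investment income, whereas a surety bond presumes an eventual release of liability.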
In October 2009, the acting director of OSM announced that OSM was making bonding a national priority of its 2010 annual evaluation of state mining programs. Specifically, the acting director instructed regional and field office directors to evaluate how states are complying with their own regulations for determining required bond amounts. The instructions further stated that the evaluations should assess whether (1) the states’ methods of determining bond amounts ensure that adequate funds are available to the state in the event that the operator forfeits its bond, (2) the bond calculation methods include a mechanism to adjust bond amounts or provide other financial assurance to cover the cost of unanticipated long-term postmining pollution discharges that develop after permit approval, and (3) the state re-evaluates the bond amount each time a permit is revised or renewed. According to an OSM official in the Appalachian Regional Office, OSM chose bonding as a national priority after surveying managers and staff for their oversight priorities. OSM’s November 2009 work plan calls for OSM to examine a sample of forfeited sites to determine whether adequate bonds were posted and whether the sites were reclaimed as proposed in their reclamation plans. For those sites covered in part or in total by a full-cost bond, OSM plans to use its directive on bond calculation as a basis for evaluating the adequacy of bonds. OSM plans to finalize a report on its findings by September 1, 2010. In addition, OSM announced in November 2009 that it was considering rulemaking to address concerns related to bonding programs. One of OSM’s concerns is that mine operators do not always apply for bond release in a timely manner, particularly for phases II and III. OSM noted that there is no legal requirement that operators apply for bond release in a timely manner and identified several options for improving timeliness. Another concern of OSM was that the data needed to assess the success of reclamation have not been adequate. To improve data quality, OSM is considering requiring operators to submit an annual status report to the regulatory authority with information on areas that are permitted, bonded, disturbed, backfilled and graded, newly planted, and that have reached one or more of the phases of bond release. While OSM has made bonding an oversight priority for 2010 and is considering related rulemaking options, it has reported on various aspects of state bonding programs in prior annual evaluations. For example, in its 2009 evaluation year report on West Virginia, OSM reported that it did not appear that the state was meeting requirements for inspections at bond forfeiture sites. OSM estimated that the state had completed about 55 percent of the required inspections at bond forfeiture sites. In its 2009 report on Virginia, OSM reported that it had reviewed a sample of operators that applied for phase III bond release during the year and found that on-the-ground reclamation had been successful. In its 2009 report on Kentucky, OSM provided information on the number of forfeited permits at which reclamation was complete or underway. OSM has reported on the states’ bonding programs in other evaluations, but it was not within the scope of our review to assess the effectiveness of those programs. The Corps has not required operators with section 404 permits for mines with valley fills to provide financial assurances to ensure mitigation is completed, according to officials in the five district offices that approve permits in the four states we reviewed. Corps officials said they have not required financial assurances for the following reasons:
• The agency does not have statutory authority to directly hold and use performance bonds to ensure that mitigation is completed. Officials said that if they did require financial assurances, an operator would need to identify a third party to hold the assurances and complete the mitigation if the operator does not. Some Corps officials said, however, that few third parties with the ability to conduct stream restoration have been available.
• The mine operators have had sufficient capital to complete required mitigation or have demonstrated their ability to successfully complete other mitigation work.
• It is assumed that mine operators will comply with compensatory mitigation requirements without financial assurances.
• The operators’ approved mitigation projects are not yet complete, and therefore the Corps has no evidence that these projects will be unsuccessful.
Corps officials told us the Corps has relied on mechanisms other than financial assurances to ensure that mitigation associated with valley fill permits will be satisfactorily completed. Specifically, one mechanism may require the operator, under the terms of its permit, to prepare an adaptive management plan. Such a plan would identify alternative mitigation actions the operator would take in the event that elements of the original plan did not succeed. In addition to an adaptive management plan, the Corps may require a permit to include a contingency plan that identifies acceptable alternative compensatory mitigation should the approved mitigation project fail. A contingency plan could require that the operator purchase mitigation credits from an in-lieu-fee program if the planned mitigation does not succeed. Some Corps officials also told us that the SMCRA bond could be used to cover the mitigation required under section 404, but others disagreed.
According to a Norfolk, Virginia district Corps official, when off-site mitigation is part of the 404 permit, the Virginia state mining agency will expand the area covered by the SMCRA bond beyond the mine area to include land on which the 404 mitigation is to be done. The Norfolk, Virginia district official stated that this practice is consistent with the Corps’ 2004 mitigation policy for surface mining operations. This policy encourages district engineers to coordinate with state or OSM staff and the mining operators to incorporate required SMCRA features—such as drainage ditches and sediment ponds—into section 404 compensatory mitigation plans. On the other hand, Corps officials in Huntington, West Virginia, said they consider the SMCRA bond as a financial assurance only for mitigation projects done on the surface mine site. In further contrast, a Corps district official we spoke with in Louisville, Kentucky, does not consider the SMCRA bond to be an assurance for on-site section 404 mitigation because the goals of reclamation and mitigation are not always the same. According to Corps headquarters officials, the district offices have the discretion to decide if SMCRA mitigation projects qualify as section 404 mitigation. Officials from OSM’s Appalachian region and field offices agreed that on-site section 404 compensatory mitigation can be incorporated as a special condition of the surface mining reclamation plan in a SMCRA permit. OSM, the states’ mining or environmental agencies, EPA, and the Corps are not required to monitor former mountaintop mines with valley fills for long-term environmental degradation after reclamation and mitigation are complete and financial assurances have been released. While the agencies are not required to collect post-reclamation monitoring data, several have analyzed conditions near reclaimed mine sites with valley fills and found that (1) reforestation efforts at some reclaimed surface coal mine sites needed improvement, (2) some surface coal mine sites have contaminated streams and harmed aquatic organisms, (3) a link exists between valley fills and changes to water flow, and (4) mine operators have not always returned mine sites to their approximate original contour when required to do so under SMCRA. Several federal and state agencies have taken some actions to respond to these findings. Federal and state agencies in the four Appalachian states we reviewed are not required by SMCRA or the Clean Water Act to monitor mine sites with valley fills or associated mitigation sites after they have determined that reclamation and mitigation are complete. Most officials we interviewed at the federal and state mining and environmental protection agencies in the four states we reviewed said post-reclamation or post-mitigation monitoring is not needed, with officials from several agencies explaining that the laws or their implementing regulations require adequate monitoring before an agency can determine that either reclamation or mitigation is complete. For example, in order to obtain bond release under SMCRA, mine operators must be able to demonstrate to agency inspectors that revegetation, water quality, and other standards are being met. Generally, this period is 5 years after the last reclamation activity. Officials from EPA and the state departments of environmental protection also told us that they do not monitor mine sites for water pollution discharges after they have been reclaimed. 
In order to achieve bond release, according to OSM and state officials, the operator typically removes and reclaims all sediment ponds that are subject to section 402 discharge permits and must demonstrate that discharge limits have not been exceeded for a year. Therefore, once the bond has been released, officials would no longer have a reason to monitor the site for section 402 permit violations. Officials from two Corps districts said it is sufficient that the Corps requires operators to monitor and report on mitigation sites for 5 to 10 years before it determines that the mitigation is complete. In addition, officials from three Corps district offices told us that because they did not begin to consistently issue section 404 permits for valley fills until 2002, few mitigation projects have been in place long enough to have been completed and thus are not available for post-mitigation review. While the agency officials we spoke with generally said that additional monitoring is not necessary after reclamation and mitigation are complete, some said that additional monitoring is needed to evaluate the long-term effectiveness of those activities. Specifically, officials from EPA’s Office of Water and region 3 and 4 offices said that they believe monitoring has not been adequate to document the success of section 404 mitigation projects. Officials from the U.S. Geological Survey Water Science Center in West Virginia told us that additional long-term monitoring is needed to collect data on a range of issues, including water contamination, flooding, and land stability. Several agencies have conducted or funded studies that show some evidence of environmental changes associated with mountaintop mines with valley fills after reclamation. The majority of the studies that agencies referred us to were done as part of the 2003 draft multiagency programmatic environmental impact statement (PEIS) on mountaintop mining and valley fills. Among the concerns raised by these studies were reforestation efforts, effects of mining on aquatic organisms, the relationship between valley fills and floods, and reclamation to the approximate original contour. Several agencies have taken actions in response to some of these concerns, such as promoting new reforestation methods. OSM and state mining agencies have found that reclamation efforts on mountaintop mines and valley fill sites could be improved to yield more successful reforestation. For example, the 2003 draft PEIS noted that previously forested mountaintop mine sites were more likely to have been revegetated with grasses than with trees. One PEIS study compared revegetation at a sample of southern West Virginia mountaintop removal and valley fill mining sites with adjacent unmined sites; the revegetation had occurred from 8 to 26 years prior to the study, and therefore the operators had probably obtained bond release. According to the study, poor vegetation development with time was typical of the reclaimed sites, with significantly lower tree diversity on the mined sites than in adjacent forests. The study found that its data and other published studies supported the conclusion that mining reclamation procedures limit the overall ecological health and inhibit the desired growth of native tree and shrub species on the site. With regard to the study in the draft PEIS, OSM officials told us that SMCRA permits do not always call for reforestation.
For example, a mine site might be approved for reclamation as pasture or commercial development. Therefore, reclaimed mine sites may not need to become forested to meet SMCRA requirements. In June 2008, OSM issued a policy directive to promote the reestablishment of forest land where existing forests had been removed by surface mining. In its directive, and in related advisory documents, OSM noted that past reclamation and revegetation efforts had not been fully successful and had led to low rates of tree survival and growth, forest fragmentation, reduced carbon sequestration, loss of wildlife habitat and forest products, and increased potential for floods. To reverse this trend, the directive encourages, but does not require, the widespread and routine planting of native, high-value trees that should help restore the uses and ecosystems provided by forests prior to mining. The directive also encourages mine operators to avoid compacting the top 4 feet of soil on reclaimed mine sites in order to promote water infiltration and tree growth. The OSM directive is part of a broader effort known as the Appalachian Regional Reforestation Initiative—formed in 2004 by federal and state agencies, the coal industry, environmental organizations, and others in the Appalachian region—to promote improved reforestation techniques on surface-mined lands. Officials from Kentucky, Virginia, and West Virginia told us that the OSM initiative built upon changes in reforestation policy or regulation at the state level. An OSM Appalachian Region official told us that while he believes the use of these techniques is increasing, reliable data showing the acres of mined land planted using these techniques are not available. According to this official, OSM is working with participants in the reforestation initiative on methods for assessing success. According to the 2003 draft PEIS, approximately 1,200 miles of headwater streams within the boundaries of mining permits (or 2 percent of the streams in the central Appalachian study area) were directly affected by mountaintop mining and valley fills. For example, streams below valley fills were characterized by contaminants discharged from mine sites as well as less diverse and more pollutant-tolerant aquatic invertebrates and fish. Furthermore, in some locations where mountaintop mines and valley fills exist, stream concentrations of selenium, a potentially toxic element that accumulates in aquatic organisms, were found to exceed standards. In 2008, EPA scientists reported that aquatic life downstream from 27 active and reclaimed mountaintop mines with valley fills showed subtle to severe effects compared with aquatic life in similar but unmined West Virginia watersheds. More specifically, the authors compared three reclaimed mine sites with three unmined sites over a period of 6 to 7 years. According to the study, two of the three reclaimed mine sites showed further degradation of aquatic organisms over the period while the third showed some improvement, but in every case the reclaimed sites were impaired compared with the unmined sites. EPA has cited the 2008 study, as well as other analyses, in recent actions that it has taken on section 404 permits for valley fills. In September 2009, EPA announced its plan for the “enhanced coordinated review” of 79 section 404 permit applications for surface mines with valley fills pending with the Corps.
In making its announcement, EPA stated, among other things, that on the basis of the scientific literature, its field experience, and available project information, it was concerned that the mitigation proposed may not be sufficient to replace lost aquatic resources. On the other hand, Corps officials told us that they believe that the scientific literature EPA referred to is not complete; specifically, that it lacks adequate site-specific analysis. Also in September 2009, EPA asked the Corps to reconsider a section 404 permit that it issued in 2007 for the Spruce No. 1 mine in West Virginia with planned valley fills that, if built, would fill more than 8 miles of headwater streams. EPA expressed concerns that the Corps decision to issue the permit did not reflect studies showing that impairments from surface coal mining are persistent over time and cannot be easily mitigated or removed. EPA also raised specific concerns about the mitigation plan in the issued permit, including the planned use of drainage ditches—such as might be constructed at the perimeter of valley fills—as compensatory stream channels. EPA said that it has consistently objected to the use of these ditches as compensation for lost headwater stream channels and requested that the Corps re-evaluate the mitigation plan to ensure that it achieves functional replacement of lost aquatic resources. On September 30, 2009, the Corps’ district engineer in Huntington, West Virginia, responded to EPA, noting that the decision to issue the permit had followed extensive coordination with EPA for nearly 10 years concerning the project’s scope, alternatives, and compensatory mitigation and included the preparation of an Environmental Impact Statement. Furthermore, the district engineer said that there were no factors at that time that compelled him to consider suspending, modifying, or revoking the permit. However, EPA’s acting regional administrator for Region 3 wrote to the Corps on October 16, 2009, that additional modifications would need to be made if the permit were to comply with the Clean Water Act and the regulations implementing the act. EPA is preparing additional analysis of the impacts of mountaintop mining sites, including reclaimed sites, on water quality and aquatic life. EPA’s Office of Research and Development plans to release for public comment a draft assessment in early 2010 that evaluates restoration and recovery methods that mining companies use to address the ecological impacts associated with mountaintop mining and valley fills. EPA plans to prepare the assessment with advice from an expert panel chartered under the Federal Advisory Committee Act. Federal and state agencies examining the impact of mountaintop mines with valley fills have found that in streams downstream from these sites, low flows are usually increased and storm flows are sometimes increased. For example, according to the 2003 draft PEIS, streams in watersheds below valley fills tended to have greater base flows. Streams with fills were generally less likely to experience increases in peak flow than unmined areas during most storms. However, they were more likely to experience increases in peak flow during more intense rainfall events. Consequently, the draft PEIS concluded that water flows may increase below valley fills, but that the effects are site-specific. This conclusion was derived, at least in part, from studies by the U.S. 
Geological Survey, which compared changes in water flow in watersheds with valley fills (some of which had been reclaimed) with watersheds without valley fills. In addition, the state of West Virginia has examined the extent to which mining activities may have contributed to flooding associated with a particular storm event. On July 8, 2001, the southern portion of West Virginia experienced a major rainstorm that produced disastrous flooding. This flooding damaged or destroyed hundreds of homes and many businesses. Most of the affected counties are in the heart of West Virginia’s southern coalfields and have extensive underground and surface mining activities. Logging is also prevalent in this region. In response to public concerns, the governor created a Flood Investigation Advisory Committee and a Flood Analysis Technical Team to focus specifically on the impacts of the mining and logging industry on the July 8th flooding. The team compared two watersheds with extensive mining (and logging) activities, including valley fills, with a third watershed with no such activities. In general, according to the team, the contributions of mining and logging to increased water flow were relatively small when compared to the total stream flow volumes. It concluded, however, that mining and logging influenced the studied watersheds by increasing surface water runoff and the resulting stream flows at various evaluation points. Consequently, the flood analysis technical team recommended that, among other things, the state revise its regulations to prohibit any increase in surface water discharge over pre-mining conditions and modify certain requirements for valley fill construction. In 2003, the state received OSM’s approval to revise its mining regulations to require that permit applications contain a storm water runoff analysis and that the worst case during mining and post-mining evaluations must show no net increase in peak runoff compared with the pre-mining evaluation. According to the Secretary of the West Virginia Department of Environmental Protection, the state has also modified its valley fill construction rules to further ensure no flooding potential in times of short, intense runoff from flash storms. These modifications include engineering requirements to help ensure the stability of the valley fill. Returning spoil material to a mined out area in order to approximate the original contour and elevation of the mountain helps to reduce the amount of excess spoil that otherwise might be placed in a valley fill. As we reported in December 2009, most operators in West Virginia and Kentucky have not requested a variance from this requirement. However, according to OSM studies in 1999 and 2001 of West Virginia and Kentucky’s implementation of the approximate original contour standard, some reclaimed sites where the operator was supposed to return the land to approximate original contour differed little from sites that had been granted variances. OSM also reported in 1999 that most mountaintop removal projects in Virginia were reclaimed to a configuration closely resembling the approximate original contour, even when the state had granted a variance to the operator. Following those findings, the states issued new guidance on how to achieve approximate original contour. In 2007 and 2008, OSM reviewed the effectiveness of the states’ new contour policies and procedures; the results of those reviews were not available as of November 2009. 
In October 2009, OSM’s acting director instructed the field offices to assess all the states’ implementation of approximate original contour standards starting in 2010. Several federal laws may be available, under limited circumstances, to address environmental problems associated with mountaintop mines with valley fills after SMCRA or Clean Water Act financial assurances have expired, but these have rarely been needed or used, according to federal and state officials. We selected four federal laws for analysis in this regard: SMCRA; the Clean Water Act; the Comprehensive Environmental Response Compensation and Liability Act (CERCLA), also commonly known as Superfund; and the Resource Conservation and Recovery Act (RCRA). OSM and state mining agencies can use additional SMCRA provisions under two limited sets of circumstances to address environmental problems at former mine sites. First, SMCRA regulations require a mining agency to reassert jurisdiction over a mine site after a bond release if it can demonstrate that the release was based on the operator’s fraud, collusion, or misrepresentation of a material fact. According to OSM, reassertion of jurisdiction could involve reopening the permit and requiring a new bond. However, OSM and state officials reported to us that they have rarely needed to use this authority. For example, OSM told us that it had reasserted jurisdiction on one post-bond release site in West Virginia that was discharging pollution after the agency successfully argued in court that the company had misrepresented material facts when the bond was released. Second, SMCRA authorizes OSM and approved states to use funds from OSM’s Abandoned Mine Land Fund to reclaim some sites. SMCRA established the fund to reclaim certain sites mined prior to SMCRA’s passage in 1977. However, amendments to SMCRA have made these funds available for additional projects. Specifically, OSM and primacy states can use these funds to reclaim sites for which any bond or other source of funds is insufficient for reclamation when (1) mining occurred between the enactment of SMCRA and OSM approval of a state program or (2) mining occurred between the enactment of SMCRA and its amendment in 1990 and the mine operator’s surety has become insolvent. Moreover, these funds must be used to rectify situations posing extreme danger or adverse effects to public health and safety before they are used to restore environmental resources. Funds for carrying out these purposes are generated by a tax on coal production and may also be generated by penalties assessed for violations of SMCRA. OSM officials told us that each year a small amount of civil penalty money is available for any state that requests it, on a competitive basis for site reclamation and that the agency has used these funds in the past for as many as four inadequately reclaimed mine sites each year. Two provisions of the Clean Water Act authorize EPA or state water quality regulators to address or monitor water quality issues associated with former mine sites. First, the act authorizes EPA or EPA-authorized states to regulate discharges of pollutants from point sources by issuing and enforcing National Pollutant Discharge Elimination System section 402 permits that include limits on discharges of specific pollutants. According to EPA officials, a point source at a mining site could be, for example, a ditch draining a sediment pond at the base of a valley fill. 
Mine operators typically remove such point sources prior to receiving full bond release. However, in some circumstances, sediment ponds and associated drainage ditches may be authorized to remain on site if provisions for ongoing maintenance of the pond are made. If, after bond release, conditions at the former mine site change so that pollutants are being discharged from a point source, the party responsible for maintaining the point source—which could be the former mine operator or the landowner of the mine site—would have to obtain a section 402 permit and would be subject to applicable pollutant discharge limitations. EPA officials emphasized that a point source may remain after bond release and that the requirement to maintain a permit for any such remaining point source would be indefinite. However, state officials told us that they have rarely, if ever, needed to use this Clean Water Act authority to require a new permit for a point source at a surface coal mine. Second, the Clean Water Act requires states to identify impaired waters and to develop “total maximum daily loads” (TMDLs) for impaired waters. States may be able to use information on impaired waters to indirectly mitigate latent pollution associated with former surface coal mine sites. Specifically, if the state determines that a water body is impaired, it must eventually develop, for each pollutant causing an impairment, a TMDL—the amount of the pollutant that the water body can receive, taking into account seasonal variations and a margin of safety, and still meet the water quality standard applicable to that body of water. To implement a TMDL, states allocate pollutant loadings among specific sources, such as mines, and incorporate the loads into the state’s water quality management plans and section 402 permits. Thus, if a proposed mine would cause a body of water to exceed its TMDL for a given pollutant, the state may, among other things, impose stricter discharge limits in that site’s section 402 permit in order to achieve water quality standards. In addition, the Corps and EPA may use the information on impaired waters in considering whether a section 404 permit for a valley fill operation should be issued. For example, in raising concerns regarding the Corps’ permit for the Spruce No. 1 mine in West Virginia in 2007, EPA cited the existence of a TMDL in the mine’s watershed; EPA’s decision on whether to veto this permit was pending as of October 2009. The states we reviewed have identified mining as a general cause of impairment for certain bodies of water, but they have not attributed such impairments to specific mine sites. For example, West Virginia’s 2006 Water Quality Assessment Report identified coal mining as a probable source of impairments for about 4,066 miles of streams in the state, but did not identify specific mining permits as a source. The sketch below illustrates the basic TMDL allocation arithmetic.
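To make the allocation step concrete, the following sketch uses the decomposition commonly applied to TMDLs, in which the total allowable load is divided among wasteload allocations (WLAs) for point sources, load allocations (LAs) for nonpoint sources, and a margin of safety (MOS). All figures are hypothetical, and actual TMDLs account for seasonal variation and many site-specific factors not modeled here.

```python
# A simplified sketch of TMDL arithmetic, commonly expressed as:
#     TMDL = sum(WLA) + sum(LA) + MOS
# All numbers below are hypothetical illustrations.

tmdl = 100.0           # total allowable daily load, e.g., pounds of a pollutant
margin_of_safety = 10.0
nonpoint_loads = 40.0  # load allocation for nonpoint sources

# The remainder can be divided among permitted point sources, such as
# discharges covered by section 402 permits at mine sites.
available_for_point_sources = tmdl - margin_of_safety - nonpoint_loads

existing_point_loads = {"mine_A_outfall": 30.0, "other_discharger": 15.0}
remaining = available_for_point_sources - sum(existing_point_loads.values())
print(f"Load available to a newly permitted source: {remaining} lb/day")  # 5.0

# If a proposed mine's expected discharge exceeded the remaining allocation,
# the state could impose stricter limits in the mine's section 402 permit.
```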
CERCLA, commonly known as Superfund, authorizes, but does not require, EPA to respond to the release or threatened release of hazardous substances from a former surface coal mine. Whether a particular release from a former mine involves a hazardous substance must be determined on a case-by-case basis. Some of the pollutants commonly associated with coal or coal mining, such as selenium, are considered hazardous substances under CERCLA. CERCLA allows the government to collect the costs of mitigating or cleaning up these substances from responsible parties. However, EPA officials said that the agency has not used CERCLA authority to respond to mine pollution released from a former surface coal mine site. EPA has noted that coal contains trace amounts of hazardous substances, but that such amounts as may be released over time from a former surface mine might not rise to the level that would trigger an EPA response. As currently implemented, the hazardous waste provisions of RCRA would not generally be available to address environmental issues at former surface coal mines because many of the wastes associated with the extraction, processing, and combustion of coal have been exempted from the definition of hazardous waste. However, concern over one particular coal by-product, coal combustion residue, may lead to regulation of the material as a hazardous waste in the future. Coal combustion residue—the material that is left once coal has been burned, as in a power plant—is sometimes placed on surface mines to abate acid mine drainage. According to OSM, the residue may also be used to enhance soil, seal and encapsulate material, and backfill mine sites. If coal combustion residue were deemed a hazardous waste, surface mines receiving such materials might be subjected to RCRA’s hazardous waste provisions and could be forced to address releases of hazardous wastes. Currently, EPA is developing regulations on managing coal combustion residue, including residue managed in surface impoundments, such as the one that failed in Tennessee in December 2008. EPA is considering a number of approaches for regulating coal combustion residue, including using the solid waste provisions of RCRA, or a combination of the solid and hazardous waste provisions of RCRA. We provided a draft of this report to the Department of the Interior, the Department of Defense, and the Environmental Protection Agency for review and comment. We also provided a draft of this report to the Kentucky Department for Natural Resources; the Virginia Department of Mines, Minerals and Energy; and the West Virginia Department of Environmental Protection. The three federal agencies generally agreed with our findings, while the three state agencies were critical of what they perceived to be the message of the report. The Department of the Interior said that it believed the report is an informative and fair characterization of the federal and state program requirements under SMCRA pertaining to financial assurances in the four states we reviewed. The Department of Defense said that, in general, it believed the report is informative and provides a good discussion of the issues involved in financial assurances for surface coal mining in Appalachia. The Environmental Protection Agency noted that the report provides a factual presentation of issues associated with the review and regulation of surface coal mining practices. The agency also noted that the data presented in this and a December 2009 GAO report provide helpful context for federal and state agencies as they continue to work together to address both the near- and long-term consequences of surface coal mining activities on the environment, water quality, and Appalachian coalfield communities. The three state agencies’ comments were critical of the draft report. For example, Kentucky commented that it believed the report is overly broad in its generalized statements, that terms and phrases are used interchangeably so as to confuse the issues, and that the report is written in a manner to misrepresent and sensationalize the issues.
We do not agree that the report misrepresents or sensationalizes the issues, and have reviewed our use of terms—such as mountaintop mining, mountaintop removal mining, valley fills, and hollow fills—to ensure that they are used consistently and appropriately throughout the report. Virginia commented that the report appears to be based on an assumption that there are post-bond release pollution discharges below valley fills, and that it was concerned with our use of an EPA study (by Pond, Passmore, et al.) to support the point that such discharges may occur. The state also noted that pollution problems that may occur are likely to be site-specific. We disagree with Virginia’s characterization of our report because we did not assume that there are post-bond release pollution discharges below valley fills. In fact, our report notes that there is little monitoring of sites after bond release, thereby making it difficult to assess post-bond release conditions. Nevertheless, we recognize in the report that there is some evidence, including in the EPA study, that such problems may occur. We agree that problems, if they occur, are likely to be site-specific. West Virginia noted that all coal mines—not just Appalachian mines with valley fills—are subject to SMCRA and the Clean Water Act. The state also commented that the report seemed to imply that there is a bonding or financial assurance problem in the four Appalachian states we reviewed and that surface coal mines with valley fills are the only mines that have the potential to cause environmental harm. West Virginia also commented that the report implied that the monitoring period before bond release should be longer. While we recognize that other types of coal mining are subject to these laws and may affect the environment, our report focused on surface coal mining with valley fills. The four states we reviewed have more than 98 percent of the recently approved valley fills across the country. In addition, our report contained no conclusions about the adequacy of the bonding programs in the four states or the length of the monitoring period; instead, we attempted to present information on the requirements of the relevant laws. Although West Virginia commented that the report did not give full credit to the state for improvements it has made in reforestation, approximate original contour, and surface water runoff practices, it did not provide any additional information to support these statements. The report does provide information on actions taken by the state in these areas. We present the agencies’ letters containing their general comments, along with our responses to them, as necessary, in appendixes III through VIII. The agencies, with the exception of EPA, also provided technical comments that we incorporated into the report, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution for 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretaries of the Interior and Defense, and the Administrator of the Environmental Protection Agency. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IX. This appendix details the methods we used to examine (1) the approaches the Office of Surface Mining (OSM), the states we reviewed, and the Army Corps of Engineers (Corps) have taken to obtain financial assurances for surface coal mines with valley fills; (2) the extent to which federal and state agencies monitor and evaluate these mines after reclamation and mitigation are complete; and (3) the federal laws agencies may use, and have used, to address any latent environmental problems associated with these mines that may occur after Surface Mining Control and Reclamation Act (SMCRA) or Clean Water Act financial assurances have expired. This report focused on the four Appalachian states of Kentucky, Tennessee, Virginia, and West Virginia because these areas account for more than 83 percent of the surface coal production in Appalachia and more than 98 percent of recently approved valley fills across the country. The data on coal production are from the Energy Information Administration and can be found at http://www.eis.doe.gov/cneaf/coal/page/arc/table1.html. The data on valley fills are based on permits approved from October 1, 2001, through June 30, 2005, as reported in the Department of the Interior, Office of Surface Mining Reclamation and Enforcement, Environmental Impact Statement: Proposed Revisions to the Permanent Program Regulations Implementing the Surface Mining Control and Reclamation Act of 1977 Concerning the Creation and Disposal of Excess Spoil and Coal Mine Waste and Stream Buffer Zones, OSM-EIS-34 (2008). We also gathered background data on valley fills approved in the four states from January 1, 2000, through various dates in mid-2008 to mid-2009. The data from Kentucky and West Virginia are drawn from GAO-10-21, Surface Coal Mining: Characteristics of Mining in Mountainous Areas of Kentucky and West Virginia. Neither Virginia nor Tennessee maintained valley fill data in electronic form. State officials provided fill data for Virginia and OSM officials provided fill data for Tennessee by reviewing hardcopy permits issued since 2000. We interviewed state and OSM officials about the reliability of the data they provided and compared their results to OSM’s 2008 environmental impact statement on excess spoil and stream buffer zones. We determined the data were sufficiently reliable for our purposes. To address each of the objectives, we obtained documents from and interviewed officials at several federal and state agencies. These included officials in the Department of the Interior’s OSM in (1) headquarters; (2) Appalachian Regional Office in Pittsburgh, Pennsylvania; and (3) field offices in Lexington, Kentucky; Knoxville, Tennessee; Charleston, West Virginia; and Big Stone Gap, Virginia. The OSM field office in Knoxville manages the mining program in Tennessee. We also interviewed and obtained information from officials in the Environmental Protection Agency (EPA) headquarters and regional offices in Philadelphia, Pennsylvania (Region 3), and Atlanta, Georgia (Region 4); officials in the U.S. Geological Survey; and officials in the Corps of Engineers headquarters and district offices in Louisville, Kentucky; Pittsburgh, Pennsylvania; Nashville, Tennessee; Norfolk, Virginia; and Huntington, West Virginia. Those five district offices are responsible for issuing and enforcing the Clean Water Act section 404 permits to surface mines in the states of Kentucky, Tennessee, Virginia, and West Virginia.
Moreover, we interviewed and obtained information from the following state agencies in the four states we reviewed: the Kentucky Department for Natural Resources; Kentucky Division of Water; Tennessee Department of Environment and Conservation; Virginia Department of Mines, Minerals and Energy; Virginia Department of Environmental Quality; and West Virginia Department of Environmental Protection. To describe the approaches OSM, the states, and the Corps have taken to obtain financial assurances for surface coal mines with valley fills, we reviewed relevant sections of SMCRA and OSM’s implementing regulations and policy guidance to identify national requirements for financial assurances associated with surface mining reclamation. We also reviewed state mining laws in the three states that have primacy for administering SMCRA—Kentucky, Virginia, and West Virginia—as well as those states’ mining agency implementing regulations and policy guidance, to identify the states’ approaches to financial assurances for surface mining reclamation established in accordance with the federal standards. We also spoke with officials from OSM headquarters, the Appalachian Regional Office, and field offices, as well as officials from the state mining agencies in Kentucky, Virginia, and West Virginia. We spoke with officials from the OSM field office in Knoxville to discuss financial assurances in Tennessee because these officials manage the mining program in that state. We also reviewed section 404 of the Clean Water Act and the Corps’ implementing regulations and policy guidance to identify requirements and policy for financial assurances associated with compensatory mitigation projects. In addition, we contacted Corps officials in the headquarters and the five district offices to identify the extent to which the Corps has included financial assurance requirements in permits it has issued to surface mines for valley fills. We also interviewed officials from the EPA to identify their role and responsibility for overseeing section 404 permits. To examine the extent to which federal and state agencies monitor and evaluate surface coal mines with valley fills after reclamation and mitigation are complete, we obtained information from and interviewed officials in OSM’s Appalachian Regional Office and field offices, as well as state officials at the mining agencies in Kentucky, Virginia, and West Virginia to identify any routine monitoring and “one-time” evaluations that these agencies have done of mine sites to assess the long-term environmental impact of the reclamation after the SMCRA reclamation bonds have been released. We also interviewed and obtained information from officials in the Corps’ five district offices to identify any routine monitoring the Corps has done of mitigation projects after determining that operators have completed their mitigation obligations or any specific studies of completed surface coal mine mitigation projects. In addition, we interviewed and obtained information from officials in EPA’s Office of Water in headquarters and regions 3 and 4; the U.S. Geological Survey; and state water quality regulators in Kentucky, Tennessee, Virginia, and West Virginia regarding any monitoring or evaluation of the long-term environmental impact of former surface mines with valley fills. Among the 11 federal and state agencies that we interviewed, none replied that they had done routine monitoring of this nature, and most replied that they had not done any “one-time” studies. 
The few agencies that replied they had done one-time studies referred us primarily to studies completed as part of the 2003 draft multiagency programmatic environmental impact statement (PEIS). OSM’s 2008 final environmental impact statement on proposed regulations for excess spoil management also generally cited the 2003 draft PEIS as a source of information on the environmental impacts of valley fills. The federal and state agencies that collaborated on the draft PEIS conducted or funded more than 30 studies of the impacts of mountaintop mining and associated valley fills and used them as support for evaluating the impacts of various programmatic alternatives. With these facts in mind, we relied heavily on the conclusions that the authors of the draft PEIS drew concerning a number of environmental impacts, including reforestation, water quality and impacts on aquatic organisms, and water flow. We also cited more recent studies provided to us by agency officials, such as a 2008 study by EPA Region 3 on water quality and aquatic organisms near valley fills. Also, during the course of our review, we learned from OSM officials about OSM’s evaluation of mine operators’ compliance with approximate original contour policies in Kentucky, Virginia, and West Virginia. We reported the results of those evaluations because of their relevance to the construction of valley fills. To examine the federal laws agencies may use, and have used, to address any latent environmental problems associated with surface mines with valley fills that may occur after SMCRA or Clean Water Act financial assurances have expired, we analyzed SMCRA and the Clean Water Act and identified provisions that provide mining agencies and water quality regulators authority to address environmental problems on a former mine site after SMCRA bonds have been released. We also interviewed officials from OSM, state mining agencies, and state water quality regulators in the four states we reviewed to learn the extent to which these authorities have been used in the past to address any environmental problems that may have occurred on or been caused by a former mine site with valley fills. In addition, we analyzed two other federal environmental laws—the Comprehensive Environmental Response Compensation and Liability Act (CERCLA, also known as Superfund) and the Resource Conservation and Recovery Act (RCRA)—to identify provisions that may authorize or require EPA to address environmental problems that may occur on or be caused by a former surface mine after bonds have been released. We interviewed officials from EPA’s Office of Solid Waste and Emergency Response to learn if CERCLA had been used in the past in that context. We also reviewed an EPA regulatory determination published in 2000 on whether regulation of coal combustion residue was warranted under the hazardous waste provisions of RCRA. We conducted this engagement from October 2008 to January 2010 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this report.
The Surface Mining Control and Reclamation Act (SMCRA) requires that mined land be reclaimed consistent with environmental performance standards, including making the land available for post-mining uses. The SMCRA permit process requires operators to submit detailed plans describing the extent of the proposed mining operations and how reclamation will be achieved. In reclaiming the land, operators must comply with regulatory standards that govern, among other things, the final contour of the reclaimed area, the revegetation of reclaimed mine sites, and the quality of water leaving the mine site. This appendix describes these key reclamation standards. In general, mountaintop mine operators are required to return mine sites to their approximate original contour (AOC) unless the operator receives a variance from the regulatory authority. This means that the surface configuration achieved by backfilling and grading of the mined area must closely resemble the general surface configuration of the land prior to mining and blend into and complement the drainage pattern of the surrounding terrain, with all highwalls and spoil piles eliminated. The Office of Surface Mining (OSM) and the states may grant a variance from the requirement to return the site to AOC—meaning that the land would be left relatively flat—in certain circumstances, including those in which the operator can demonstrate that the site will be suitable for certain post-mining land uses. According to OSM, these variances present an opportunity to create relatively flat, flood-free land capable of supporting economic development. In our recent report on trends in mountaintop mining, we reported that variances from the AOC requirement have been relatively rare in Kentucky and West Virginia. A purpose of SMCRA is to assure that adequate procedures are undertaken to reclaim surface areas as contemporaneously as possible with the surface coal mining operations. OSM and the states require that backfilling and grading begin within a certain number of days after coal removal in a particular area. OSM and state law and regulations for mine reclamation also address how sites are to be revegetated after they have been backfilled and graded. To obtain bond release under SMCRA, mine operators must show successful revegetation 5 full years after the last year of augmented seeding, fertilizing, irrigation, or other work. What is planted depends on the approved post-mining land use, such as forestry or hayland and pasture. State regulations set forth different requirements for factors including plant species, variety, density, and coverage for different post-mining land uses. The states have standards for the extent of vegetation that must be initially planted and how much must survive in order to receive bond release. For example, West Virginia’s regulations call for mine sites with a forest land post-mining land use to be planted with at least 500 woody plants per acre. This is to include at least 350 trees and 150 shrubs. The state specifies that at least 5 species of trees be used, including at least 3 higher-value hardwoods such as oak, ash, or maple. The state also specifies a minimum success standard of at least 450 trees and shrubs per acre and a 70-percent ground cover. The sketch following this paragraph illustrates how these numerical planting and success standards fit together.
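The following sketch encodes West Virginia’s forest land planting and success standards as summarized above. The function and its inputs are hypothetical; actual bond release determinations rest on field sampling and judgments by state inspectors.

```python
# An illustrative check of the West Virginia forest-land revegetation
# standards described above: 500 woody plants per acre planted (at least
# 350 trees and 150 shrubs), at least 5 tree species including 3
# higher-value hardwoods, and a success standard of at least 450 surviving
# trees and shrubs per acre with 70 percent ground cover.

def meets_wv_forestland_standards(trees_per_acre, shrubs_per_acre,
                                  tree_species, high_value_hardwoods,
                                  surviving_per_acre, ground_cover_pct):
    planted_ok = (trees_per_acre >= 350 and shrubs_per_acre >= 150
                  and trees_per_acre + shrubs_per_acre >= 500)
    species_ok = tree_species >= 5 and high_value_hardwoods >= 3
    success_ok = surviving_per_acre >= 450 and ground_cover_pct >= 70
    return planted_ok and species_ok and success_ok

# A site planted with 400 trees and 150 shrubs per acre, using 6 tree
# species (3 of them oak, ash, or maple), with 460 survivors per acre and
# 75 percent ground cover would satisfy these illustrative criteria.
print(meets_wv_forestland_standards(400, 150, 6, 3, 460, 75))  # True
```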
SMCRA requires that mine operators’ bonds be of an amount sufficient to ensure the completion of the site’s reclamation plan by the regulatory authority, which includes compliance with water quality standards. These standards include those established by EPA or the states under the Clean Water Act and referenced by SMCRA. Each reclamation plan is to include a detailed description of the measures to be taken during the mining and reclamation process to ensure the protection of the quality of surface and ground water systems, both on- and off-site, from adverse effects of the mining and reclamation process. OSM has stated that a reclamation bond may not be released where active or passive water treatment systems are being used to achieve compliance with applicable standards. SMCRA regulations contain specific water protection requirements. The regulations include requirements that all surface mining and reclamation activities be conducted to minimize disturbance of the hydrologic balance within the permit and adjacent areas and to prevent material damage to the hydrologic balance outside the permit area. The hydrologic balance requirements include standards for water quality and effluent limitations, sediment control, siltation and discharge structures, and activities in or adjacent to perennial or intermittent streams. Permit applicants must submit a probable hydrologic consequences determination with their permit application as well as a hydrologic reclamation plan indicating how any probable hydrologic consequences will be prevented or remediated, including how the general hydrologic balance requirements will be met. In addition, the regulations that address backfilling and grading require operators to cover acid- or toxic-forming materials with a minimum of 4 feet of nontoxic material, or treat the material to neutralize its toxicity in order to prevent water pollution. With regard to excess spoil used as fill material, the regulations require that leachate and surface runoff from the fill will not degrade surface or ground waters or exceed effluent limitations set for iron, manganese, total suspended solids, and pH. The regulations also require that slopes be protected to minimize surface erosion at the site and that the fill be designed using recognized professional standards, certified by a registered professional engineer, and approved by the regulatory authority. The following are GAO’s comments on the letter dated December 15, 2009, from the Assistant Secretary of the Army, Civil Works. 1. While we appreciate the Army Corps of Engineers’ (Corps) sensitivity to the litigation associated with the Spruce mine, we do not feel that any change to our report is warranted. We do not specifically discuss the litigation, which was brought by environmental groups against the Corps, but rather an ancillary conflict between the Corps and the Environmental Protection Agency (EPA). Our brief discussion of the matter presents both sides of the conflict between EPA and the Corps using the agencies’ own words sourced wholly from publicly available documents and refrains from making any conclusions as to the merits of the case. 2. We disagree with the Corps’ comment that a discussion of projects subject to the enhanced coordination procedure and the Spruce mine is irrelevant to the objectives of our study. Both of these points are relevant to our second objective, which asks us to describe the extent to which federal and state agencies monitor and evaluate the impacts of surface coal mining activities. Both the enhanced coordination procedure and the Spruce mine case provide examples of how federal regulators are using studies that we discuss in the report.
Therefore, we did not revise the report in response to this comment. The following are GAO’s comments on the letter dated December 17, 2009, from the Commissioner, Department for Natural Resources. 1. We do not agree that the report misrepresents or sensationalizes the issues; however, we do agree that it is important to be accurate and use correct terminology. Throughout the report we have strived to be accurate and have been careful to consistently and accurately use terms and phrases that are commonly used in regulation or the coal mining literature. In its comment, the state did not provide specific examples of what it believes are inaccurate facts or inappropriate terms. However, subsequent comments from the state referred to our use of the terms mountaintop mining, mountaintop removal mining, valley fills, and hollow fills. We have reviewed our use of these terms throughout the report to ensure that they are used consistently and appropriately. 2. The state is referring to our practice of holding “exit conferences” near the end of our review. Our policy is to provide agencies that have relevant program responsibilities—typically federal agencies, but in this case a state agency—with excerpted material from the draft report. We call this document a “statement of facts.” The purpose of the exit conference is to obtain the agency’s input regarding the accuracy of the facts presented. The purpose is not to obtain comments on the entire draft report; that step comes later in the process. Therefore, the statement of facts that we sent to Kentucky contained information describing laws, policies, and conditions that pertained directly to the state. We understand that agencies are likely to have additional comments on the full draft report—as Kentucky did in this instance—but also believe that our process of holding exit conferences to discuss the statement of facts, followed by a request for formal comments on the full report, is a transparent one. 3. We understand that Kentucky’s regulations define both hollow fills and valley fills, but not all states make this distinction in practice. Federal and state regulations identify different types of fills, including valley fills, head-of-hollow fills, and durable rock fills. These fill types differ in characteristics including placement, slope, and material composition. For ease of reading, we refer to all types of fills as valley fills in this report. The term valley fill is not meant to indicate the size of a particular fill or the type of stream affected—ephemeral, intermittent, or perennial. 4. We agree with the state’s specific comment and have clarified the report accordingly. In our discussion of post-mining land use requirements, we are referring specifically to mountaintop removal, one type of mountaintop mining. For further clarity, we have added a footnote that compares the requirements for mountaintop removal to those for steep slope mining, another kind of mountaintop mining. Throughout the rest of the report, however, we continue to use the term mountaintop mining to refer generally to all types of coal mining in mountainous areas. This usage is consistent with our previous report mentioned by the state (GAO-10-21) that was also recently reviewed by state officials. This usage is also consistent with the Environmental Protection Agency’s (EPA) 2003 draft Programmatic Environmental Impact Statement on Mountaintop Mining/Valley Fills in Appalachia. 5.
We agree that not all fills approved are ultimately constructed, and make that point in the report. However, we do not believe that our report overstates the miles of buried streams and did not modify the report in response to this comment. The sources for the data that we include in the report are the 2003 draft Programmatic Environmental Impact Statement on mountaintop mining and valley fills and the Office of Surface Mining’s 2008 final environmental impact statement on excess spoil and the stream buffer zone. For example, the 2003 draft statement reported that 724 miles of streams were “directly impacted by valley fills (i.e., covered by fill).” 6. We disagree with the comment. While we understand that some state regulations require termination of jurisdiction at bond release, the federal regulations only state that the relevant regulatory authority may terminate its jurisdiction under the Surface Mining Control and Reclamation Act (SMCRA) at bond release. Therefore, we have not revised the report in response to the comment. 7. We disagree with the comment. SMCRA does not specifically prohibit applicants from obtaining future SMCRA permits if they have previous bond forfeitures. SMCRA generally prohibits applicants from obtaining future permits if they have unabated violations. However, in response to the comment, we have added detail on the state regulations, which do specifically note that bond forfeiture based on violations that are not subsequently corrected disqualify operators from obtaining future permits. 8. We did not modify the report in response to this comment because the background section of the report does include data on the differences in recent surface coal mine production in the four states. Specifically, the report notes that Kentucky produced about 51 million tons while Tennessee produced less than 2 million tons in 2008. 9. This paragraph summarizes the section that follows, and we do not agree that an editorial change is needed. We believe that our description of the Army Corps of Engineers’ practices is accurate on the basis of information obtained from that agency. 10. The citations on which the findings are based are provided later in the body of the report. We did not add citations to this summary paragraph. However, we have deleted the word “tentative” from our discussion of impacts on water flows. We believe that the documents we cite, along with comments we received from the Department of the Interior, support our characterization in the final draft of the report. 11. We do not agree that the EPA statements are largely editorial and made no change to them. We believe that the EPA and U.S. Geological Survey statements on inadequate monitoring are as germane to the purpose of the report as the statements from state agency officials, who believe monitoring is adequate. 12. We have clarified the footnote to indicate that the mix of amphibian and reptile populations was affected by the presence of mining. 13. We have not modified our characterization of the West Virginia Flood Advisory Technical Task Force report because we believe it is an accurate summary of the task force report. However, we have modified the report to include Kentucky’s comment on its regulations related to flood analysis and avoidance. 14. We have added this information to footnote 57. The following are GAO’s comments on the letter dated December 22, 2009, from the Director, Department of Mines, Minerals and Energy. 1. 
1. We do not assume that post-bond release pollution discharges occur below valley fills. In addition to the Pond-Passmore study, our draft report cited the 2003 draft Programmatic Environmental Impact Statement, which concluded that streams below mountaintop mines with valley fills were characterized by contamination. We agree that the contamination may not necessarily have been post-bond release, and we agree that contamination problems are likely to be site specific, when they occur. We did not revise the report in response to this comment.

2. The focus of this report was surface coal mining and not all activities that may affect water quality. Therefore, while we agree that other land-disturbing activities may affect water quality in watersheds with mining, we have not included a discussion of those activities.

3. Points relating to hydrologic balance, such as effluent limitations, are discussed throughout the report in general terms and more specifically in appendix II. We have added more detail on hydrologic balance requirements to appendix II in response to this comment. This material is included in the appendix because, while we understand that adherence to regulations designed to protect the hydrologic balance of the mine site during the mining operation may help to minimize water quality issues after bond release, we were asked to discuss mechanisms available to address environmental problems after bond release, when the Surface Mining Control and Reclamation Act's hydrologic balance requirements would no longer apply.

4. We have not modified the report in response to the state's comment because we did not analyze the use of passive wetlands, or other methods, for treating water after bond release.

5. The state is correct that the 2003 draft programmatic environmental impact statement was finalized in October 2005, and we have revised footnote 2 to make that clear. The final version of the statement incorporated the 2003 draft statement by reference. However, the 2005 final statement did not contain all of the material found in the draft statement. For example, studies of the impacts of mountaintop mining were in the appendixes of the 2003 draft, but not the 2005 final statement. Therefore, we believe that it is preferable to refer the readers of our report to the 2003 draft statement instead of the 2005 final statement.

The following are GAO's comments on the letter dated December 22, 2009, from the Deputy Director, Division of Mining and Reclamation.

1. We agree that mining nationwide has similar potential to affect the environment. We also agree that the Surface Mining Control and Reclamation Act requires financial assurances in all states. However, we were asked to examine financial assurances and activities related to monitoring at coal mines with valley fills, and the four states we reviewed have the vast majority of coal mines with valley fills. Therefore, we did not revise the report in response to this comment.

2. Our report notes that the state has made changes to its policies and practices related to reforestation, approximate original contour, and surface water runoff. We did not revise the report in response to this comment.

3. It is correct that we are not making any recommendations regarding the length of the monitoring period before bond release. Our report notes that most, but not all, agencies we contacted believe that monitoring is adequate. At the same time, there is evidence from some monitoring that environmental problems may occur after bonds have been released.
We did not revise the report in response to this comment.

In addition to the contact named above, Robin Nazzaro (Director), Andrea Wamstad Brown (Assistant Director), Sherry McDonald (Assistant Director), Ross Campbell, Antoinette Capaccio, Brian Friedman, Brandon Haller, Carol Hernstadt Shulman, and Desiree Thorp made key contributions to this report. Josey Ballenger, Charlie Egan, Carol Kolarik, and Rebecca Shea also contributed to this report.

Surface mining for coal in Appalachia has generated opposition because rock and dirt from mountaintops is often removed and placed in nearby valleys and streams. The Office of Surface Mining Reclamation and Enforcement (OSM) in the Department of the Interior and states with approved programs regulate these mines under the Surface Mining Control and Reclamation Act (SMCRA). The Army Corps of Engineers (Corps), the Environmental Protection Agency (EPA), and states also regulate different aspects of coal mining, including the filling of valley streams, under the Clean Water Act. Under SMCRA, mine operators must provide financial assurances sufficient to allow mines to be reclaimed. Under the Clean Water Act, the Corps may require financial assurances that the impact of mines on streams can be mitigated. GAO was asked to examine (1) the approaches OSM, the states, and the Corps have taken to obtain financial assurances for surface coal mines with valley fills; (2) federal and state agencies' monitoring of these mines after reclamation and mitigation are complete; and (3) the federal laws agencies may use, and have used, to address latent environmental problems. GAO gathered information from state and federal agencies in Kentucky, Tennessee, Virginia, and West Virginia about their financial assurances practices, long-term monitoring, and use of federal laws to address environmental impacts at former mine sites. This report makes no recommendations.

OSM, the states, and the Corps use different approaches to financial assurances for reclamation and mitigation. Under SMCRA, states have flexibility to require mine operators to provide a bond for the full cost of reclamation or participate in an alternative bonding system such as a bond pool, which may combine bonds, taxes on coal production, and other sources of funding. West Virginia relies exclusively on an alternative bonding system, while Tennessee exclusively uses a full-cost bonding system. The other two states, Virginia and Kentucky, rely on a combination of full-cost bonds and an alternative bonding system. Under the Clean Water Act, the Corps has discretion to require that mine operators provide assurances that funds will be available to mitigate the effects of burying streams with valley fills, but it has not done so in the four states we reviewed. Instead, the Corps has relied on other mechanisms to ensure that mitigation will be completed satisfactorily, according to Corps officials. For example, some Corps officials said they rely on SMCRA financial assurances to ensure required mitigation.

OSM, EPA, the Corps, and the four states' mining and environmental agencies are not required to monitor former mountaintop mines with valley fills for long-term environmental degradation after reclamation and mitigation are complete and financial assurances have been released. However, several of them, along with the U.S. Geological Survey, have conducted or funded analyses of conditions near reclaimed mine sites with valley fills that have shown environmental impacts.
Specifically, analyses have shown that (1) reforestation efforts at some reclaimed surface coal mine sites needed improvement; (2) surface coal mine sites have contaminated streams and harmed aquatic organisms; (3) valley fills may affect water flow; and (4) mine operators have not always returned mine sites to their approximate original contour when required to do so under SMCRA. Federal and state agencies have taken some actions to respond to these findings, including adopting new guidelines for reforestation practices.

Several federal laws may be available under limited circumstances to address long-term environmental problems at former mine sites. These laws include SMCRA; the Clean Water Act; the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), also commonly known as Superfund; and the Resource Conservation and Recovery Act. For example, the Clean Water Act authorizes EPA or a state to require a permit if discharges are detected from a former surface mine, and CERCLA may authorize EPA to respond to certain pollution from former surface mines. According to the agencies, they have rarely or never needed to use these authorities.

We provided a draft of this report to OSM, the Corps, EPA, Kentucky, Virginia, and West Virginia for review and comment. The federal agencies generally agreed with the report, while the states were critical of what they perceived to be the message of the report.
To help protect against threats to federal systems, FISMA sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. This framework creates a cycle of risk management activities necessary for an effective security program. It is also intended to provide a mechanism for improved oversight of federal agency information security programs. To ensure the implementation of this framework, FISMA assigns specific responsibilities to agencies, their inspectors general, OMB, and NIST.

FISMA requires each agency to develop, document, and implement an information security program that includes the following components:

periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems;

policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements;

subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate;

security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training for personnel with significant responsibilities for information security;

periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems;

a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency;

procedures for detecting, reporting, and responding to security incidents; and

plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.

In addition, agencies are to report annually to OMB, certain congressional committees, and the Comptroller General on the adequacy and effectiveness of information security policies, procedures, and practices, and compliance with FISMA. The act also requires each agency inspector general, or other independent auditor, to annually evaluate and report on the information security program and practices of the agency.

OMB's responsibilities include developing and overseeing the implementation of policies, principles, standards, and guidelines on information security in federal agencies (except with regard to national security systems). It is also responsible for ensuring the operation of a federal information security incident center. The required functions of this center are performed by the DHS United States Computer Emergency Readiness Team (US-CERT), which was established to aggregate and disseminate cybersecurity information to improve warning of and response to incidents, increase coordination of response information, reduce vulnerabilities, and enhance prevention and protection.
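To make the structure of these requirements concrete, the components above can be thought of as a checklist an assessor walks through. The following minimal Python sketch is purely illustrative; the component labels and the sample agency data are shorthand assumptions, not an actual assessment tool or any agency's data.

    # Hypothetical checklist of the eight FISMA-required program components.
    FISMA_COMPONENTS = [
        "periodic risk assessments",
        "risk-based policies and procedures",
        "subordinate security plans",
        "security awareness and specialized training",
        "periodic testing and evaluation of controls",
        "remedial action (plan of action and milestones) process",
        "incident detection, reporting, and response procedures",
        "continuity of operations plans and procedures",
    ]

    def missing_components(implemented):
        """Return the required components an agency has not yet implemented."""
        return [c for c in FISMA_COMPONENTS if c not in implemented]

    # Sample (invented) agency status: only two components fully in place.
    agency = {"periodic risk assessments", "security awareness and specialized training"}
    for gap in missing_components(agency):
        print("missing:", gap)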
OMB is also responsible for reviewing, at least annually, and approving or disapproving agency information security programs. Since it began issuing guidance to agencies in 2003, OMB has instructed agency chief information officers and inspectors general to report on a variety of metrics in order to satisfy reporting requirements established by FISMA. Over time, these metrics have evolved to include administration priorities and baseline metrics meant to allow for measurement of agency progress in implementing information security-related priorities and controls. OMB requires agencies and inspectors general to use an interactive data collection tool called CyberScope to respond to these metrics. The metrics are used by OMB to summarize agencies' progress in meeting FISMA requirements and report this progress to Congress in an annual report as required by FISMA.

NIST's responsibilities under FISMA include the development of security standards and guidelines for agencies that include standards for categorizing information and information systems according to ranges of risk levels, minimum security requirements for information and information systems in risk categories, guidelines for detection and handling of information security incidents, and guidelines for identifying an information system as a national security system. (See app. II for additional information on agency responsibilities under FISMA.)

In the 11 years since FISMA was enacted into law, executive branch oversight of agency information security has changed. As part of its FISMA oversight responsibilities, OMB has issued annual instructions for agencies and inspectors general to meet FISMA reporting requirements. However, in July 2010, the Director of OMB and the White House Cybersecurity Coordinator issued a joint memorandum stating that DHS was to exercise primary responsibility within the executive branch for the operational aspects of cybersecurity for federal information systems that fall within the scope of FISMA. The memo stated that DHS activities would include five specific responsibilities of OMB under FISMA: overseeing implementation of and reporting on government cybersecurity policies and guidance; overseeing and assisting government efforts to provide adequate, risk-based, and cost-effective cybersecurity; overseeing agencies' compliance with FISMA; overseeing agencies' cybersecurity operations and incident response; and annually reviewing agencies' cybersecurity programs.

The OMB memo also stated that in carrying out these responsibilities, DHS is to be subject to general OMB oversight in accordance with the provisions of FISMA. In addition, the memo stated that the Cybersecurity Coordinator would lead the interagency process for cybersecurity strategy and policy development. Subsequent to the issuance of this memo, both OMB and DHS began issuing annual reporting instructions to agencies, and DHS began issuing reporting metrics to agencies and inspectors general instead of OMB.

Within DHS, the Federal Network Resilience division's Cybersecurity Performance Management Branch is responsible for (1) developing and disseminating FISMA reporting metrics, (2) managing the CyberScope web-based application, and (3) collecting and reviewing federal agencies' cybersecurity data submissions and monthly data feeds to CyberScope. In addition, the Cybersecurity Assurance Program Branch is responsible for conducting cybersecurity reviews and assessments at federal agencies to evaluate the effectiveness of agencies' information security programs.
In fiscal year 2012, agencies and their inspectors general reported mixed progress from fiscal year 2011 in implementing many of the requirements for establishing an entity-wide information security program. According to inspectors general reports, agencies (1) improved in establishing a program for managing information security risk; (2) generally documented information security program policies and procedures; (3) generally implemented certain elements of security planning; (4) declined in providing security awareness training but improved in providing specialized training; (5) generally established test and evaluation programs and are working toward establishing continuous monitoring programs; (6) declined in implementing elements of a remediation program; (7) generally established programs for detecting, responding to, and reporting security incidents; and (8) declined in implementing elements of continuity of operations programs. Notwithstanding the mixed progress made, GAO and inspectors general continue to identify weaknesses in agencies' information security programs and make recommendations to mitigate them. In addition, OMB and DHS continued to develop reporting metrics and assist agencies in improving their information security programs; however, the metrics do not evaluate all FISMA requirements, focus mainly on compliance rather than the effectiveness of controls, and in many cases do not identify specific performance targets for determining levels of implementation. Finally, inspectors general conducted the required independent evaluations of agency information security programs, and NIST continued to issue guidance to assist agencies with implementing controls to improve their information security posture.

FISMA requires that the head of each agency provide information security protections commensurate with the risk resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of agency information and information systems. FISMA specifically requires agencies to assess this risk in order to determine the appropriate controls needed to remediate or mitigate it. To assist agencies in identifying risks, NIST has issued risk management and assessment guides for organizations and information systems. According to NIST's Guide for Applying the Risk Management Framework to Federal Information Systems, risk management is addressed at the organization level, the mission and business process level, and the information system level. Risks are addressed from an organizational perspective with the development of, among other things, risk management policies, procedures, and strategy. The risk decisions made at the organizational level guide the entire risk management program.

Agencies made progress in implementing programs for managing information security risk in fiscal year 2012. According to inspectors general reports, an increasing number of agencies implemented a program for managing information security risk that is consistent with FISMA requirements and implementing guidance. Specifically, 18 of 24 agencies in fiscal year 2012 implemented such a program, compared to 8 of 24 in 2011. In addition, an increasing number of agencies documented policies, procedures, and strategies—three key components for assessing and managing risk. Figure 1 shows agency progress in documenting and implementing a risk management program and key elements of that program in fiscal years 2011 and 2012.
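NIST's risk assessment guidance (NIST Special Publication 800-30) characterizes risk in terms of the likelihood of a threat event and its impact if it occurs. The sketch below illustrates that idea in simplified numeric form so that threats can be ranked for remediation; the scales and example threats are illustrative assumptions only, not any agency's actual assessment method or data.

    # Simplified risk prioritization in the spirit of NIST SP 800-30:
    # risk is treated as a function of threat likelihood and impact.
    LIKELIHOOD = {"low": 1, "moderate": 2, "high": 3}
    IMPACT = {"low": 1, "moderate": 2, "high": 3}

    def risk_score(likelihood, impact):
        """Score risk as likelihood x impact on illustrative 1-3 scales."""
        return LIKELIHOOD[likelihood] * IMPACT[impact]

    threats = [  # invented examples
        ("unpatched database software", "high", "high"),
        ("lost or stolen mobile device", "moderate", "high"),
        ("shared administrator accounts", "moderate", "moderate"),
    ]

    # Rank threats so remediation can focus on the highest risks first.
    for name, likelihood, impact in sorted(
            threats, key=lambda t: risk_score(t[1], t[2]), reverse=True):
        print(risk_score(likelihood, impact), name)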
Although an increasing number of agencies have implemented a risk management program and documented policies, procedures, and strategies, agency inspectors general identified areas for improvement in their agency's risk assessment and management activities. For example, in fiscal year 2012, 20 of 24 agencies had weaknesses in periodically assessing and validating risks. To illustrate, 1 agency did not conduct a risk assessment to ensure that the impact of mobile devices and their associated vulnerabilities were adequately addressed. Another agency's risk assessments were not properly updated, as they included references to inaccurate system environment information. Another agency was missing key elements in its approach to managing risk at an agency-wide level, including conducting an agency-wide risk assessment and communicating risks to system owners. In addition, fewer agencies addressed risk from a mission or business perspective in fiscal year 2012 than in fiscal year 2011, declining from 15 to 14 agencies. Risk management is at the center of an effective information security program; without an effective risk management program, agencies may not be fully aware of the risks to essential computing resources and may not be able to make informed decisions about needed security protections.

FISMA requires agencies to develop, document, and implement policies and procedures that are based on risk assessments; cost-effectively reduce information security risks to an acceptable level; ensure that information security is addressed throughout the life cycle of each agency information system; and ensure compliance with FISMA requirements, OMB policies and procedures, minimally acceptable system configuration requirements, and any other applicable requirements. In fiscal years 2011 and 2012, OMB asked inspectors general to report on whether agencies had documented policies and procedures for 11 information system control categories. These controls are intended to:

(1) manage risks to organizational operations, assets, and individuals resulting from the operation of information systems;
(2) provide reasonable assurance that changes to information system resources are authorized and systems are configured and operating securely and as intended;
(3) rapidly detect incidents, minimize loss and destruction, mitigate exploited weaknesses, and restore IT services;
(4) inform agency personnel of the information security risks associated with their activities and of their responsibilities in complying with agency policies and procedures designed to reduce these risks;
(5) ensure individuals with significant security responsibilities understand their responsibilities in securing information systems;
(6) assist agencies in identifying, assessing, prioritizing, and monitoring the progress of corrective efforts for security weaknesses found in programs and systems;
(7) deter, detect, and defend against unauthorized network access;
(8) ensure access rights are only given to the intended individuals or processes;
(9) maintain a current security status for one or more information systems or for all information systems on which the organization's mission depends;
(10) ensure agencies are adequately prepared to cope with the loss of operational capabilities due to a service disruption such as an act of nature, fire, accident, or sabotage; and
(11) assist agencies in determining whether contractor-operated systems have adequate security.
Inspectors general reported that most agencies documented policies and procedures that were consistent with federal guidelines and requirements; however, several agencies had not fully documented policies and procedures for individual control categories. In addition, the number of agencies documenting policies and procedures increased for some control categories but declined for others. For example, an increasing number of agencies documented policies and procedures for risk management, configuration management, and continuous monitoring, but the number of agencies documenting policies and procedures for security awareness and remote access declined. According to OMB, the decline in the number of agencies documenting certain policies and procedures could be due to agencies' not updating their policies and procedures after new federal requirements are established or new technologies are deployed. Table 1 provides a summary of the number of agencies that fully documented information security program policies and procedures for fiscal years 2011 and 2012.

Although most agencies documented security policies and procedures, they often did not fully or consistently implement them. To illustrate, most major federal agencies had weaknesses in the following information system controls:

Access controls: In fiscal year 2012, almost all (23 of 24) of the major federal agencies had weaknesses in the controls that are intended to limit or detect access to computer resources (data, programs, equipment, and facilities), thereby protecting them against unauthorized modification, loss, and disclosure. For example, 21 of 24 agencies had weaknesses in their ability to appropriately identify and authenticate system users. To illustrate, although agencies are required to uniquely identify users on their systems, some users shared accounts at 1 agency, and administrators shared accounts for multiple systems at another agency, making it difficult for the agencies to account for user and administrator activity on their systems. Other agencies had weak password controls, including systems with passwords that had not been changed from the easily guessable default passwords supplied by the vendor. In addition, 20 of 24 agencies had weaknesses in the process used to grant or restrict user access to information technology resources. For example, 1 agency had not disabled 363 user accounts for individuals who were no longer employed by the agency, despite a department policy of disabling these accounts within 48 hours of an employee's departure. Further, 18 of 24 agencies had weaknesses in the protection of information system boundaries. For example, although 1 agency had established a program for remote access to agency systems, it had not ensured that authentication mechanisms for remote access met NIST guidelines for remote authentication. Lastly, 11 of 24 agencies had weaknesses in their ability to restrict physical access or harm to computer resources and protect them from unintentional loss or impairment. For example, 1 agency had not always deactivated physical access cards for contractors who no longer worked at the agency and had granted physical access to employees who were not approved for such access.
Configuration management: In fiscal year 2012, all 24 agencies had weaknesses in the controls that are intended to prevent unauthorized changes to information system resources (for example, software programs and hardware configurations) and provide reasonable assurance that systems are configured and operating securely and as intended. For example, 20 of 24 agencies had weaknesses in processes for updating software to protect against known vulnerabilities. One agency had not installed critical updates in a timely manner for 14 of the 15 systems on one of its networks that were reviewed by the agency's inspector general. Another agency had multiple database update-related vulnerabilities dating back to 2009. In addition, 17 of 24 agencies had weaknesses in authorizing, testing, approving, tracking, and controlling system changes. For example, most of the system change request records reviewed by 1 agency's independent auditor did not include the proper approvals for the system change.

Segregation of duties: In fiscal year 2012, 18 of 24 agencies had weaknesses in the controls intended to prevent one individual from controlling all critical stages of a process, which is often achieved by splitting responsibilities between two or more organizational groups. For example, at 1 agency, excessive system access was granted to users of at least seven systems and may have allowed users to perform incompatible duties. The same agency also did not have an effective process for monitoring its systems for users with the ability to perform these incompatible duties.

Illustrating the extent to which weaknesses affect the 24 major federal agencies, inspectors general at 22 of 24 agencies cited information security as a major management challenge for their agency, and 19 agencies reported that information security control deficiencies were either a material weakness or a significant deficiency in internal controls over financial reporting in fiscal year 2012. Until all agencies properly document and implement policies and procedures, and implement recommendations made by us and inspectors general to correct identified weaknesses, they may not be able to effectively reduce risk to their information and information systems, and the information security practices that are driven by these policies and procedures may be applied inconsistently.

FISMA requires an agency's information security program to include plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. According to NIST, the purpose of the system security plan is to provide an overview of the security requirements of the system and describe the controls in place or planned for meeting those requirements. The first step in the system security planning process is to categorize the system based on the impact to agency operations, assets, and personnel should the confidentiality, integrity, or availability of the agency's information and information systems be compromised. This categorization is then used to determine the appropriate security controls needed for each system. Another key step is selecting a baseline of security controls for each system and documenting those controls in the security plan. In fiscal years 2011 and 2012, OMB asked inspectors general to report on whether their agency appropriately categorized information systems and selected appropriate baseline security controls.
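The categorization step described above can be illustrated concretely. Under FIPS Publication 199, a system is rated low, moderate, or high for the potential impact of a loss of confidentiality, integrity, and availability, and under FIPS Publication 200 the overall impact level used to select a security control baseline is the highest of the three ratings (the high-water mark). The following simplified sketch implements that rule; the example system is invented.

    # High-water mark categorization per FIPS 199/200 (simplified sketch).
    LEVELS = {"low": 1, "moderate": 2, "high": 3}

    def categorize(confidentiality, integrity, availability):
        """Return the overall system impact level: the highest rating
        among the three FIPS 199 security objectives."""
        return max((confidentiality, integrity, availability), key=LEVELS.get)

    # Invented example: a system whose data must not be disclosed or
    # corrupted but that can tolerate short outages is a high-impact
    # system, so the high baseline of security controls would apply.
    print(categorize("high", "high", "moderate"))  # prints: high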
Although a few inspectors general reported weaknesses in their agency's process for categorizing information systems, 21 of 24 reported that agencies appropriately categorized them in fiscal years 2011 and 2012. In addition, in fiscal years 2011 and 2012, 18 of 24 inspectors general also stated that agencies selected an appropriately tailored set of baseline security controls. However, inspectors general at 19 of 24 agencies reported that security plans were not always complete or properly updated. For example, 11 system security plans at 1 agency did not meet the minimum security requirements of NIST Special Publication 800-53. Another agency was not consistently updating system security plans to reflect the current operating environment. Further, 2 of the 16 system security plans reviewed at another agency had not been updated within the required 3-year period. Until agencies appropriately develop and update system security plans and implement recommendations made by us and inspectors general to correct identified weaknesses, they may face an increased risk that officials will be unaware of system security requirements and that controls will not be in place.

FISMA requires agencies to provide security awareness training to personnel, including contractors and other users of information systems that support the operations and assets of the agency. Training is intended to inform agency personnel of the information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks. FISMA also requires agencies to train and oversee personnel with significant security responsibilities for information security with respect to those responsibilities. Providing training to agency personnel is critical to securing information and information systems because people are one of the weakest links in attempts to secure systems and networks.

In fiscal years 2011 and 2012, OMB required agencies to report on the number of network users who were provided, and successfully completed, security awareness training for that year. Agencies were also required to report on the number of network users and other staff with significant security responsibilities who were provided specialized training. In fiscal year 2012, 12 of the 24 agencies provided annual security awareness training to at least 90 percent of their network users, a notable decline from fiscal year 2011, in which 22 of 24 agencies provided training to at least 90 percent of their users. Inspectors general at 17 of 24 agencies also reported weaknesses in security awareness programs, including in agencies' ability to track the number of system users provided training that year. For example, 5 of 24 inspectors general reported that their agency's process for identifying and tracking the status of security awareness training was not adequate or in accordance with government policies, an improvement over 10 of 24 in fiscal year 2011. To illustrate, in fiscal year 2011, 1 agency could not identify evidence of security awareness training for over 12 percent of system users at three component agencies. Another agency lacked a process to ensure all contractors were identified and provided with security awareness training in fiscal year 2012. Without sufficiently trained security personnel, security lapses are more likely to occur and could contribute to further information security weaknesses.
In fiscal year 2012, 16 of 24 agencies provided specialized training to at least 90 percent of their users with significant security responsibilities, a slight increase from 15 of 24 in fiscal year 2011. In addition, inspectors general reported in fiscal year 2012 that 22 of 24 agencies established a specialized training program that complied with FISMA, an improvement over fiscal year 2011, in which half of the major federal agencies had established such a program. Further, in fiscal year 2012, 19 of 24 inspectors general reported that their agency's mechanism for tracking individuals who need specialized training was adequate, a slight improvement from fiscal year 2011, in which 17 of 24 reported adequate tracking. Although the number of agencies implementing specialized training programs increased, 16 of 24 inspectors general identified weaknesses with such programs in fiscal year 2012. For example, 1 agency had not yet defined "significant information security responsibilities" in order to identify those individuals requiring specialized training. Another agency's specialized training process was ad hoc, and everyone with significant security responsibilities had taken the same training course rather than one tailored to their specific job roles. While agencies have made progress in implementing specialized training programs that comply with FISMA, without tailoring training to specific job roles, agencies are at increased risk that individuals with significant security responsibilities may not be adequately prepared to perform their specific responsibilities in protecting the agency's information and information systems.

FISMA requires that federal agencies periodically test and evaluate the effectiveness of their information security policies, procedures, and practices as part of implementing an agency-wide security program. This testing is to be performed with a frequency depending on risk, but no less than annually. Testing should include management, operational, and technical controls for every system identified in the agency's required inventory of major systems. This type of oversight is a fundamental element that demonstrates management's commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results are used to improve security.

In recent years, the federal government has been moving toward implementing a more frequent control testing process called continuous monitoring. In March 2012, the White House Cybersecurity Coordinator announced that his office, in coordination with experts from DHS, the Department of Defense (DOD), and OMB, had identified continuous monitoring of federal information systems as a cross-agency priority area for improving federal cybersecurity. According to NIST, the goal of continuous monitoring is to transform the otherwise static test and evaluation process into a dynamic risk mitigation program that provides essential, near real-time security status and remediation. In February 2010, NIST included continuous monitoring as one of six steps in its risk management framework described in NIST Special Publication 800-37.
In addition, in September 2011, NIST published Special Publication 800-137 to assist organizations in developing a continuous monitoring strategy and implementing a continuous monitoring program that provides awareness of threats and vulnerabilities and visibility into organizational assets and the effectiveness of implemented security controls.

The majority of federal agencies implemented elements of test and evaluation programs in fiscal years 2011 and 2012. For fiscal year 2012, 17 of 24 inspectors general reported that agencies assessed controls using appropriate assessment procedures to determine the extent to which controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security requirements for the system. However, 17 of 24 inspectors general also identified weaknesses in agencies' processes for testing and evaluating identified controls. For example, 10 of 23 agencies did not monitor information security controls on an ongoing basis in fiscal year 2012. According to DHS, monitoring information security controls includes assessing control effectiveness, documenting changes to the system or its environment of operation, conducting security impact analyses of the associated changes, and reporting the security state of the system to designated organizational officials. One agency had not performed ongoing assessments of selected security controls on nearly 10 percent of its systems in fiscal year 2012. Another agency had not met the basic test and evaluation requirement for the past 5 years, and this was the major reason the agency's inspector general classified its information security governance as a material weakness for financial reporting. The identified weaknesses in test and evaluation programs could limit agencies' awareness of vulnerabilities in their critical information systems.

According to OMB's annual report to Congress, agencies reported improvements in fiscal year 2012 in implementing tools that provided automated continuous monitoring capabilities for vulnerability, configuration, and asset management for the agency's information systems. OMB established a goal of 80 percent for implementing an automated capability to assess vulnerability, configuration, and asset management information for agencies' information technology assets in fiscal year 2012. According to OMB, 17 of 24 major federal agencies reported at least 80 percent implementation of this capability for asset and configuration management, and 16 of 24 reported at least 80 percent implementation of this capability for vulnerability management. In addition, as figure 2 illustrates, most agencies reported an overall improvement in the percentage of information technology assets with these automated capabilities from fiscal year 2011 to 2012. Specifically, 12 agencies increased the percentage of information technology assets with automated capabilities for asset management, 18 agencies increased the percentage with automated capabilities for configuration management, and 14 agencies increased the percentage with automated capabilities for vulnerability management.

The annual DHS reporting metrics also addressed three administration cybersecurity priorities:

Trusted Internet Connections: Consolidate external telecommunication connections and ensure a set of baseline security capabilities for situational awareness and enhanced monitoring.
Continuous monitoring of federal information systems: Transform the otherwise static security control assessment and authorization process into a dynamic risk mitigation program that provides essential, near real-time security status and remediation, increasing visibility into system operations and helping security personnel make risk management decisions based on increased situational awareness.

Strong authentication: Increase the use of federal smartcard credentials, such as Personal Identity Verification and Common Access Cards, that provide multifactor authentication and digital signature and encryption capabilities, authorizing users to access federal information systems with a higher level of assurance.

CyberStat reviews: In fiscal year 2011, DHS, along with OMB and the National Security Staff (NSS), conducted the first CyberStat reviews of seven federal agencies. According to OMB, these CyberStat reviews were face-to-face, evidence-based meetings to ensure agencies were accountable for their cybersecurity posture and to assist them in developing focused strategies for improvement in areas where they were facing challenges. According to OMB, these reviews resulted in a prioritized action plan for each agency to improve overall agency performance. CyberStat reviews were also conducted for seven agencies in fiscal year 2012. According to OMB, these meetings focused heavily on the three administration priorities and not specifically on FISMA requirements. The top challenges raised by agencies in fiscal year 2012 included the need to upgrade legacy systems to support new capabilities, acquire skilled staff, and ensure that the necessary financial resources were allocated to the administration's priority initiatives for cybersecurity. According to DHS, OMB and NSS are now requiring a CyberStat review of all 24 major federal agencies for fiscal year 2013—a new process that began in December 2012. However, in May 2013, OMB officials stated that while conducting CyberStat reviews of all 24 agencies is their goal, they would not meet that goal this year, and in July 2013, DHS officials stated that they do not have the capacity to meet with all 24 agencies in 1 fiscal year.

CIO and CISO interviews: In fiscal year 2011, DHS began interviewing agency chief information officers (CIO) and chief information security officers (CISO) on their agency's cybersecurity posture. According to OMB, these interviews had three distinct goals: (1) assessing the agency's FISMA compliance and challenges, (2) identifying security best practices and raising awareness of FISMA reporting requirements, and (3) establishing meaningful dialogue with the agency's senior leadership.

Baseline metrics: Many of the fiscal year 2010 metrics were carried over into fiscal year 2011, which established a baseline and provided an opportunity to measure progress in federal agencies and the federal government as a whole. According to OMB, establishing these baseline metrics has improved its understanding of the current cybersecurity posture and helped to drive accountability for improving the collective effectiveness of the federal government's cybersecurity capabilities.
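Metrics with explicit performance targets, such as the 80 percent goal for automated monitoring capabilities noted above and the 75 percent strong authentication target discussed later in this report, lend themselves to simple automated comparison. The sketch below illustrates such a check; the reported values are hypothetical, not actual agency submissions.

    # Illustrative check of reported metric values against explicit targets.
    # Targets reflect figures cited in this report; reported values are invented.
    TARGETS = {
        "automated asset management": 80.0,
        "automated configuration management": 80.0,
        "automated vulnerability management": 80.0,
        "PIV card authentication": 75.0,
    }

    reported = {  # hypothetical agency submission, in percent
        "automated asset management": 86.0,
        "automated configuration management": 71.0,
        "automated vulnerability management": 83.0,
        "PIV card authentication": 40.0,
    }

    for metric, target in TARGETS.items():
        value = reported.get(metric, 0.0)
        status = "meets target" if value >= target else "below target"
        print(f"{metric}: {value:.0f}% (target {target:.0f}%) -> {status}")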
In our 2009 report on efforts needed to improve federal performance measures, we found that leading organizations and experts have identified different types of measures that are useful in helping to achieve information security goals:

Compliance measures, which are used to determine the extent to which security controls are in place that adhere to internal policies, industry standards, or other legal or regulatory requirements. These measures are effective at pointing out where improvements are needed in implementing required policies and procedures but provide only limited insight into the overall performance of an organization's information security program.

Control effectiveness measures, which characterize the extent to which specific control activities within an organization's information security program meet their objectives. Rather than merely capturing what controls are in place, such measures gauge how effectively the controls have been implemented.

These categories are consistent with those laid out by NIST in its information security performance measurement guide, which serves as official guidance on information security measures for federal agencies and which OMB requires agencies to follow. In addition, information security experts, as well as NIST guidance, indicate that organizations with increasingly effective information security programs should migrate from predominantly using compliance measures toward a balanced set that includes various types of measures. Further, we found that measures generally have key characteristics and attributes. For example, measures are most meaningful to an organization when they, among other things, have targets or thresholds that allow progress to be tracked over time and are linked to organizational priorities. In our report, we recommended that OMB, among other things, revise annual reporting guidance to agencies to require (1) reporting on a balanced set of measures, including measures that focus on the effectiveness of control activities and program impact, and (2) inclusion of all key attributes in the development of measures. OMB concurred with our recommendations and revised its fiscal year 2010 reporting instructions and metrics accordingly.

For fiscal years 2011 and 2012, DHS, as part of its recently assigned responsibilities for FISMA oversight, developed a revised set of reporting metrics to assess agencies' compliance with the act. Specifically, inspectors general were asked to report on 11 information system control categories, and agency chief information officers were asked to report on 12 categories, as indicated in table 6. For each category, inspectors general and chief information officers were required to answer a series of questions related to the agency's implementation of these controls. The metrics developed by DHS for inspectors general and chief information officers for fiscal year 2012 address compliance with six of the eight components of an information security program required by FISMA. Specifically, the metrics address the establishment of information security policies and procedures; security training; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices; remedial actions to address information security deficiencies; procedures for detecting, reporting, and responding to security incidents; and continuity of operations plans and procedures.
However, these metrics do not specifically address two of the eight components—agencies' processes for conducting risk assessments or developing security plans. For example, while the metrics ask inspectors general to report on their agency's policies and procedures for risk management and its overall risk management program, they do not specifically require inspectors general or agency chief information officers to report on whether the agency has periodically assessed the risk and magnitude of harm that could result from the compromise of information and information systems that support the operations and assets of the agency, as required by FISMA. The metrics also do not specifically require agencies or inspectors general to comment on the development, documentation, and implementation of subordinate plans for providing adequate security for networks, facilities, and systems or groups of systems, as appropriate. Without measuring agencies' compliance with these FISMA requirements, DHS, OMB, and other stakeholders will have less insight into the implementation of agencies' information security programs.

As highlighted in our 2009 report, the use of control effectiveness measures in addition to compliance measures can provide additional insight into how effectively control activities are meeting their security objectives. According to OMB instructions for FISMA reporting, the DHS metrics for inspectors general were also designed to measure the effectiveness of agencies' information security programs, and OMB relied on responses by inspectors general to these metrics to gauge the effectiveness of information security programs.

While some of the metrics for inspectors general were intended to measure effectiveness, many of them did not. The 2012 metrics ask inspectors general to determine whether or not their agency has established a program for each of the 11 information system control categories, and whether or not these programs include key security practices. Several of these metrics were intended to reflect the effectiveness of agencies' program practices within the control categories. For example, for the incident response and reporting category, inspectors general were asked whether their agency responded to and resolved incidents in a timely manner and whether it reported incidents to US-CERT and law enforcement within established time frames. However, many of the metrics for inspectors general did not provide a means of assessing the effectiveness of the program for control categories. Specifically, the metrics focus on the establishment of the program but do not require inspectors general to characterize the extent to which these program components meet their objectives. For each control category, the metrics ask whether the agency established an enterprise-wide program that was consistent with FISMA requirements, OMB policy, and applicable NIST guidelines. However, these metrics do not allow the inspectors general to respond on how effectively the program is operating; instead, they capture only whether programs have been established.

The lack of effectiveness metrics has led to inconsistencies in inspector general reporting. The following examples illustrate that while inspectors general reported, via responses to the DHS metrics, that their agency had established programs for implementing control categories, they also reported continuing weaknesses in those controls in the same year.
One inspector general responded to the metric for plans of action and milestones (i.e., the remediation program) that its agency had a remediation program in place that is consistent with FISMA requirements, tracks and monitors weaknesses, includes remediation plans that are effective at correcting weaknesses, remediates weaknesses in a timely manner, and adheres to milestone remediation dates. However, the inspector general's audit of the agency's information security program identified 4,377 unremediated weaknesses, and the resulting report stated that component agencies were not entering or tracking all information security weaknesses.

Another inspector general reported in response to the contractor systems metric that its agency updates the inventory of contractor systems at least annually; however, a report we issued on this agency's information security program identified a weakness in the accuracy of the agency's inventory of systems, including those systems operated by contractors. Specifically, the agency provided three different information system inventories, and none of them had the same information, reducing the agency's assurance that information systems were properly accounted for.

In response to the configuration management metric, an inspector general at another agency stated that software scanning capabilities were fully implemented. However, the inspector general's independent evaluation showed that although the systems reviewed had the capability for software scanning, none of the systems were being fully scanned for vulnerabilities in accordance with agency requirements.

Without fully or consistently measuring the effectiveness of controls, DHS, OMB, and other stakeholders will lack insight into the performance of agencies' information security programs.

In October 2011, we determined that 30 of the 31 metrics for chief information officers for fiscal year 2010 did not include performance targets that would allow agencies to track progress over time. We recommended that the Director of OMB incorporate performance targets for metrics in annual FISMA reporting guidance to agencies and inspectors general. OMB generally agreed with our recommendation. In fiscal year 2012, DHS included explicit performance targets for metrics that were linked to the three cross-agency cybersecurity priority goals discussed earlier. For example, agencies were to ensure that 75 percent of all users were required to use personal identity verification cards to authenticate to their systems. While this partially addresses our previous recommendation, no explicit targets were established for metrics that did not relate to the three cross-agency cybersecurity priority goals, such as metrics related to data protection, incident management, configuration management, incident response and reporting, and remediation programs. DHS officials acknowledged that these targets were needed but said that limited agency resources and the lack of DHS authority to establish targets have prevented the department from establishing additional ones. The officials also stated that only certain targets were included at this time in order to focus agency resources and senior leadership attention on the items they believed would create the most change in federal information security. They added that additional targets will be included over time. Developing targets for additional metrics, as we previously recommended, will enable agencies and oversight entities to better gauge progress in securing federal systems.
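The gap these examples describe between a compliance-style answer and an effectiveness-style measure can be shown with a small sketch. The 4,377 unremediated weaknesses figure is taken from the example above; the on-time remediation count is a hypothetical assumption for illustration.

    # Contrast between a compliance metric (program established? yes/no) and an
    # effectiveness metric (share of weaknesses actually remediated on schedule).
    program_established = True            # what a yes/no compliance metric captures
    weaknesses_identified = 4377          # figure cited in the example above
    weaknesses_remediated_on_time = 1250  # hypothetical, for illustration

    compliance_answer = "yes" if program_established else "no"
    effectiveness_rate = weaknesses_remediated_on_time / weaknesses_identified

    print("compliance metric: remediation program established?", compliance_answer)
    print(f"effectiveness metric: {effectiveness_rate:.0%} remediated on time")
    # An agency can "pass" the compliance metric even when the effectiveness
    # metric shows that most identified weaknesses remain unaddressed.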
In June 2013, the DHS inspector general issued a report on the results of its evaluation of whether DHS has implemented its additional cybersecurity responsibilities effectively to improve the security posture of the federal government. It found that DHS had not developed a strategic implementation plan that describes its cybersecurity responsibilities or establishes specific time frames and milestones to provide a clear plan of action for fulfilling those responsibilities. The report also stated that DHS had not established performance metrics to measure and monitor its progress in accomplishing its mission and goals. According to the inspector general, management turnover has hindered DHS's ability to develop a strategic implementation plan. Specifically, three key individuals essential to the DHS division overseeing FISMA compliance have left the agency since July 2012. The inspector general recommended that DHS coordinate with OMB to develop a strategic implementation plan that identifies long-term goals and milestones for federal agency FISMA compliance.

In addition, the inspector general found that some agencies indicated that DHS could make further improvements to the clarity and quality of the FISMA reporting metrics. Specifically, five agencies indicated that some of the fiscal year 2012 and 2013 metrics were unclear and should be revised. In addition, two agencies stated that the reporting process was a strain on personnel resources because there are too many metrics. Some agency officials we interviewed echoed the need for clearer metrics and agreed that the process was time-consuming. The inspector general recommended that DHS improve communication and coordination with federal agencies by providing additional clarity regarding the FISMA reporting metrics. DHS agreed with the recommendations, and officials stated that they are developing a strategic plan and documenting a methodology for metric development with the specific aim of improving the quality of the metrics, but did not state when the plan would be completed.

FISMA requires that agencies have an independent evaluation performed each year to evaluate the effectiveness of the agency's information security program and practices. FISMA also requires this evaluation to include (1) testing of the effectiveness of information security policies, procedures, and practices of a representative subset of the agency's information systems, and (2) an assessment of compliance with FISMA requirements and related information security policies, procedures, standards, and guidelines. For agencies with inspectors general, FISMA requires that these evaluations be performed by the inspector general or an independent external auditor. Lastly, FISMA requires that each year the agencies submit the results of these evaluations to OMB and that OMB summarize the results of the evaluations in its annual report to Congress. According to OMB instructions for FISMA reporting, the metrics for inspectors general were designed to measure the effectiveness of agencies' information security programs, and OMB relied on responses by inspectors general to gauge the effectiveness of information security program processes. Our review of reports issued by inspectors general from the 24 major federal agencies in fiscal years 2011 and 2012 shows that all 24 inspectors general conducted evaluations, identified weaknesses in agency information security programs and practices, and included recommendations to address the weaknesses.
Inspectors general responded to the DHS-defined metrics for reporting on agency implementation of FISMA requirements, and most inspectors general also issued a more detailed audit report discussing the results of their evaluation of agency policies, procedures, and practices. One inspector general responded to the DHS metrics but chose not to issue an additional detailed report on the results of the evaluation in fiscal year 2012. Three other inspectors general issued reports that summarized weaknesses contained in multiple reports throughout the reporting period.

To fulfill its responsibility to provide standards and guidance to agencies on information security, NIST has produced numerous information security standards and guidelines as well as updated existing information security publications. In April 2013, NIST released the fourth update of a key federal government computer security control guide, Special Publication 800-53: Security and Privacy Controls for Federal Information Systems and Organizations. According to NIST, the update was motivated by expanding threats and the increasing sophistication of cyber attacks, and over 200 controls were added to help address these expanding threats and vulnerabilities. Examples include controls related to mobile and cloud computing; application security; trustworthiness, assurance, and resiliency of information systems; insider threat; supply chain security; and the advanced persistent threat. As with previous versions of Special Publication 800-53, the controls contained in the latest update, according to NIST, can and should be tailored for the specific needs of the agency and based on risk. In addition to this guide, NIST also issued and revised several other guidance documents. Table 7 lists recent NIST updates and releases. According to NIST officials, an update to SP 800-53A is expected to be released this year.

In August 2012, NIST also published the National Cybersecurity Workforce Framework, which established a common taxonomy and lexicon that is to be used to describe all cybersecurity work and workers regardless of where or for whom the work is performed. This framework was developed as part of a larger effort to educate, recruit, train, develop, and retain a highly qualified workforce in the federal government as well as other sectors. In addition, in partnership with the Department of Defense, the intelligence community, and the Committee on National Security Systems, NIST developed a unified information security framework to provide a common strategy to protect critical federal information systems and associated infrastructure for national security and non-national security systems. Historically, information systems in civilian agencies have operated under different security controls than military and intelligence systems. According to NIST, the framework provides standardized risk management policies, procedures, technologies, tools, and techniques that can be applied by all federal agencies. See table 8 for a list of publications that make up the framework.

Weaknesses continued to be identified for all of the components of an information security program, and we and agency inspectors general have made numerous recommendations to address these weaknesses and strengthen agencies' programs. These weaknesses show that information security continues to be a major challenge for federal agencies, and addressing them is essential to establishing a robust security posture for the federal government.
Until steps are taken to address these persistent challenges, overall progress in improving the nation's cybersecurity posture is likely to remain limited. Moreover, while OMB and DHS have continued to oversee agencies' FISMA implementation, their reporting metrics do not address all FISMA requirements or measure effectiveness, and, as we have recommended, they have not established performance targets for many of the metrics agencies and inspectors general use to report on agencies' progress, making it more difficult to accurately assess the extent to which agencies are effectively securing their systems. Without more relevant metrics, OMB and DHS may lack adequate visibility into the federal government's information security posture. We recommend that the Director of the Office of Management and Budget, in coordination with the Secretary of Homeland Security, take the following actions to enhance the usefulness of the annual FISMA reports and to provide additional insight into agencies' information security programs: develop compliance metrics related to periodic assessments of risk and development of subordinate security plans, and develop metrics for inspectors general to report on the effectiveness of agency information security programs. We provided a draft of this report to OMB; DHS; the Departments of Commerce, Education, Energy, and Transportation; the Environmental Protection Agency; and the Small Business Administration. The audit liaison for OMB responded via e-mail on September 10, 2013, that OMB generally agreed with our recommendations but provided no other comments. In written comments provided by its Director of the Departmental GAO-Office of Inspector General Liaison Office (reproduced in appendix III), DHS concurred with both of our recommendations and identified actions it has taken or plans to take to implement them. For example, the department stated that it plans to work with OMB to include metrics specific to periodic assessments of risk and development of subordinate security plans, as well as to provide OMB with recommendations for metrics that inspectors general can use that focus on measuring the effectiveness of agency information security programs. According to DHS, these actions should be completed by the end of fiscal year 2014. The audit liaison for NIST, within the Department of Commerce, provided technical comments via e-mail on September 4, 2013, and we incorporated them where appropriate. The audit liaisons for the Departments of Education, Energy, and Transportation; the Environmental Protection Agency; and the Small Business Administration responded via e-mail that the agencies did not have any comments. We are sending copies of this report to the Director of the Office of Management and Budget, the Secretary of Homeland Security, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objective was to evaluate the extent to which the requirements of the Federal Information Security Management Act (FISMA) have been implemented, including the adequacy and effectiveness of agency information security policies and practices.
To evaluate federal agencies' implementation of FISMA requirements, we reviewed and analyzed the provisions of the act to identify agency, Office of Management and Budget (OMB), and National Institute of Standards and Technology (NIST) responsibilities for implementing, overseeing, and providing guidance for agency information security. To assist in assessing the adequacy and effectiveness of agencies' information security policies and practices, we reviewed and analyzed FISMA data submissions and annual FISMA reports, as well as information security-related reports for each of the 24 major federal agencies based on work conducted in fiscal years 2011 and 2012 by us, agencies, and inspectors general. We reviewed and summarized weaknesses identified in those reports using FISMA requirements as well as the security control areas defined in our Federal Information System Controls Audit Manual. Additionally, we analyzed, categorized, and summarized chief information officer and inspector general annual FISMA data submissions for fiscal years 2011 and 2012. Further, we compared weaknesses identified by inspectors general to the inspector general responses to the Department of Homeland Security (DHS)-defined metrics on the effectiveness of agency controls. To assess the reliability of the agency-submitted data we obtained via CyberScope, we reviewed supporting documentation that agencies provided to corroborate the data. We also conducted an assessment of the CyberScope application to gain an understanding of the data required, related internal controls, missing data, outliers, and obvious errors in submissions. We also reviewed a related DHS inspector general report that discussed its evaluation of the internal controls of CyberScope. In addition, we selected 6 agencies to gain an understanding of the quality of the processes in place to produce annual FISMA reports. To select these agencies, we sorted the 24 major agencies from highest to lowest by the total number of systems the agencies reported in fiscal year 2011; separated them into three equal-sized categories of large, medium, and small agencies; and then selected the 2 median agencies from each category. These agencies were the Departments of Education, Energy, Homeland Security, and Transportation; the Environmental Protection Agency; and the Small Business Administration. We conducted interviews and collected data from the inspectors general and agency officials from the selected agencies to determine their processes for ensuring the reliability of data submissions. Based on this assessment, we determined that the data were sufficiently reliable for our work. We also examined OMB and DHS FISMA reporting instructions and other guidance related to FISMA to determine the steps taken to evaluate the adequacy and effectiveness of agency information security programs. In addition, we interviewed officials from OMB, DHS's Federal Network Resilience Division, and NIST. We did not evaluate the implementation of DHS's FISMA-related responsibilities assigned to it by OMB. We conducted this performance audit from February 2013 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
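The agency-selection step lends itself to a short illustration. The following is a minimal sketch in Python of the approach described above: ranking the 24 major agencies by reported system count, splitting the ranking into three equal-sized groups, and taking the 2 median agencies from each group. The function name and any input data are placeholders for illustration, not the actual fiscal year 2011 submissions.

    # Minimal sketch of the selection method: rank agencies by reported
    # system count, split the ranking into three equal-sized groups
    # (large, medium, small), and take the 2 median agencies from each.
    def select_median_agencies(system_counts, groups=3):
        # system_counts: dict mapping agency name -> number of reported systems
        ranked = sorted(system_counts, key=system_counts.get, reverse=True)
        size = len(ranked) // groups      # 24 agencies -> 8 per group
        selected = []
        for g in range(groups):
            group = ranked[g * size:(g + 1) * size]
            mid = size // 2
            selected.extend(group[mid - 1:mid + 1])  # the 2 agencies at the median
        return selected

With 24 agencies, each group holds 8, and the sketch returns the 4th- and 5th-ranked agencies within each group, which corresponds to the median-2 selection described above.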
FISMA assigns a variety of responsibilities for federal information security to OMB, agencies, inspectors general, and NIST, which are described below. FISMA states that the Director of the Office of Management and Budget (OMB) shall oversee agency information security policies and practices, including: developing and overseeing the implementation of policies, principles, standards, and guidelines on information security; requiring agencies to identify and provide information security protections commensurate with the risk and magnitude of the harm resulting from the unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of an agency, or information systems used or operated by an agency, by a contractor of an agency, or by another organization on behalf of an agency; overseeing agency compliance with FISMA; and reviewing at least annually, and approving or disapproving, agency information security programs. FISMA also requires OMB to report to Congress no later than March 1 of each year on agency compliance with the requirements of the act. FISMA requires each agency, including agencies with national security systems, to develop, document, and implement an agency-wide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Specifically, FISMA requires information security programs to include, among other things: periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. In addition, agencies must produce an annually updated inventory of major information systems (including major national security systems) operated by the agency or under its control, which includes an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency.
FISMA also requires each agency to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. In addition, agency heads are required to report annually the results of their independent evaluations to OMB, except to the extent that an evaluation pertains to a national security system; in that case, only a summary and assessment of that portion of the evaluation needs to be reported to OMB. Under FISMA, the inspector general for each agency shall perform an independent annual evaluation of the agency's information security program and practices to determine the effectiveness of such program and practices. The evaluation should include testing of the effectiveness of information security policies, procedures, and practices of a representative subset of agency systems. In addition, the evaluation must include an assessment of compliance with the act and any related information security policies, procedures, standards, and guidelines. For agencies without an inspector general, evaluations of non-national security systems must be performed by an independent external auditor. Evaluations related to national security systems are to be performed by an entity designated by the agency head. Under FISMA, the National Institute of Standards and Technology (NIST) is tasked with developing, for systems other than national security systems, standards and guidelines that must include, at a minimum: (1) standards to be used by all agencies to categorize all their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. NIST must also develop a definition of and guidelines for the detection and handling of information security incidents. The law also assigns other information security functions to NIST, including: providing technical assistance to agencies on elements such as compliance with the standards and guidelines and the detection and handling of information security incidents; evaluating private-sector information security policies and practices and commercially available information technologies to assess potential application by agencies; evaluating security policies and practices developed for national security systems to assess their potential application by agencies; and conducting research, as needed, to determine the nature and extent of information security vulnerabilities and techniques for providing cost-effective information security. In addition, FISMA requires NIST to prepare an annual report on activities undertaken during the previous year, and planned for the coming year, to carry out its responsibilities under the act. In addition to the individual named above, Anjalique Lawrence (assistant director), Cortland Bradford, Wil Holloway, Nicole Jarvis, Linda Kochersberger, Lee McCracken, Zsaroq Powe, David Plocher, Jena Sinkfield, Daniel Swartz, and Shaunyce Wallace made key contributions to this report. FISMA requires the Comptroller General to periodically report to Congress on agency implementation of the act's provisions.
To this end, this report summarizes GAO's evaluation of the extent to which agencies have implemented the requirements of FISMA, including the adequacy and effectiveness of agency information security policies and practices. To do this, GAO analyzed its previous information security reports, annual FISMA reports and other reports from the 24 major federal agencies, reports from inspectors general, and OMB's annual reports to Congress on FISMA implementation. GAO also interviewed agency officials at OMB, DHS, NIST, and 6 agencies selected using the total number of systems the agencies reported in fiscal year 2011. In fiscal year 2012, the 24 major federal agencies had established many of the components of an information security program required by the Federal Information Security Management Act of 2002 (FISMA); however, they had only partially established others. FISMA requires each federal agency to establish an information security program that incorporates eight key components, and each agency inspector general to annually evaluate and report on the information security program and practices of the agency. The act also requires the Office of Management and Budget (OMB) to develop and oversee the implementation of policies, principles, standards, and guidelines on information security in federal agencies and the National Institute of Standards and Technology to develop security standards and guidelines. The extent to which agencies implemented security program components showed mixed progress from fiscal year 2011 to fiscal year 2012. For example, according to inspectors general reports, the number of agencies that had analyzed, validated, and documented security incidents increased from 16 to 19, while the number able to track identified weaknesses declined from 20 to 15. GAO and inspectors general continue to identify weaknesses in elements of agencies' programs, such as the implementation of specific security controls. For instance, in fiscal year 2012, almost all (23 of 24) of the major federal agencies had weaknesses in the controls that are intended to limit or detect access to computer resources. OMB and the Department of Homeland Security (DHS) continued to develop reporting metrics and assist agencies in improving their information security programs; however, the metrics do not evaluate all FISMA requirements, such as conducting risk assessments and developing security plans; are focused mainly on compliance rather than the effectiveness of controls; and in many cases did not identify specific performance targets for determining levels of implementation. Enhancements to these metrics would provide additional insight into agency information security programs.
To determine the extent to which the current system of appointing Reserve Bank directors effectively ensures that they are elected without discrimination on the basis of race, creed, color, sex, or national origin, and that, for some directors, they are elected with due but not exclusive consideration to the interests of agriculture, commerce, industry, services, labor, and consumers, as required by section 4 of the Federal Reserve Act, we reviewed the Reserve Banks' processes for identification, nomination, and selection of directors. We created a descriptive profile of the demographic characteristics, including race, gender, and industry, of Reserve Bank directors from 2006 through 2010. We used (1) the demographic characteristics of directors obtained from the Federal Reserve Board and (2) the demographic characteristics of executives who would likely meet the criteria for potential directors, using Equal Employment Opportunity Commission (EEOC) data. We determined whether the diversity trends of Reserve Bank directors are generally consistent with the trends illustrated by the Employer Information Report (EEO-1) data. The EEO-1 data represent the pool of potential candidates with the requisite skills and experience from which the Federal Reserve generally selects directors. To assess the reliability of the Federal Reserve Board data, we interviewed Federal Reserve Board staff about steps they took to maintain the integrity and reliability of the database. To assess the reliability of the EEO-1 data, we reviewed documentation related to the data and interviewed EEOC officials on the methods used to collect data and checks performed to ensure data reliability. We believe that these data are sufficiently reliable for the purposes of our analysis. Also, to obtain baseline information from all current directors on a cross section of high-level issues, we conducted a web-based survey of the 105 Reserve Bank directors who served for the full year during 2010. Of the 105 directors surveyed, 91 responded to the survey overall. However, the number of responses to individual questions varied. We collected and summarized additional information from these directors, such as their other board positions, prior employment, and education. For a full description of the methodology of the survey, see appendix II. To assess the extent to which Federal Reserve Banks' processes for identification, nomination, and selection of directors result in diversity, we reviewed documentation on the process and interviewed officials from the Federal Reserve Board and Reserve Banks. To examine whether there are actual or potential conflicts of interest created when certain directors of Reserve Banks are elected by member banks, we reviewed and summarized the selection procedures for Reserve Bank directors, and their roles and responsibilities identified in current Federal Reserve System documents and those included in the Federal Reserve Act. We surveyed all Reserve Bank directors who served for the full year during 2010 to collect their perceptions of their roles and responsibilities and to determine whether they are aware of any past or present conflict of interest. Also, we interviewed selected Reserve Bank directors and Reserve Bank officials from each Reserve Bank to collect information on directors' roles and responsibilities, any conflict of interest concerns and procedures for addressing the appearance of or actual conflicts, and potential changes to Reserve Bank governance.
Specifically, at each of the 12 Reserve Banks, we interviewed at least one director from each class (A, B, and C), all board and audit committee chairs, the president, the general counsel or ethics officer, and the corporate secretary. In addition, to identify any discussions of instances of potential or actual conflicts of interest during board meetings, we reviewed board minutes for each of the 12 Reserve Banks for the period of November 2007 to October 2010. To address the Reserve Bank directors' involvement in the establishment and operations of the Federal Reserve emergency programs, we leveraged our recent review of the Federal Reserve emergency programs, which was conducted under the Dodd-Frank Act. We reviewed relevant documents from each of the 12 Reserve Banks, including bylaws, procurement policies, and any policies for waivers to the Federal Reserve Board's policies on director eligibility, qualifications, and rotation. We also reviewed Reserve Bank board minutes to help determine the extent of the directors' involvement in any activities associated with the emergency programs and supervision and regulation matters. In addition, we interviewed a sample of directors and relevant Reserve Bank officials as noted earlier to determine the directors' involvement in the implementation and operation of the programs. To compare Reserve Bank governance practices with the practices of selected organizations, we reviewed literature on current best practices for governance within major financial institutions, analyzed similar institutions in other countries or the United States to evaluate best practices or alternative structures, and relied on the results of our work done for our other objectives. We examined how the Federal Reserve System's governance practices compare with relevant practices at selected foreign central banks, a self-regulatory organization, a government-sponsored enterprise, and several large bank holding companies. For the foreign central banks, we contacted officials at the central banks of Australia, Canada, the European Union, and the United Kingdom to obtain governance documents, and we analyzed governance policies and practices in order to compare the governance of the Reserve Banks with that of these foreign central banks. We spoke to academic researchers knowledgeable about central bank governance. We verified the accuracy of our analysis and interpretations of governance documents by requesting comments on the relevant draft sections from each of the central banks included in our review. We incorporated their comments as appropriate. For the self-regulatory organization and the government-sponsored enterprise, we identified and analyzed the relevant governance policies and practices of the Financial Industry Regulatory Authority (FINRA) and, for the cooperative system, the Federal Home Loan Banks (FHLBanks). To verify the accuracy of our analysis, we spoke with officials from these entities and obtained comments on the relevant sections of a draft of this report. Finally, for private corporations, we interviewed an industry group and some academic researchers knowledgeable about corporate governance, analyzed the governance practices of the 10 largest bank holding companies, and compared them with the governance policies and practices of the previously discussed organizations. We conducted this performance audit from July 2010 to July 2011 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Federal Reserve Act of 1913 established the Federal Reserve System as the country's central bank. The Federal Reserve Act made the Federal Reserve System an independent, decentralized bank to better ensure that monetary policy would be based on a broad economic perspective from all regions of the country. The Federal Reserve Board has defined the term "monetary policy" as the actions undertaken by a central bank, such as the Federal Reserve System, to influence the availability and cost of money and credit to help promote national economic goals. The Federal Reserve Act of 1913, as amended, gave the Federal Reserve System responsibility for setting monetary policy. The Federal Reserve System consists of three parts: the Federal Reserve Board, the Reserve Banks, and the FOMC. The Federal Reserve Board is a federal agency located in Washington, D.C., that is responsible for maintaining the stability of financial markets; supervising financial, bank, and thrift holding companies, state-chartered banks that are members of the Federal Reserve System, and the U.S. operations of foreign banking organizations; establishing monetary policy; and providing general supervision over the operations of the Reserve Banks. The top officials of the Federal Reserve Board are the seven members of the Board of Governors, who are appointed by the President and confirmed by the U.S. Senate. Although the Federal Reserve Board is required to report to Congress on its activities, its decisions do not have to be approved by either the President or Congress. The Federal Reserve System is divided into 12 districts. Each district is served by a regional Reserve Bank. Most Reserve Banks have one or more branches, for a total of 24 branches (see fig. 1). Unlike the Federal Reserve Board, the Reserve Banks are not federal agencies. Each Reserve Bank is a federally chartered corporation with a board of directors and member banks that are stockholders in the Reserve Bank. The membership of each Reserve Bank board of directors is determined by a process established by statute that is intended to ensure that each bank board represents both the public and the member banks in its district. Under the Federal Reserve Act, Reserve Banks are subject to the general supervision of the Federal Reserve Board. The Federal Reserve Board has delegated some of its supervisory responsibilities to the Reserve Banks, such as responsibility for examining bank and thrift holding companies and state member banks under rules, regulations, and policies established by the Federal Reserve Board. The Federal Reserve Act authorizes the Reserve Banks to make discount window loans, in accordance with the rules and regulations prescribed by the Federal Reserve Board, and to execute monetary policy operations at the direction of the FOMC. The Reserve Banks also provide payment services, such as check clearing and wire transfers, to depository institutions, the Treasury, and government agencies. The provision of these payment services to depository institutions is subject to the full cost recovery provisions of the Monetary Control Act of 1980.
Reserve Banks also provide cash services to financial institutions and serve as the Treasury's fiscal agent. The FOMC plays a central role in the execution of the Federal Reserve System's monetary policy mandate to promote price stability and maximum employment. The FOMC consists of the seven members of the Board of Governors, the president of the Federal Reserve Bank of New York, and four other Reserve Bank presidents who serve on a rotating basis. All presidents participate in FOMC deliberations even though not all vote. The FOMC is responsible for directing open market operations to influence the total amount of money and credit available in the economy. The Federal Reserve Bank of New York (FRBNY) carries out FOMC directives on open market operations by engaging in purchases or sales of certain securities, typically U.S. government securities, in the secondary market. The Federal Reserve Board and the Reserve Banks are subject to an annual independent audit of their financial statements by a public accounting firm. In addition, each Reserve Bank has an internal auditor who is responsible to the Reserve Bank's board of directors. The Federal Reserve Board's Division of Reserve Bank Operations and Payment Systems (RBOPS) performs periodic examinations of 4 of the 12 Reserve Banks each year on a range of oversight activities and assesses compliance with Federal Reserve Board policies. The Federal Reserve Board's Office of Inspector General also conducts audits, reviews, and investigations related to the Federal Reserve Board's programs and operations, including those programs and operations that have been delegated to the Reserve Banks by the Federal Reserve Board. Finally, we may conduct a number of reviews each year to look at specific aspects of the Federal Reserve System's activities. All national banks—U.S. commercial banks that are chartered by the federal government through the Office of the Comptroller of the Currency—are required to be members of the Federal Reserve System. Banks chartered by the states may elect to become members of the Federal Reserve System if they meet certain requirements set by the Federal Reserve Board. Member banks must subscribe to stock in their Reserve Bank in an amount that is related to the size of the member bank. Holding the stock does not confer any rights of ownership, and a member bank may not sell or trade its Reserve Bank stock. Member banks receive a statutory fixed annual dividend of 6 percent on their stock and may vote for six of the nine members of the board of directors of the Reserve Bank. Governance can be broadly described as the process of providing leadership, direction, and accountability in fulfilling an organization's mission, meeting objectives, and providing stewardship of an organization's resources. Because the Reserve Bank boards are supervised by the Federal Reserve Board and their authority is constrained by both provisions of the Federal Reserve Act and guidelines of the Federal Reserve Board, among other things, they are not typical corporate boards of directors. However, Reserve Bank boards are the focal points of the Reserve Banks' governance framework, which also includes the broad oversight of the Federal Reserve Board. The Federal Reserve Act established nine-member boards of directors to govern each of the 12 Reserve Banks. Each board is split equally into three classes.
Class A directors represent the member banks, while Class B and C directors represent the public with, as required by the Federal Reserve Act, "due but not exclusive consideration to the interests of agriculture, commerce, industry, services, labor, and consumers." As required by the Federal Reserve Act, six of the nine directors, Class A and Class B, are elected by the member banks, and the remaining three, the Class C directors, are appointed by the Federal Reserve Board. Figure 2 illustrates how the directors of the Reserve Banks are chosen and their roles in appointing Reserve Bank presidents. The process for selecting the boards of directors of the Reserve Banks is outlined in the Federal Reserve Act. The Federal Reserve Act requires that the member banks of each Reserve Bank district be classified into three groups consisting of banks of similar capitalization—small, medium, and large. Each group is responsible for one of the three Class A directorships and one of the three Class B directorships. Each member bank in the group may nominate a candidate for an open directorship within its group. Once nominations close, each member bank in the group receives the list of nominees and a ballot to vote in the election. Directors serve 3-year terms, and the terms are staggered so that one position in each class becomes vacant every year. Although directors can be reelected for an indefinite number of terms, the Federal Reserve Board recommends that the Reserve Banks follow a limit of two consecutive appointments for a given director. The Federal Reserve Act does not prescribe how the Federal Reserve Board is to identify and appoint the candidates for Class C directors. Pursuant to the Federal Reserve Act, one Class C director, who must be a person of "tested banking experience," is designated by the Federal Reserve Board as chairman of the Reserve Bank board of directors, and the Federal Reserve Board also designates another Class C director as deputy chairman. The Federal Reserve Act provides that the chairman of the board, like all Class C directors, cannot be an officer, director, employee, or stockholder of any bank. Federal Reserve Board policy extends this limitation to prevent affiliations by Class B and Class C directors with any thrift, credit union, bank holding company, foreign bank, or other similar institution or affiliate. Additionally, the Federal Reserve Act states that Class C directors must have been residents of the district of their Reserve Bank for 2 years prior to appointment. As with the election of Class A and B directors, the appointment of Class C directors is staggered so that one director position becomes vacant every year. The Federal Reserve Board has established a policy of appointing a given Class C director to no more than two terms. See table 1 for a detailed description of the requirements for selection of all three classes of directors. Nine of the 12 Reserve Banks also have branch offices, which provide banking services and in some cases house supervision employees. The branches are subject to the governance of the Reserve Banks and their boards of directors, as well as to oversight from the Federal Reserve Board. Twenty-three of the 24 branches have boards of seven directors, four of whom are appointed by the Reserve Bank and three of whom are appointed by the Federal Reserve Board. One branch (Helena) has a board of five directors, three of whom are appointed by the Reserve Bank and two of whom are appointed by the Federal Reserve Board.
The chair of the branch office board is selected from the members appointed by the Federal Reserve Board. This report focuses primarily on the governance practices at the Reserve Banks, not branch offices. The three principal functions of Reserve Bank directors are to (1) participate in the formulation of national monetary and credit policies; (2) oversee the general management of the Reserve Bank, including its branches; and (3) act as a link between the Reserve Bank and the community. The Reserve Bank boards have the ability to influence the nation's monetary policy in three primary ways: (1) by providing input on economic conditions to the Reserve Bank president, which is used by some presidents in their reports to the FOMC about regional economic conditions; (2) by participating in the establishment, every 2 weeks, of a discount rate recommendation sent to the Federal Reserve Board for its consideration; and (3) for the Class B and C directors, by appointing the Reserve Bank president and first vice president. Beige Books: The Reserve Banks publish a Summary of Commentary on Current Economic Conditions, informally known as the Beige Book, eight times per year. The Beige Book is a compilation of reports on current district economic conditions filed by each Reserve Bank drawing on its network of district contacts. Reserve Bank directors' observations on the economy may be included in the Reserve Bank's Beige Book report. The Reserve Banks take turns summarizing economic information for the Beige Book and writing the report's summary. The FOMC and the Federal Reserve Board use the Beige Books—which are published 2 weeks before each FOMC meeting—to inform their decisions on discount rates and the federal funds rate target. Discount rate: The Federal Reserve Act authorizes each Reserve Bank to establish, subject to review and determination by the Federal Reserve Board, discount rates. The statute provides that each Reserve Bank shall establish such rates every 14 days, or more often if deemed necessary by the Federal Reserve Board. Reserve Bank directors typically conduct a conference call every 14 days, unless they are holding an in-person meeting, to vote on the discount rate. The rate established by the Reserve Bank must be approved by the Federal Reserve Board. Reserve Bank president: Each Reserve Bank board's Class B and Class C directors appoint, with the approval of the Federal Reserve Board, the president of their Reserve Bank. The president of the Reserve Bank uses the information he or she gathers from the Reserve Bank's board of directors, research department, and a variety of other sources to influence monetary policy through his or her position on the FOMC. The FOMC sets the federal funds rate target and monitors and directs the open market operations necessary to achieve that rate. All 12 Reserve Bank presidents attend and participate in deliberations at each meeting of the FOMC. As noted earlier, the president of FRBNY has a permanent voting position, and the other 11 presidents rotate, on an annual basis, among four voting positions on the FOMC. Figure 3 illustrates how the members of the FOMC are selected. The recent financial crisis that began around mid-2007 was the most severe that the United States has experienced since the Great Depression. A number of financial institutions were threatened with failure, and some failed.
The crisis also affected businesses and individuals, who found it increasingly difficult to obtain credit as cash-strapped banks held on to their assets. By late summer of 2008, the potential ramifications of the financial crisis included the continued failure of financial institutions, increased losses of individual wealth, reduced corporate investment, and further tightening of credit that would exacerbate the global economic slowdown that was beginning to take shape. Between late 2007 and early 2009, the Federal Reserve Board created more than a dozen new emergency programs to stabilize financial markets and authorized the Reserve Banks to provide financial assistance to avert the failures of a few individual institutions. In many cases, the decisions by the Federal Reserve Board, the FOMC, and the Reserve Banks about the authorization, initial terms, and implementation of the Federal Reserve System's emergency assistance were made over the course of only days or weeks, as the Federal Reserve Board sought to act quickly to address rapidly deteriorating market conditions. FRBNY implemented most of these emergency activities under authorization from the Federal Reserve Board. (See app. I for more information on the emergency programs and the Reserve Banks' involvement in their implementation.) According to the U.S. Census Bureau, the U.S. population has become more racially and ethnically diverse in the last 10 years. Between 2000 and 2010, the Asian population experienced the fastest rate of growth and the white population experienced the slowest rate of growth. In the 2010 Census, 97 percent of all respondents (299.7 million) reported only one race. The largest group reported was white (223.6 million), accounting for 72 percent of all people living in the United States. The African-American population was 38.9 million and represented 13 percent of the total population. There were 2.9 million respondents who indicated American Indian and Alaska Native (0.9 percent). Approximately 14.7 million (about 5 percent of all respondents) identified their race as Asian. In 2010, there were 50.5 million Hispanics in the United States, composing 16 percent of the total population. Between 2000 and 2010, the Hispanic population grew by 43 percent, rising from 35.3 million in 2000, when this group made up 13 percent of the total population. The non-Hispanic population grew relatively more slowly over the decade, at about 5 percent. The Federal Reserve Act of 1913 as enacted did not include demographic diversity requirements. The act specified that the three Class A directors were to be chosen by and be representative of the stockholding banks. Further, the three Class B directors were to be actively engaged in their district in commerce, agriculture, or some other industrial pursuit, and the three Class C directors were appointed by the Federal Reserve Board. The Federal Reserve Reform Act of 1977 amended the Federal Reserve Act to add the present antidiscrimination requirements and to expand the economic diversity provisions to agriculture, commerce, industry, services, labor, and consumer representation for Class B and C directors. According to the legislative history of the Reform Act, these changes were made to help broaden Reserve Bank board representation to include women and minorities, as well as industries and other interest groups.
The Federal Reserve Board maintains a database of current and past directors that is used to track demographic information voluntarily provided by directors. Information in this database is entered by the individual Reserve Banks and managed by the Federal Reserve Board. We analyzed the demographic characteristics of bank (head office) and branch directors who served at some time during 2006 through 2010 to present a profile of director demographic characteristics. Figure 4 shows the representation of head office directors from 2006 through 2010 using Federal Reserve Board data. Over the 5-year period, we found that the representation of women and minority head office directors has generally remained limited. For example, in 2006, minorities accounted for 13 of 108 director positions, and in 2010 they accounted for 15 of 108 director positions. More specifically, in 2010, head office directors comprised 78 white men, 15 white women, 12 minority men, and 3 minority women. We also analyzed the total number of female and minority directors serving from 2006 through 2010 by class. As shown in figure 4, Class B and Class C directors were more diverse in gender, race, and ethnicity than Class A directors. For example, of the 202 directors serving from 2006 through 2010, 7 Class A directors were female, while there were more than twice as many female directors in each of the other two classes (16 Class B and 16 Class C female directors). Furthermore, there were 3 minority Class A directors, compared with 14 minority Class B and 9 minority Class C directors. Several Reserve Bank officials we spoke with told us that Class B and Class C directors are a source of both economic and demographic diversity on Reserve Bank boards. Figure 5 shows the representation of branch directors from 2006 through 2010. Over the 5-year period, we found that the representation of women and minority branch directors has also generally remained limited. For example, in 2006, minorities accounted for 40 of 182 director positions, and in 2010, they accounted for 30 of the 164 positions. More specifically, in 2010, branch directors comprised 97 white men, 37 white women, 22 minority men, and 8 minority women. The data show that labor and consumer groups are less represented than other industry groups on both head office and branch boards. As shown in figure 4, from 2006 through 2010, 5 of the 202 head office directors served as consumer representatives and 6 of the 202 head office directors served as labor representatives. As shown in figure 5, from 2006 through 2010, 11 of the 309 branch directors served as consumer representatives and 4 of the 309 branch directors served as labor representatives. The Federal Reserve Board has encouraged Reserve Banks to recruit directors from consumer and labor organizations. For example, in a February 2010 memo to Reserve Bank presidents on director recruitment, the Federal Reserve Board listed recruiting leaders from these two industry groups as a "high priority." Despite these efforts, two Reserve Bank officials we spoke with said recruiting consumer and labor representatives is a challenge because many of them are politically active, and the Federal Reserve Board policy that restricts a director's political activity would generally require them to give up such activities while serving on the board. As shown in figure 6, Federal Reserve Board data show that representation of minority and female directors generally varied somewhat across districts.
For example, of the 16 head office directors serving from 2006 through 2010 at both the Federal Reserve Bank of Dallas and the Federal Reserve Bank of Kansas City, 2 were women, and of the 18 head office directors serving from 2006 through 2010 at the Federal Reserve Bank of Boston, 5 were women. One Reserve Bank corporate secretary we spoke with said that it was difficult to recruit diverse candidates within his district because of a lack of overall diversity in the region. To obtain information from all current directors on a cross section of high-level issues, we conducted a web-based survey of the 105 Reserve Bank directors who served for the full year during 2010. We collected and summarized additional demographic information for 2010 directors, such as their prior work experience, education, and other board positions. Reserve Bank directors responding to the survey typically had experience in the financial industry, and almost all served on a variety of other boards at the time of our survey. At least 56 had some financial industry experience. After the financial industry, the most frequently reported work experiences by industry were manufacturing; professional, scientific, and technical services; retail trade; and real estate and rental leasing. The vast majority of the directors who responded to the survey reported that they had completed a bachelor's degree. More specifically, over half of the directors responding to the survey (55) reported that they had completed some type of advanced degree, such as a master's, juris doctor, or doctorate. In 2010, 86 of the 91 Reserve Bank directors responding to our survey served on a variety of nonprofit, private, and public company boards. For example, directors held board positions at public and private universities; for-profit companies such as Loews Corporation, Safeway, Inc., and Energizer Holdings; and nonprofit organizations such as the Ford Foundation and Ronald McDonald House Charities. We analyzed EEOC's EEO-1 data for employers with 100 or more employees from 2007 through 2009. The EEO-1 data provide information on racial/ethnic and gender representation for various occupations within a broad range of industries. We used the EEO-1 "executive and senior level officials and managers" job category as the basis for our analysis because this is the category of employees from which Reserve Banks would most likely recruit directors. EEOC defines the job category of executive and senior level officials and managers as individuals residing in the highest levels of organizations who plan, direct, and formulate policies and provide overall direction for the development and delivery of products and services. Figure 7 provides EEO-1 data for individual minority groups and illustrates their trend in representation at the management level, which varied by group. As shown in figure 7, among all EEO-1 reporters, senior management representation by whites and Asians increased from 2007 through 2009. For example, whites accounted for 87.4 percent of all industry senior management positions in 2007 and 88.6 percent in 2009. While representation by Asians also increased during this period, representation by African Americans and Hispanics in senior management decreased steadily. For example, Hispanics accounted for 4.5 percent of all industry senior management positions in 2007 and 3.5 percent in 2009. Representation for "Other" races remained constant from 2007 through 2009.
Figure 7 also compares race, ethnicity, and gender between the EEO-1 and Federal Reserve Board datasets. EEO-1 data show that the pool of senior managers, a possible pipeline for potential Federal Reserve directors, has limited diversity. For example, minorities accounted for 12.6 percent of all senior management positions in 2007 and 11.4 percent in 2009. Similarly, minorities accounted for 12.4 percent of Federal Reserve directors in 2009. As shown in figure 8, diversity was limited among senior-level management in the commercial banking industry. Because Class A directors are nominated and elected by the member banks in each Federal Reserve district to represent the stockholding banks, they are generally officers or directors of a member commercial bank. EEO-1 data show that, on average, the pool of senior managers in the commercial banking industry, the source from which Class A director candidates are generally drawn, is less diverse in terms of race and ethnicity than senior management in all other industries combined. In 2009, the percentage of senior management positions held by minorities averaged 9.6 percent for commercial banking institutions, compared with 11.5 percent for all other industries combined. However, the average percentage of positions held by women was relatively consistent between commercial banking institutions and all other industries, at 29.0 percent and 28.3 percent, respectively. Reserve Bank officials said they generally focus their search on senior executives. To explore whether Reserve Banks expanding their search to include nonexecutives would increase diversity, we spoke with officials and directors about their views on this matter. Several Reserve Bank executives and directors told us that having senior executives on the board of directors helps elevate the stature of the board. In addition, they said that individuals working at the top of their organization may have a broader view of how their industry is being affected by the economy. On the other hand, one Reserve Bank official told us that he felt looking below the executive level for potential directors was important. Further, at one Reserve Bank, the corporate secretary told us the bank actively looks for directors who may not be senior-level executives in an attempt to increase diversity. At another Reserve Bank, the corporate secretary stated that the bank has had nonexecutives serve on the board both currently and in the past. In previous work on diversity in the financial services industry, we found that individuals holding positions one level below senior management were more diverse than senior management. According to this work, EEOC data showed that management-level representation by minority women and men generally increased from 11.1 percent to 17.4 percent from 1993 through 2008. However, these EEOC data overstated minority representation at senior management levels because the category included midlevel management positions, such as assistant branch manager, that may have greater minority representation. In 2008, EEOC reported revised data for senior-level positions only, which showed that minorities held 10 percent of such positions, compared with 17.4 percent of all management positions. This suggests that by broadening their pools of potential candidates below the executive level, Reserve Banks may be able to attract director candidates with potentially more diverse backgrounds and perspectives on the economy.
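The percentage comparisons above reduce to a simple computation over EEO-1-style counts. The following is a minimal sketch in Python; the category counts are placeholders, scaled so that the result matches the 11.4 percent figure reported for 2009, and are not actual EEO-1 data.

    # Illustrative computation of the minority share of "executive and
    # senior level officials and managers" positions. Counts are
    # placeholders, not actual EEO-1 data.
    def minority_share(counts):
        # counts: dict mapping race/ethnicity category -> senior managers
        total = sum(counts.values())
        minority = total - counts.get("White", 0)
        return 100.0 * minority / total

    eeo1_2009 = {"White": 886, "African American": 30, "Hispanic": 35,
                 "Asian": 40, "Other": 9}
    print("Minority share: %.1f%%" % minority_share(eeo1_2009))  # prints 11.4%

The same computation, applied separately to the commercial banking industry and to all other industries combined, underlies the 9.6 and 11.5 percent figures discussed above.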
We also analyzed EEO-1 data by Federal Reserve district to determine district-level trends in senior management across all industries. As shown in figure 9, certain Federal Reserve districts' territories are somewhat more diverse than others at the senior management level. For example, in 2009, the percentage of senior management positions held by minorities ranged from a high of 18.7 percent within the Federal Reserve Bank of San Francisco's territory to a low of 4.0 percent within the Federal Reserve Bank of Minneapolis's territory, indicating that diversity among senior managers varies by district. Reserve Banks select candidates to fill director vacancies based upon criteria in the Federal Reserve Act and guidance from the Federal Reserve Board. The act provides requirements for the nomination and election of directors. It requires that member banks of each district be classified into three groups consisting of banks of similar capitalization—small, medium, and large. The member banks in each group nominate and elect one Class A director to represent that group's banks and one Class B director to represent the public. After the candidates are identified and a list of their names is forwarded to the member banks, each bank may cast one vote for a Class A director and one vote for a Class B director. Class C directors, who also represent the public, are recommended by the Reserve Banks and appointed by the Federal Reserve Board. The Federal Reserve Board also provides guidance on director election and eligibility requirements in the Federal Reserve Administrative Manual (FRAM). Additionally, the act specifies that all directors shall be chosen without discrimination as to race, creed, color, sex, or national origin and that Class B and Class C directors who represent the public shall be elected "with due but not exclusive consideration to the interests of agriculture, commerce, industry, services, labor, and consumers." Each year the Federal Reserve Board provides a memorandum to Reserve Banks with priority objectives for the recruitment of individuals with independent and diverse views and potential sources from which to obtain diverse directors. In addition, it distributes a yearly report on the demographic and industry characteristics of directors to each of the Reserve Banks for their use as they seek to identify and consider potential candidates. Reserve Banks review the current demographics and areas of expertise of their boards when selecting candidates to fill director vacancies. At each Reserve Bank, the corporate secretary works collaboratively with the president and other senior bank staff to assess the demographics of the board and identify areas where additional representation may be needed. Several Reserve Bank officials with whom we spoke told us they also consider geography and educational background as selection criteria, in addition to those outlined in the act. Three Reserve Bank officials told us that while they strive to find diverse candidates from a variety of industries, they also want to find people who have the skills and knowledge that will fill a gap in the board's existing knowledge and skill set. Additionally, Reserve Bank officials said they generally focus their search on senior executives, usually chief executive officers (CEO) or presidents. For example, of the 108 directors serving in 2010, 82 were the president or CEO of their company.
Further, we identified at least 23 who were employed by Fortune 500 companies in 2010. Three Reserve Bank officials we spoke with indicated that CEOs generally have a better familiarity with the economic and business community of their district than less senior managers. However, as discussed previously, while having executives on the boards may elevate the stature of the board, it may limit the diversity of the pool of potential candidates. Reserve Banks identify potential director candidates in a variety of ways and often use different recruitment methods. In general, Reserve Banks use a combination of personal networking and community outreach efforts to identify potential candidates. Two directors with whom we spoke told us they have recommended personal or business acquaintances they believe would be qualified to serve as directors. In addition, some Reserve Banks contact former directors for help in identifying possible candidates. Several Reserve Bank presidents and senior staff also attend community roundtables and forums to network and identify potential candidates. Several Reserve Banks use their advisory councils and branch boards as a source for potential candidates. One Reserve Bank official told us that the bank looks for candidates in a variety of industry lists, such as a Forbes magazine list of the most powerful women in business. At another Reserve Bank, member banks of the states represented in the district have agreed to a rotating nomination process for Class A and Class B directors to help ensure geographic representation. That is, when it is one particular state's turn to nominate a candidate, that state's banking association identifies potential candidates. At least one Class C director said he self-identified for the position and approached the Reserve Bank to express his interest in serving on the board when a vacancy came up. Some Reserve Banks also use nominating committees to identify qualified director candidates. These committees may do so by recruiting candidates to fill vacant seats on the board, reviewing candidates recommended by the Reserve Banks and others, or conducting inquiries into the backgrounds and qualifications of potential candidates. Five Reserve Banks use nominating committees to identify potential candidates. For example, one Reserve Bank has a nominating committee that considers candidates for the Federal Reserve Board-appointed Class C directorships. At another Reserve Bank, the nominating committee currently consists of three Class C directors and two Class A directors who meet to consider and make recommendations concerning board membership for all classes of directors. Guidelines in the FRAM require that nominating committees recommending Class A and Class B director candidates not include Reserve Bank officers and employees. Typically, a Reserve Bank identifies and vets potential candidates for Class A and B directorships and communicates their names and credentials to member banks for nomination and election. Reserve Banks generally submit an open call for nominations to the district's voting banks, even if they also have a nominating committee. Typically, the member banks will elect the Class A and B candidates identified and vetted by the Reserve Bank's nominating committee. However, member banks can nominate and elect a candidate who has not been vetted by the Reserve Bank. In such cases, the Reserve Bank informs the nominee of the eligibility requirements to determine whether the candidate is eligible to serve if elected.
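To make the balloting mechanics concrete, the following is a minimal sketch in Python of tallying one capitalization group's election as described above: each member bank casts one vote for a Class A candidate and one vote for a Class B candidate. The candidate names and ballots are hypothetical, and the sketch omits the additional procedural detail in the Federal Reserve Act.

    from collections import Counter

    # Minimal sketch of tallying one group's director election: each member
    # bank returns one ballot with a Class A choice and a Class B choice.
    # Names and ballots are hypothetical.
    def tally_group_election(ballots):
        # ballots: list of (class_a_choice, class_b_choice) tuples, one per
        # member bank that returned a ballot; None marks a vote not cast
        class_a = Counter(a for a, b in ballots if a)
        class_b = Counter(b for a, b in ballots if b)
        return class_a.most_common(1), class_b.most_common(1), len(ballots)

    ballots = [("Smith", "Jones"), ("Smith", "Lee"),
               ("Chan", "Jones"), (None, "Jones")]
    top_a, top_b, returned = tally_group_election(ballots)
    print(top_a, top_b, "ballots returned:", returned)

Comparing the number of ballots returned with the number of eligible member banks in a group, as in the sketch, is the kind of measure behind the turnout observation that follows.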
We found that member bank voter turnout in director elections was often low at some Reserve Banks. Although the Federal Reserve Act sets forth specific procedures and voting requirements for director elections, shareholder elections of Reserve Bank directors do not have a requirement for a minimum number of votes.

The Federal Reserve Board requires every Reserve Bank to provide a slate of at least two candidates for each Class C vacancy to the Federal Reserve Board for appointment. Typically, the Federal Reserve Board will appoint a candidate from the slate provided by the Reserve Banks to serve as a Class C director. However, the Federal Reserve Board may ask for further explanation of why Reserve Banks selected certain candidates or ask for alternative candidates.

Several Reserve Banks indicated that recruiting directors for several groups—specifically women, minority, and labor or consumer representatives—can be challenging. According to Reserve Bank officials, recruiting labor and consumer representatives is particularly difficult because many of them are politically active and Federal Reserve Board policy generally restricts a director's political activity. They also noted that Reserve Bank directors' roles and responsibilities can be time consuming and that compensation is low compared with that available in other opportunities to serve on private boards. Since the passage of the Sarbanes-Oxley Act in 2002, directors have been limited in the number of public company boards on which they can serve; as a result, Reserve Banks compete with private corporations for these directors' time, and the competition is especially intense for women and minority candidates. In addition, some individuals do not want to divest their stock holdings in the banking-related industry (which would be required for Class C directors) and also may not wish to refrain from political participation, according to Federal Reserve System officials.

As we have previously reported, many private and public organizations have recognized the importance of recruiting and retaining minority and women candidates for key positions as the U.S. workforce has become increasingly diverse. Some Reserve Bank officials told us that many organizations are searching for diverse directors to serve on their boards, and the Reserve Banks are competing with private corporations for the same small pool of qualified individuals. Although the policies of the private corporate boards we reviewed do not have specific requirements for board diversity, the Securities and Exchange Commission (SEC) recently began requiring companies to identify in their proxy statements to shareholders the steps taken to ensure the diversity of their boards. In our review of the proxy statements of the 10 largest bank holding companies in 2010, we found that companies generally did not list specific steps taken to identify and select diverse board members (see app. IV for a list of the 10 largest bank holding companies included in our review). Rather, they provided a broad statement about diversity. For example, one company stated, "The Committee evaluates diversity in a broad sense, recognizing the benefits of racial and gender diversity, but also considering the breadth of backgrounds, skills, and experiences that directors and candidates may bring to our Board." Having a demographically and economically diverse board strengthens an organization by bringing a wider variety of perspectives and approaches to the organization.
While officials at some Reserve Banks told us they consider candidates who are not chief-level executives (i.e., not chief financial officers, chief operating officers, or executive vice presidents), the vast majority of directors in 2010 held chief-level positions in their organizations. By broadening their pool of candidates, Reserve Banks may be able to improve diversity, and ultimately public representation, on the Reserve Bank boards. Such diversification can help ensure that the Federal Reserve System receives a broader spectrum of information useful for the formation and execution of monetary policy and the oversight of Reserve Bank operations.

From the creation of the Federal Reserve System, the Federal Reserve Act has required each Reserve Bank to include on its board Class A directors who represent the member banks, as each Reserve Bank is owned by the member banks in its district. While Class A directors are not required to be officers or employees of member banks, in practice, most Class A directors are officers or directors of member banks in the district. The requirement to have representatives of member banks creates an appearance of a conflict of interest because, as noted previously, the Federal Reserve System has supervisory authority over state-chartered member banks and bank holding companies. Conflicts of interest involving directors have historically been addressed through both federal law and Federal Reserve System policies and procedures, such as by defining roles and responsibilities and implementing codes of conduct to identify, manage, and mitigate potential conflicts. Nevertheless, directors' affiliations with financial firms and former directors' business relationships with Reserve Banks continue to pose reputational risks to the Federal Reserve System. When the Federal Reserve System played a key role in providing assistance to financial institutions during the 2007-2009 financial crisis, Reserve Bank board governance came under scrutiny because, among other things, a number of director-affiliated banks and nonbank financial institutions participated in the Federal Reserve System's emergency programs. Since then, Congress, the Federal Reserve Board, and the Reserve Banks have made a number of changes to the policies and procedures that address Reserve Bank governance. However, without more complete documentation of the directors' roles and responsibilities with regard to the supervision and regulation functions, as well as increased public disclosure of governance practices to enhance accountability and transparency, questions about Reserve Bank governance will remain.

The three classes of Reserve Bank directors have varying degrees of involvement in the financial services industry, and their affiliations with financial companies could create reputational risk for the Reserve Banks. In addition, relationships between current and former directors and interactions between former directors and the Reserve Banks could also raise questions about the independence of the directors and the actions of the Reserve Banks. Finally, questions about directors' involvement in the emergency programs authorized by the Federal Reserve Board during the financial crisis spurred allegations of conflicts of interest. However, as we reported in our July 2010 report on the emergency programs, the boards of directors generally were not directly involved in the development and implementation of those programs.
Federal Reserve Bank directors often serve on the boards of a variety of financial firms as well as those of nonprofit, private, and public companies. For example, in 2010, 86 of 91 Reserve Bank directors responding to our survey held board positions at public and private companies, public and private universities, and nonprofit organizations. As noted earlier, our survey indicated that most of the Reserve Banks have directors who have held positions at financial services firms or insurance companies as well as banks. This includes Class A directors, who are officials of banks that hold stock in the Reserve Bank, and Class C directors, who are required by the Federal Reserve Act to be persons of tested banking experience, a requirement that the Federal Reserve Board says has come to be interpreted as requiring familiarity with banking or financial services. In addition, as the financial services industry has evolved, more companies have become involved in financial services or otherwise interconnected with financial institutions. These changes have resulted in a few Class B and Class C directors who were previously employed by financial institutions or have served on their boards.

A recent example that raised questions about the nature of director affiliations with financial firms involved the then-FRBNY chairman in late 2008, who was a former chairman and a current board member and shareholder of the Goldman Sachs Group, Inc. (Goldman Sachs). As illustrated in figure 10, when the then-FRBNY chairman joined the FRBNY board as a Class C director in January 2008, Goldman Sachs was an investment bank outside the supervisory authority of the Federal Reserve System. However, in September 2008, in response to the unfolding financial crisis, Goldman Sachs applied for and received the Federal Reserve Board's approval to become a bank holding company. As a result, under Federal Reserve Board policy, the then-FRBNY chairman became ineligible to serve as a Class C director because he was then a director and stockholder of a bank holding company. Without consulting the full FRBNY board, FRBNY sought a waiver to allow the then-FRBNY chairman to continue to serve on the board. According to an FRBNY official, FRBNY sought the waiver in October 2008 for a number of reasons. First, finding a new chairman during the financial crisis would have been difficult, given that FRBNY already had one director vacancy on its board at the time. Further, the event leading to the need for a waiver was unforeseen. In late November 2008, an additional concern was raised: the then-FRBNY president was expected to be nominated as Secretary of the Treasury, raising the potential that FRBNY would be searching for both a new president and a new chairman simultaneously, with the added complication that, as chair of the FRBNY board, the then-FRBNY chairman would be heading the search committee for a new president. The Federal Reserve Board granted the waiver in January 2009 on the basis of these considerations. However, the Federal Reserve Board was unaware that the then-FRBNY chairman had purchased additional shares in Goldman Sachs through an automatic stock purchase program. The then-FRBNY chairman resigned in May 2009. As discussed later, on the basis of this waiver experience, the Federal Reserve Board decided to develop and institute a formal policy governing the treatment of situations in which Class B or C directors' stockholdings unexpectedly become impermissible.
This policy has since been adopted. FRBNY also changed its own policy to require that waivers be discussed by the board of directors before being submitted to the Federal Reserve Board. Federal Reserve Board officials told us that after receiving the waiver request from FRBNY, they contacted the other Reserve Bank boards to determine whether any other directors held stock in companies that had recently converted to bank holding companies. According to these officials, this review identified a director from the Federal Reserve Bank of Minneapolis who held less than $100,000 in stock in Merrill Lynch & Co., Inc., an investment bank that had been acquired by Bank of America, a bank holding company. This director remained on the board and was granted a waiver by the Federal Reserve Board, but he nonetheless divested the shares in January 2009.

Another situation that raised questions about affiliations involved an FRBNY Class B director. The director was the Chief Executive Officer of Lehman Brothers Holdings, Inc. (Lehman), an investment bank that experienced significant financial problems during the unfolding financial crisis and ultimately failed. An FRBNY official said that he met with the FRBNY president and chairman, without the full board, about Lehman's deteriorating financial condition and concluded that FRBNY faced reputational risk regardless of the action taken. Specifically, it was concluded that although the board of directors was not involved in approving and implementing the emergency programs, a recusal from board meetings by the Lehman director might not have managed the appearance of a conflict, and a public resignation might have sent a negative signal to the market and hastened the collapse of the firm. Under Federal Reserve Board practice, Reserve Bank directors affiliated with troubled financial institutions are encouraged to resign or risk removal from the board. Federal Reserve System officials said that the director voluntarily resigned before Lehman filed for bankruptcy.

Although directors' affiliations with financial firms do not necessarily create conflicts of interest, they may complicate the directors' relationships with the Reserve Banks and increase public scrutiny of them. One issue relates to directors' communications with Reserve Bank officials in their roles as senior executives of their companies. These situations have raised questions as to whether directors have greater access to Reserve Bank officials than other financial institution officials and whether they have influence over matters that may affect banks or institutions with which they are affiliated. Reserve Bank officials with whom we spoke said that there are no restrictions on directors communicating with Reserve Bank staff about their respective banks or holding companies in their capacity as officials of those institutions, nor are there restrictions on conversations about the financial markets. However, according to Federal Reserve Board officials, members of Reserve Bank boards of directors are not granted special access to supervisory staff, and it has been the practice of the Federal Reserve Board and the Reserve Banks to restrict directors' involvement in supervision issues. Further, Reserve Bank officials said that requests from other financial institutions to meet with Reserve Bank staff are processed in the same manner as those from the directors.
As discussed later, the financial crisis highlighted situations in which directors were in contact with Reserve Bank staff in their capacity as representatives of their financial institutions and as market participants. After completing their terms, directors who had represented member banks or who have affiliations with other financial institutions may maintain contact with Reserve Bank officials for various reasons. FRBNY officials said that the Reserve Banks' management of such communications may help safeguard against improprieties. For example, during the 2008 financial crisis, the company of a former FRBNY director was negotiating with FRBNY regarding assets the Reserve Bank had acquired when it extended credit against the assets of Bear Stearns Companies, Inc. The former director felt that there had been a miscommunication and contacted a number of FRBNY staff he knew to discuss the issue. The director's preexisting relationship with FRBNY raised questions about the appropriateness of FRBNY's actions in its negotiations with the former director's firm. Recently, FRBNY implemented a procedure to document contacts involving directors by reporting calls and their content in a memo to the chairman of the board's Audit and Operational Risk Committee. Reserve Bank officials said that many of the Reserve Banks maintain programs to keep in touch with former directors. These can be formal programs, such as annual holiday functions, or informal ways to continue to seek former directors' views on the economy and their industries. Reserve Bank officials described these contacts as "unobjectionable." A former Federal Reserve Board governor with whom we spoke also thought that these contacts were appropriate. As discussed later, indirect connections between directors' firms and Reserve Banks when the firms used the emergency programs or acted as service providers have also raised questions.

The Federal Reserve Board and, in some cases, the FOMC authorized the creation and modification of most of the emergency programs under authorities granted by the Federal Reserve Act. Although a number of Reserve Bank directors were affiliated with institutions that borrowed from the emergency programs, we did not find evidence that Reserve Bank boards of directors participated directly in making any decisions about authorizing, setting the terms of, or approving a borrower's participation in the emergency programs. Our review also found no evidence that Reserve Bank directors received nonpublic information on the emergency programs. A review of minutes from the 12 Reserve Banks' board meetings during the unfolding crisis revealed that discussions of emergency programs during board meetings appeared to have occurred after the programs had been publicly announced. Further, presentations by Reserve Bank staff generally covered explanations of the related emergency lending authority, administration of the programs, descriptive information about the programs' operations and risks, and the impact on the Reserve Banks' balance sheets. Moreover, Federal Reserve Board officials and Reserve Bank directors from all 12 Reserve Banks with whom we spoke told us that the Reserve Bank boards did not play a role in the creation or implementation of the emergency programs. Federal Reserve Board officials also pointed out that all Reserve Bank directors are prohibited from disclosing nonpublic information related to the programs and that such disclosures may risk violating insider trading laws.
While all Reserve Banks implemented the Term Auction Facility, FRBNY implemented the majority of the emergency programs. A number of FRBNY's directors played a limited oversight role, as prescribed in a written Audit and Operational Risk Committee (AORC) protocol stating that the directors' oversight was focused on operational risks. For example, according to FRBNY officials, FRBNY staff periodically briefed the committee on the composition of an asset portfolio that was created to assist Bear Stearns when it was near failure, to help ensure that the directors were aware of how the bank was managing certain high-risk assets. FRBNY has five directors on the audit committee, and during the financial crisis, at least one Class A director served on this committee at any given time. According to FRBNY officials, to help ensure that one class of directors does not have undue influence, FRBNY strengthened its governance structure by revising its AORC charter to permit no more than two of the five committee members to be Class A directors. Although implemented after the unwinding of many of the emergency programs, the enhanced standards helped mitigate actual and apparent director conflicts by ensuring that Class A directors are not a majority on the AORC. Appendix III provides more information on the Reserve Bank committees.

As mentioned earlier, in their role as market participants, some FRBNY directors were consulted by FRBNY management and staff as certain emergency facilities were being created. According to FRBNY officials, a director providing information to FRBNY management and staff in his or her role as chief executive officer of an institution does not equate to "participating personally and substantially"—as defined by 18 U.S.C. § 208, discussed below—because the director is not playing a direct role with respect to approving a program or providing a recommendation. According to FRBNY officials, FRBNY's Capital Markets Group contacted representatives from primary dealers, commercial paper issuers, and other institutions to gain a sense of how to design and calibrate some of the emergency programs. For example, FRBNY officials said that General Electric Company (General Electric), whose chief executive officer was serving as a Class B director at the time, was one of the largest issuers of commercial paper and was one of the companies FRBNY consulted when creating the emergency program to assist the commercial paper market. FRBNY officials said they contacted institutions for this purpose irrespective of whether one of FRBNY's directors was affiliated with the institution.

Some of the institutions that borrowed from the emergency programs had senior executives and stockholders who served on Reserve Banks' boards of directors. These relationships contributed to questions about Reserve Bank governance and also raised concerns about conflicts of interest. We identified at least 18 former and current Class A, B, and C directors from 9 Reserve Banks who were affiliated with institutions that used at least one emergency program. Of these, 11 Class A directors who served between 2008 and 2010 worked for member banks that used an emergency program, and 2 Class B directors who served between 2008 and 2010 worked for companies that used an emergency program. Similarly, one Class C director who served between 2008 and 2009 was affiliated with a company that used at least one program.
In addition, 4 former Class A directors who served between 2006 and 2007 worked for companies that used the emergency facilities; together with the 14 directors described above, they account for the 18 directors we identified. The Term Auction Facility was the most commonly used facility. According to Federal Reserve Board officials, the Federal Reserve Board allowed borrowers to access its emergency programs only if they satisfied publicly announced eligibility criteria. Thus, Reserve Banks granted access to borrowing institutions only if those institutions satisfied the eligibility criteria, regardless of whether the institution was affiliated with a Reserve Bank director. As we reported in our July report, our analysis did not find evidence indicating a systemic bias toward favoring one or more eligible institutions. While some institutions that borrowed from these programs were affiliated with a Reserve Bank director, these institutions were subject to the same terms and conditions as those that had no such affiliation. As another example, the Chief Executive Officer of JP Morgan Chase & Co. (JP Morgan Chase) served on the FRBNY board of directors at the same time that his bank participated in various emergency programs and served as one of the clearing banks for emergency lending programs. According to Federal Reserve Board officials, only two entities, one of which is JP Morgan Chase, offer services as clearing banks for triparty repurchase agreements, and both banks served as clearing banks for the emergency programs. Similarly, Lehman's Chief Executive Officer served on the FRBNY board while Lehman's broker-dealer subsidiary participated in emergency programs such as the Primary Dealer Credit Facility.

Having Class A directors, who represent member banks, and Class B directors, who are elected by member banks, as required by the Federal Reserve Act, creates an appearance of a conflict of interest. This is because Class A or B directors might own stock in, or Class A directors might work for, banks that are supervised by the Reserve Bank while also overseeing aspects of the Reserve Bank's operations, including the bank president's evaluation and salary, as well as personnel decisions for the supervision and regulation function. In addition, Class B directors are involved in the president selection process. In turn, the president oversees the supervision and regulation function, which regulates the member banks that vote for the Class A and B directors. The president also may serve on the FOMC.

Conflicts of interest involving directors have historically been addressed through both federal law and Federal Reserve System policies and procedures. First, individuals serving on the boards of directors of the Reserve Banks are generally subject to the federal criminal conflict-of-interest restrictions in section 208 of title 18 of the U.S. Code and its implementing regulations. 18 U.S.C. § 208 generally prohibits Reserve Bank directors from participating personally and substantially in their official capacities in any matter in which, to their knowledge, they have a financial interest, if the particular matter will have a direct and predictable effect on that interest. The Office of Government Ethics regulations implementing 18 U.S.C. § 208 include provisions concerning divestiture, disqualification (recusal), and waivers or exemptions from disqualification.
The regulations also provide that Reserve Bank directors may participate in specified matters, even though they may be particular matters in which the directors have a disqualifying financial interest. These matters concern the establishment of rates to be charged to member banks for all advances and discounts; consideration of monetary policy matters and other matters of broad applicability; and approval or ratification of extensions of credit, advances, or discounts to healthy depository institutions or, in certain conditions, to depository institutions in hazardous condition. As the rulemaking for these exemptions notes, because of their ties to the financial services industry and their communities, it is likely that at least some directors will have financial conflicts with their duties, and the exemptions adopted by the Office of Government Ethics were necessary to resolve any possible conflict between the directors' statutorily mandated function and the performance of their official duties.

The Federal Reserve Board and Reserve Banks have policies and procedures to identify, manage, and mitigate conflicts of interest that could result from a Reserve Bank director having financial or other interests that conflict with the interests of the Reserve Bank. These steps include defining the roles and responsibilities of directors to avoid conflicts, managing and mitigating conflicts of interest through adherence to federal law and the Federal Reserve Board's conflict-of-interest policies, and establishing internal controls and policies to identify and manage potential conflicts.

The Federal Reserve Board, within the requirements of the Federal Reserve Act, defines Reserve Bank directors' overall roles and responsibilities. In doing so, it manages and mitigates conflicts with respect to directors' involvement in bank supervision and regulation by precluding director involvement in institution-specific supervisory matters and by establishing restrictions on directors' interaction with the Reserve Banks' supervision and regulation function. Additionally, the Federal Reserve Board monitors the performance of the Supervision and Regulation Department of each Reserve Bank. Actual or potential conflicts of interest could arise if directors were consulted about supervisory matters because of their stock ownership or affiliation with the supervised entity or with a competitor or customer of the supervised entity. Our analysis of board minutes, interviews, and survey responses from Reserve Bank directors reveals that interaction between the directors and the supervision and regulation staff was generally limited and that the directors were not involved in the day-to-day operations of supervision and regulation or in specific bank supervisory matters such as bank examination ratings or potential enforcement actions. Reserve Bank officials and directors told us that when supervision and regulation staff report on operations in board meetings, they do not provide details on examination issues or identify institutions by name. Our review of board minutes showed a few instances in which supervision and regulation staff shared summary information concerning the general condition of banking institutions in the district.
According to Federal Reserve Board and Reserve Bank officials, because the Federal Reserve Board has delegated the examination of bank and financial holding companies, member banks, and affiliates to Reserve Bank staff, the staff report through the Reserve Bank presidents to the Federal Reserve Board and not directly to the Reserve Banks' boards of directors. Further, although the supervision and regulation function generally reviews and approves member bank applications to purchase other banks or establish branch offices, among other things, applications that involve institutions affiliated with a Reserve Bank director are approved by the Federal Reserve Board. For example, Goldman Sachs's application to become a bank holding company in September 2008 was reviewed by the Federal Reserve Board because one of the company's directors was also a director on the board of the Reserve Bank.

Questions also have been raised about the role of Reserve Bank boards in approving the Reserve Banks' discount window lending and whether conflicts of interest arise because officials from member banks that borrow from the discount window may serve as Class A directors on Reserve Bank boards. To avoid this potential conflict, no boards take part in loan approval, although some boards ratify, on a quarterly basis, loans that have already been granted under the discount window. Moreover, directors and Reserve Bank officials we spoke with said that Class A directors recuse themselves from these discussions when their institution has borrowed.

As explained more fully earlier, Reserve Bank directors are subject to the federal criminal conflict-of-interest restrictions under 18 U.S.C. § 208, which generally prohibits them from participating personally and substantially in their official capacities in any matter in which, to their knowledge, they have a financial interest, if the particular matter will have a direct and predictable effect on that interest. In addition, Reserve Bank directors are expected to follow relevant policies in the FRAM developed by the Federal Reserve Board. As stated in the FRAM's "Guide to Conduct for Directors of the Federal Reserve Banks and Branches," directors are expected to be "above reproach" in their personal financial dealings and should never use information they obtain as directors for personal gain. The FRAM states that in carrying out their responsibilities, directors should avoid any action that might result in or create the appearance of (1) adversely affecting the confidence of the public in the integrity of the Federal Reserve System, (2) using their position as director for private gain, or (3) giving unwarranted preferential treatment to any organization or person. Moreover, it states that directors should strictly preserve the confidentiality of Reserve Bank and Federal Reserve System information and should avoid making public statements that suggest the nature of any monetary policy action that has not been officially disclosed. Directors also are expected to adhere to high ethical standards of conduct and to comply fully with all applicable laws and regulations governing their actions as directors and their conduct outside of the Federal Reserve System. The FRAM also prohibits directors from engaging in certain types of political activities.
As a general principle, the FRAM states that directors should not engage in any political activity or serve in any public office where such activity or service might be interpreted as associating the Reserve Bank or the Federal Reserve System with any political party or partisan political activity, might embarrass the Reserve Bank or the Federal Reserve System in the conduct of its operations, or might raise any question as to the independence of the individual's judgment or ability to perform his or her duties with the Reserve Bank or the System. The Federal Reserve Board's policy does not prohibit directors from participating in activities as individual voters or as members of nonpartisan public service bodies when such participation would not be potentially embarrassing to the Federal Reserve System.

The Reserve Banks have internal controls, including annual certifications, oaths, and affirmations, to help the banks monitor directors' compliance with the FRAM and with conflict-of-interest policies and procedures. These mechanisms require directors to report new directorships or affiliations and to reaffirm that they are free of conflicts of interest. While directors are not required to disclose their financial holdings, Reserve Banks provide updates to directors whenever there is a change to the list of prohibited investments and affiliations (based on institutions that become bank holding companies or other institutions supervised by the Federal Reserve System). Also, during the director selection process, Reserve Bank officials conduct a background check using publicly available information on the directors and the financial status of the directors' companies. Once directors are on the board, the Reserve Banks rely on them to self-report any actual or potential conflicts of interest.

Additionally, the directors receive training at the beginning of their terms from both the Reserve Bank and the Federal Reserve Board. The Federal Reserve Board training includes meetings where the directors are able to meet the Board of Governors, Federal Reserve Board staff, and other directors from across the system. The training provided by the Reserve Banks includes information on the FRAM's "Guide to Conduct for Directors of Federal Reserve Banks and Branches," roles and responsibilities, ethics, oaths, affidavits, and certifications. Many directors also receive ethics training annually, in addition to the training at the beginning of their terms. Reserve Banks provide training to directors to guide them in determining which investments and affiliations may be prohibited. The Federal Reserve Board also offers midterm training to all directors, which officials said is generally well attended.

According to Federal Reserve Board and Reserve Bank officials with whom we spoke, the most likely potential conflict of interest involves procurement matters, and the Reserve Banks have taken a variety of steps to address such conflicts. Some Reserve Bank boards are involved in approving the banks' vendor contracts. Because some directors are affiliated with businesses in the banks' districts that may offer services the Reserve Bank seeks, they could potentially have a conflict of interest if their firms or their firms' competitors were to compete for the contracts.
To help ensure that procurement practices are untainted by actual or potential conflicts of interest involving directors, the Federal Reserve Board requires all of the Reserve Banks to have procurement policies that provide guidance for directors covering the role of directors in procurements, the nature of the procurement, an education program for directors, written recusal procedures for directors to follow, a written certification process, and record keeping of training materials and attendance, recusals, and procurement certifications. Our review noted that all Reserve Banks require directors to sign certifications stating whether or not they have a conflict of interest with a procurement that is being considered, and all Reserve Banks have delegated certain procurement decisions to management.

We compared the Federal Reserve System's ethics and related policies and practices with those of other organizations, including other central banks, a self-regulatory organization whose members serve on its board of directors, a government-sponsored enterprise, and large bank holding companies. See appendix IV for a list of the 10 largest bank holding companies included in our review. The authorizing laws, policies, and procedures for all four central banks we studied, like those for the Federal Reserve System, included provisions relating to ethical behavior and conduct. All four central banks and the U.S. Reserve Banks emphasized that directors must demonstrate a high level of ethical conduct and adhere to applicable laws and regulations, but their policies for managing conflicts of interest varied. For example, the Reserve Bank of Australia waives all conflict-of-interest requirements for its board and allows directors to participate in policy deliberations as long as they disclose their interests to the bank annually. However, the Reserve Bank of Australia prohibits directors from working for or having a material financial interest in private financial companies in Australia. Conversely, the Bank of Canada Act requires that directors (1) disclose any material interest in writing or in the minutes of board meetings, (2) disclose the conflict as quickly as possible after the conflict is discovered or realized, and (3) not vote on any resolution or action related to the conflict. Directors must also avoid or withdraw from participation in any activity or situation that places them in a real, potential, or apparent conflict of interest. The Bank of Canada prohibits directors from having affiliations with entities that perform clearing and settlement functions in the financial services industry, serving as a dealer in government securities, or being government employees. Table 2 provides additional information on the ethics and conflict-of-interest practices of the central banks we reviewed.

Reserve Banks' ethics policies were generally consistent with those of FINRA and those required of the FHLBanks and of public companies listed on the New York Stock Exchange (NYSE). FINRA prohibits directors who have a substantial financial interest in, or are affiliated with, a regulated entity from participating in any regulatory matter, disciplinary action, investigation, or decision regarding an application from that entity for an exemption.
The Federal Housing Finance Agency—the FHLBanks' regulator—requires that FHLBanks have a conflict-of-interest policy and that directors promptly disclose any actual or apparent conflicts of interest and recuse themselves from issues in which they have a conflict. Public companies listed on the NYSE—including the 10 largest bank holding companies included in our review—must adopt and disclose a code of business conduct and ethics. The code must contain a policy that prohibits conflicts of interest and allows directors to communicate potential conflicts to the company. Table 3 shows the ethics and conflict-of-interest practices of the comparable organizations we reviewed.

Federal Reserve Banks do not require directors to periodically disclose their financial interests. Officials at the Federal Reserve Board stated that directors were doing a civic duty by serving on a Reserve Bank board and that the Federal Reserve Board does not want to make it burdensome for them to serve. The officials also noted that directors' investments may change frequently, so keeping accurate information on all investments would be difficult. Class C directors submit an annual certification stating that they do not have any prohibited stockholdings. Although Federal Reserve Bank directors do not submit an annual disclosure of nonfinancial interests, both Class B and Class C directors are required to submit an annual certification stating that they do not have any prohibited affiliations. The directors are required to notify the corporate secretary if there are any changes in their affiliations or stockholdings, as appropriate.

All four central banks we reviewed required directors to disclose some information about their personal affiliations with other organizations, such as other directorships. The Reserve Bank of Australia requires directors to disclose material personal interests—both financial and nonfinancial—to the Treasurer on a yearly basis. The European Central Bank (ECB) requires all Governing Council members (i.e., Executive Board members and governors of the national central banks) to annually disclose their public and private affiliations, and Executive Board members must also complete a yearly financial disclosure. FINRA governors annually disclose their relationships with other organizations, such as other directorships, but do not typically provide financial information annually, according to FINRA officials. FHLBanks are required to file an annual report on Form 10-K with the Securities and Exchange Commission. This form includes information about the directors' other directorships on the boards of publicly traded companies or investment companies. Most FHLBanks do not require directors to file a comprehensive annual financial disclosure, but most of the banks require directors to sign an annual certification agreeing to adhere to the ethics policies. All public companies—including the bank holding companies we reviewed—are also required by SEC to file a Form 10-K, which includes information about any other directorships held by board members.

Other comparable organizations had a variety of policies on waiving ethics and related requirements. The central banks in our review varied in the extent to which they had policies or procedures for directors to apply for waivers of their ethics policies. The Bank of Canada does not have a waiver process.
An official at the bank stated that waivers would be inconsistent with the bank's conflict-of-interest policy, which requires that directors avoid or withdraw from participation in any activity that places the director in a real, potential, or apparent conflict of interest. The European Central Bank's Code of Conduct instructs Governing Council members to seek counsel from an ethics adviser if a conflict arises; the adviser either decides the issue or forwards it to the Governing Council. The NYSE requires that listed companies, including the large bank holding companies we reviewed, promptly disclose any waivers of codes of conduct for directors or executive officers. Only boards and board committees can grant waivers, which must be disclosed to shareholders within 4 business days, using either a press release, the company's website, or an SEC Form 8-K. FINRA's code of conduct for directors states that the board must approve waivers from the code. However, FINRA officials told us that in practice, its governors have chosen to manage conflicts through recusal rather than by seeking waivers. About half of the FHLBanks reported that they have a process in place for directors to request a waiver of the code of conduct.

There are two types of waivers relevant to Reserve Bank directors. First, as discussed earlier, the Federal Reserve Board can grant waivers to directors in connection with 18 U.S.C. § 208, pursuant to applicable federal regulations. Second, Reserve Banks may request waivers from the Federal Reserve Board's policies related to director eligibility, qualifications, and rotation, such as allowing directors to remain on the Reserve Bank board despite having a prohibited investment or other prohibited affiliation. Federal Reserve Board officials said they have received few waiver requests. According to the officials, the Federal Reserve Board waiver process permits Reserve Banks to make informal inquiries of Federal Reserve Board staff as to whether a given action would be appropriate. The officials noted that most of the time Reserve Banks' questions could be resolved without an official waiver request. Additionally, Reserve Bank officials told us that they frequently receive questions from directors about the policies, which they either discuss and handle internally or refer to the ethics officer or corporate secretary at the Federal Reserve Board to determine the appropriate actions to take. For example, one director checked with the general counsel at the Reserve Bank about a situation in which family members had inherited bank stock that was held in a trust for which the director was named trustee. The general counsel discussed the issue with relevant officials at the Reserve Bank and advised the director to resign his position as trustee so that he would not have a conflict of interest.

Not all Reserve Banks have procedures in place for directors to request a waiver of the eligibility policy from the Federal Reserve Board. The Reserve Banks are not required to have a waiver request process, and only FRBNY has a formal process in place to review waiver requests. An official from one Reserve Bank told us that the bank does not have a formal process for considering waiver requests, nor has it had directors who needed to request a waiver from the Federal Reserve Board.
When FRBNY sought the waiver from the Federal Reserve Board on behalf of the then-FRBNY chairman, FRBNY did not have a formal waiver process and did not consult with its board of directors before making the request. An FRBNY official told us that in hindsight the board should have been involved. On the basis of this experience, FRBNY implemented a formal waiver process. While we recognize that the need to request a waiver from Federal Reserve Board policies may be rare, a crisis situation may create unanticipated conflicts without providing time for comprehensive actions before a decision must be made. Without a formal process in place to consider a request for a waiver from Federal Reserve Board policies, Reserve Banks risk inconsistent treatment of requests and being exposed to questions about their governance practices and the integrity of their decisions and actions.

If waivers to policies are granted, making the process and decisions transparent is vital. Given the public nature of Reserve Bank activities, disclosing waivers provided to directors is one way to improve transparency and accountability and reduce the appearance of conflicts of interest. As noted above, public companies listed on the NYSE must disclose any waiver of the code of conduct granted to a director or executive officer to shareholders within 4 business days of the decision. In contrast, Reserve Banks are not required to disclose information to the public about waivers of the policy on director eligibility and qualifications granted to one of their directors by the Federal Reserve Board. As demonstrated during the recent financial crisis and the waiver request for the then-FRBNY chairman, a lack of transparency around the waiver request process and outcome contributed to greater distrust of Reserve Bank governance.

Congress and the Federal Reserve System have taken steps aimed at improving Reserve Bank governance. The Dodd-Frank Act, enacted on July 21, 2010, made several amendments to the Federal Reserve Act. One of these amendments changed the selection process for Reserve Bank presidents and first vice presidents. Before the amendment, all directors acted to appoint the president of the Reserve Bank, subject to the approval of the Federal Reserve Board. This created the appearance of a conflict because the Class A directors voted to appoint the Reserve Bank president, who would play a role in supervision and regulation and might be a voting member of the FOMC. After the amendment, only Class B directors (who are elected by district member banks to represent the public) and Class C directors (who are appointed by the Federal Reserve Board to represent the public) may appoint the Reserve Bank presidents. Class A directors, who are elected by member banks to represent member banks, may no longer appoint presidents of the Federal Reserve Banks. This same change also affects the appointment of the first vice president.

In part because of the financial crisis that started in mid-2007 and the increased scrutiny of the Federal Reserve System, the Federal Reserve Board conducted a study of the governance of the Federal Reserve Banks, which included a review of the roles and responsibilities of the Reserve Bank directors.
In November 2009, the results of this study were presented to the Reserve Bank presidents, corporate secretaries, and board chairmen, which led some banks to conduct reviews of the roles and responsibilities of their directors. As a result of the Federal Reserve Board review, the board revised two policies governing directors. First, the board amended the eligibility policy to explicitly address situations in which Class B or C directors' stockholdings unexpectedly become impermissible, such as when a company in which a director holds stock converts to a bank holding company. Before this revision, the Federal Reserve Board did not have a formal policy governing the treatment of such situations. The revised policy requires directors to resign from the board or divest their interests within 60 days from the time the Reserve Bank or director learns about an impermissible situation. During this time, the director must recuse himself or herself from all duties related to service as a Reserve Bank director until the affiliation is severed. Second, the Federal Reserve Board revised its policy on director conduct by requiring Reserve Banks to adopt a policy that governs instances when directors are involved with procurement, as discussed previously.

Since this Federal Reserve Board study and the Dodd-Frank Act amendments, all of the Reserve Banks have changed the directors' roles to remove the Class A directors from the process of appointing the bank president. In addition, some banks have placed additional restrictions on Class A directors' involvement in supervision and regulation personnel and other matters. For example, the Federal Reserve Banks of New York, Richmond, and Minneapolis restricted Class A directors' involvement in personnel appointments for supervision and regulation. Moreover, after the recent study, the board of the Federal Reserve Bank of St. Louis reevaluated its procedures so that the Class A directors are not involved with personnel matters related to the senior vice president of its supervision and regulation function or with any institution-specific matters. According to Federal Reserve System officials, it has been a standing practice, predating the enactment of the Dodd-Frank Act, that Reserve Bank directors do not vote on institution-specific supervisory matters. Beyond that practice, the Federal Reserve Banks of New York, Richmond, St. Louis, and Minneapolis recently revised their bylaws to address the role of their boards of directors with regard to supervision and regulation. FRBNY made clear that Class A directors are prohibited from voting on the appointment, termination, and compensation of employees in the Financial Institutions Supervision group. The Federal Reserve Bank of Richmond stated that directors cannot vote on institution-specific supervision and regulation matters and that Class A directors should not vote on the budget for the supervision and regulation function or on matters related to senior personnel in that function. The Federal Reserve Bank of St. Louis stated that actions by the board of directors related to oversight of the supervision and regulation function shall be taken upon a vote of a majority of the Class B and Class C directors present at the meeting. Similarly, the Federal Reserve Bank of Minneapolis stated that directors are not involved in institution-specific supervision and regulation matters and that Class A and Class B directors should not vote on matters of an administrative nature.
Although there are restrictions on directors' involvement in supervision and regulation matters, the Reserve Banks are not required to document the directors' roles in their bylaws. As a result, 8 of the 12 Reserve Banks have not documented in their bylaws the extent of their boards of directors' involvement in supervision and regulation. The Federal Reserve Banks of Atlanta, Boston, Chicago, Cleveland, Dallas, Kansas City, Philadelphia, and San Francisco do not document the directors' roles and responsibilities to further clarify the extent of their involvement in supervision and regulation matters. Although Reserve Bank directors may be cognizant of their roles and responsibilities, the lack of a clear statement in the bylaws on the directors' involvement in supervision and regulation matters could create confusion for the public and lead to questions about Reserve Bank governance. By documenting the roles of directors with regard to such matters, the Federal Reserve System could enhance the public's understanding of the directors' roles and reduce the appearance of conflicts of interest.

Some officials, directors, and academics with whom we spoke also suggested potential changes to the Reserve Bank board structure that could further strengthen governance, but these changes would involve trade-offs. First, some suggested that increasing the number of directors appointed by the Federal Reserve Board who represent the public could help alleviate the appearance of member bank control. This could be accomplished by increasing the number of Class C directors or by adding a fourth class of 3 directors appointed by the Federal Reserve Board. Adding 3 more appointed directors to each Reserve Bank board would give the boards an equal number of directors elected by member banks and directors appointed by the Federal Reserve Board (6 of each on a 12-member board), thereby eliminating the perception of member bank control of the boards. We have previously reported that board size is not one-size-fits-all and should be based on the needs and complexity of the organization. As discussed later in the report, board size for other public and private organizations varies, but a board of 12 members would still be within the range of board sizes at comparable organizations such as central banks, self-regulatory organizations, and large bank holding companies. A larger board could also enhance opportunities for diverse candidates. However, adding 3 board members would create more positions for Reserve Banks to fill, and it may be difficult for some of the Reserve Banks to fill these positions. As of September 16, 2011, two director positions across the 12 Reserve Banks were open. Additionally, some Reserve Bank officials and directors stated that a larger board could reduce the opportunity for each director to participate in meetings and might increase absences or decrease committee participation because directors could feel that their contributions were less important when there were more directors to accomplish the necessary board work. Moreover, an increase in the size of the Reserve Bank boards would require an amendment to the Federal Reserve Act.

Second, some Reserve Bank officials and directors suggested that the Federal Reserve Board could appoint Class B directors to represent the public rather than having them elected by member banks.
This change could reduce the perception of conflicts of interest and member bank control. However, we heard from several academics and Reserve Bank officials that the current system provides a set of checks and balances between the Federal Reserve Board in Washington, D.C., and the 12 Reserve Banks and their members. Allowing the Federal Reserve Board to appoint two-thirds of each Reserve Bank board would shift the balance of power to the Federal Reserve Board. Officials and directors we spoke with emphasized the importance of regional input in the Federal Reserve System, which includes the ability of the regions to select their representatives on the Reserve Bank boards. Additionally, as discussed earlier, while the FRAM prohibits bank officials and employees from serving on nominating committees for Class A and B directors, Reserve Bank officials told us that they play a significant role in identifying, vetting, and recruiting Class B directors before those directors are nominated and elected by member banks. Because Reserve Bank officials are involved in the identification and vetting process for both types of candidates, it is unclear whether changing the selection process for Class B directors would significantly change the outcome. Additionally, allowing the Federal Reserve Board to appoint the Class B directors would require an amendment to the Federal Reserve Act.

Congress, academics, and others have offered a number of ways to change the structure of the Federal Reserve System that would have implications for governance and for ongoing concerns about conflicts of interest. First, some academics and others have suggested that the Reserve Banks should become offices or branches of the Federal Reserve Board rather than independent entities within the system, which would eliminate the boards of directors, or that the boards of directors should be converted into advisory councils. One academic told us that making the Reserve Banks branches would help address concerns about the current governance structure because it would eliminate the need for boards of directors and thereby eliminate conflicts. The central bank of Germany, for example, follows this model. However, others we interviewed noted that this change would concentrate all of the power and influence in the Federal Reserve Board. Moreover, it would increase the size of the federal agency. In addition, others have said that converting the boards to advisory councils in the districts would undermine governance by reducing the responsibility of the boards and would make it harder to attract quality candidates to serve on the councils.

Second, some have questioned the need for 12 Reserve Banks given changes in the financial markets and advances in technology. The views of Federal Reserve System officials varied. A few of the individuals we interviewed thought there could be fewer banks because the current structure is outdated and reflects a U.S. economy that existed 100 years ago. However, others believe that the structure is still appropriate given differences in regional economies and perspectives. Federal Reserve System officials also point to the greater efficiencies achieved through the Reserve Banks' consolidation of certain ongoing operations such as check clearing and information technology.
Third, some in Congress and others recommended eliminating the Federal Reserve System’s role in supervision and regulation, which would have removed concerns about conflicts of interest involving directors affiliated with institutions supervised by the Reserve Banks. Some believed that the central bank should focus exclusively on monetary policy and that supervision and regulation should be conducted by another regulatory entity. Others viewed the two functions as critically intertwined, and ultimately, this approach to reform was not pursued by Congress in the Dodd-Frank Act. Rather, the Federal Reserve System’s supervisory role was expanded to include thrift holding companies and systemically important financial institutions. Others have taken a less sweeping approach to reform by questioning the Federal Reserve Board’s delegation of supervision to the Reserve Banks. However, consolidating supervision at the Federal Reserve Board would require a substantial increase in the federal workforce. Currently, the supervision and regulation staff at the Reserve Banks are not federal employees; they are employees of the Reserve Banks, acting under authority delegated from, and overseen by, the Federal Reserve Board. With the exception of the delegation of authority, these other structural changes involve policy decisions that would require changes to the Federal Reserve Act. Reserve Bank boards are generally similar in size, composition, and term lengths and limits to the boards of comparable organizations. Additionally, they employ accountability measures, such as annual performance reviews of the organization and management, similar to those of other comparable organizations. However, Reserve Banks lack transparency in their governance practices compared with the other organizations we reviewed. For example, while most of the other organizations we reviewed make key governance documents, such as board bylaws, ethics policies, committee mission statements, and committee assignments, available to the public, most Reserve Banks do not post this information on their websites. As previously discussed, the Federal Reserve Act of 1913 establishes the size of each Reserve Bank board at nine directors. This size is within the range of board sizes that we identified at comparable organizations: the boards of the organizations we studied had from 9 to 23 members (see table 4). Neither the NYSE nor SEC has size requirements for the boards of listed and public companies, and most of the bank holding companies we reviewed included provisions in their bylaws that allowed for flexibility in board size. For example, one company’s bylaws state that the board has the authority to determine the number of directors and that the number should be in the range of 13 to 19, with the flexibility to increase the size as needs and circumstances change. The number of allowable directors under the bylaws of the bank holding companies we studied ranged from 3 to 36, while the actual number of directors on their boards ranged from 11 to 15.
According to NYSE, independence for directors means having no material relationship with the listed company, either directly or as a partner, shareholder, or officer of an organization that has a relationship with the company. Central bank literature typically refers to independence in terms of the central bank being independent of the government; therefore, independent directors are those who do not work for the central bank or another government entity. Independence is an important aspect of board governance because it provides accountability and an outside perspective. Further, the Organisation for Economic Co-operation and Development (OECD) notes that boards must be able to exercise objective judgment in order to fulfill their duties and that, to accomplish this goal, a sufficient number of board members should be independent of management. Reserve Bank directors have varying levels of independence. As discussed earlier in this report, Class C directors are appointed by the Federal Reserve Board. These directors are independent—that is, they are not employees or managers of the Reserve Banks at which they serve, nor are they partners, shareholders, or officers of an organization that has a relationship with the Reserve Bank, such as a member bank. Class B directors are elected by member banks and are statutorily required to represent the public. They meet almost all of the independence requirements listed above, except that they may be stockholders in a bank. Class A directors, who represent the member banks that elect them, are the least independent of the Reserve Bank directors. Some have questioned whether Reserve Bank boards have enough independence from the member banks that the Reserve Banks supervise. FINRA’s bylaws balance public and industry representation by requiring that members representing the public outnumber those representing industry on the board. No FHLBank managers serve on the FHLBank boards, and by law at least two-fifths of the directors must be independent and not affiliated with member banks. Additionally, at least two of the independent directors must be “public interest” directors with at least 4 years of experience representing community or consumer interests in banking services, credit needs, housing, or consumer financial protections. Reserve Bank directors’ term lengths and limits were also within the range of term lengths and limits we observed at other comparable entities. For example, both Reserve Bank and FINRA directors can serve up to two consecutive 3-year terms. At the other four central banks we reviewed, independent directors—those who are not government or central bank officials or, for the ECB, board members from national central banks—served 3- to 5-year terms. Other board members (including governors and other government officials) served 5- to 8-year terms. FHLBank directors may serve up to three consecutive 4-year terms. The NYSE and SEC do not have requirements for listed or public companies regarding term length or limits. Two of the large bank holding companies we reviewed opted to have directors serve 1-year terms so that each director had to be reelected by the stockholders each year, but none of the companies enacted term limits for their directors. One company noted in its annual proxy statement that although term limits might be a source of fresh ideas and viewpoints, they had the disadvantage of potentially reducing the knowledge and insight that experienced directors gained over time.
Another bank holding company’s proxy statement said that the company favored monitoring individual director performance over term limits. Selection procedures for directors varied across the entities we examined. As we have discussed, Federal Reserve Bank boards consist of both appointed and elected directors. By contrast, the boards of all four central banks we reviewed consisted of directors who were appointed by various entities. For example, members of the ECB Executive Board are nominated by the governments of euro-area member states. Both the ECB’s Governing Council and the European Parliament are consulted on prospective candidates and issue opinions on them. The European Parliament holds a hearing for the nominated candidate, and the European Council (in this case, only the member states that have adopted the euro) votes to appoint a new Executive Board member. The 17 euro-area national central bank governors, who sit on the Governing Council alongside the 6 Executive Board members, are selected according to national procedures. The directors of the Reserve Bank of Australia and the independent directors of the Bank of Canada are appointed by the Treasurer and the Minister of Finance, respectively. The Queen of England appoints governors and nonexecutive directors to the Court of Directors at the Bank of England. The other comparable organizations we studied had a combination of elected and appointed members and used nominating committees as part of the director selection process. FINRA’s bylaws require that all members be nominated by a committee and certified by the corporate secretary. Of the 10 industry directors, 7 are elected by their constituents. The 3 remaining industry directors and all of the public directors are appointed by FINRA’s Board of Governors after nomination by the committee. FHLBank member directors are nominated and voted on by member institutions within their state, whereas independent directors are nominated by the FHLBank’s board of directors, after consultation with its Advisory Council, and elected at large by the FHLBanks’ members. Companies listed on the NYSE must have nominating/corporate governance committees composed entirely of independent directors to identify qualified individuals and select them or recommend them to the board for selection. Stockholders elect the directors of the 10 largest bank holding companies we reviewed. Like the Reserve Banks, other comparable entities also considered skills and experience as key factors in selecting board members. Reserve Banks recruit directors in accordance with the requirements of the Federal Reserve Act, which stipulate that directors shall be chosen without discrimination as to race, creed, color, sex, or national origin and that Class B and Class C directors who represent the public shall be elected “with due but not exclusive consideration to the interests of agriculture, commerce, industry, services, labor, and consumers.” Some Reserve Bank officials told us that while they strive to find diverse candidates from a variety of industries, they primarily want to find people who have the skills and knowledge that will fill gaps in the board’s existing knowledge and skill set. Similarly, all four central banks we reviewed had skill or experience qualifications for board members. For example, the Bank of Canada focuses on the collective skills of the board of directors in areas such as accounting, human resources, corporate governance, and financial markets.
FHLBanks and FINRA also look for directors with particular skills and experience to complement their boards. FHLBank nonmember directors are required to have experience in, or knowledge of, one or more of the following areas: auditing and accounting, derivatives, financial management, organizational management, project development, risk management practices, and the law. FINRA officials stated that they had no written qualifications but added that for each opening they analyzed the type of expertise the board lacked—for example, technological, legal, or academic—to identify skills that would complement the existing expertise. SEC requires public companies to disclose information about the qualifications of directors and nominees for director and to provide reasons why each should serve but does not require specific types of experience or expertise. As with the Federal Reserve Banks, none of the comparable entities had specific requirements for gender, racial, or ethnic diversity on their boards. One central bank required that directors represent different geographies and industries within the country. As discussed earlier, public companies must report in their proxy and information statements on how the nominating committee considered diversity when reviewing candidates for director. In our analysis of the 10 largest bank holding companies in 2010, proxy statements indicated that companies primarily valued candidates who would bring complementary skills and experience to the board but also considered diversity in selecting them. Reserve Banks and comparable institutions, both public and private, have a variety of accountability measures in place, including annual performance reviews of the organization and management, internal reviews and self-assessments, and external audits. All 12 Reserve Bank boards conduct bankwide performance reviews on a yearly basis. Similarly, a committee of the board at the Bank of England—the Committee of the Court (NedCo)—is responsible for reviewing the bank’s performance in relation to its objectives and strategy, monitoring the extent to which its financial management objectives are met, reviewing the procedures of the Monetary Policy Committee and the bank’s internal controls, and determining the pay and terms of employment of the governors, executive directors, and external Monetary Policy Committee members. To a large extent, NedCo’s work is done through the Court of Directors; it is chaired by the Court’s chairman and consists of all nonexecutive members. Internal reviews and self-assessments are also part of board accountability practices across the institutions that we reviewed. Within the Federal Reserve System, the Federal Reserve Board relies on RBOPS to oversee the Reserve Banks’ management and operations. RBOPS reviews each Reserve Bank at least every 3 years. In addition, 6 of the 12 Federal Reserve Bank boards of directors conduct an annual self-evaluation. Some of the other organizations that we reviewed had similar evaluations conducted by their boards. For example, the Bank of Canada’s board conducts an annual self-assessment through one-on-one interviews between each director and the lead director, supported by a survey that solicits directors’ views on various elements of the board’s operations, governance, and effectiveness. The survey is completed electronically, and aggregated results are distributed to directors for discussion in open session.
The board also has developed and maintains a skills map of the current directors’ competencies and takes note of any gaps or deficiencies. Further, companies listed on the NYSE must adopt corporate governance guidelines that include provisions for the board to conduct a self-evaluation at least annually to determine whether it and its committees are functioning effectively. Reserve Bank boards and publicly listed companies also hold meetings of nonmanagement directors, which promote accountability by encouraging these directors to serve as a more effective check on management. All Federal Reserve Bank boards have executive committees that vary across banks in terms of the composition of Class A, B, and C directors (see app. III for more information on the Reserve Bank committees). The NYSE requires that nonmanagement directors of each listed company meet at regularly scheduled executive sessions without management. Some of the organizations that we reviewed, including the Federal Reserve Banks, had audit committees in place. Each Reserve Bank has an audit committee that oversees the bank’s internal auditor and reviews and approves the annual audit plan. The audit committee is also responsible for coordinating with external auditors and helping ensure that audit recommendations and concerns are properly addressed. Similarly, the Bank of England has two committees that play a role in accountability. First, as previously discussed, the Committee of the Court, NedCo, is responsible for conducting a performance assessment of the central bank. Second, the Risk and Audit Committee provides independent assurance to the Court of Directors that the bank’s internal controls are appropriate. The committee meets regularly and reviews the work of internal and external auditors, the annual financial statements, and the appropriateness of the accounting policies and procedures adopted. It also makes recommendations on the appointment of the external auditors, including their independence and fees, and reviews the bank’s risk matrix and specific business controls. The Reserve Bank of Australia and the Bank of Canada also have audit committees that play a role similar to that of the Reserve Banks’ committees. FINRA’s bylaws require the board to have an audit committee of four or five governors, none of whom may be officers or employees of the corporation and at least two of whom must be public governors. The audit committee’s functions are similar to those of the committees at the other organizations previously discussed. Finally, NYSE-listed companies are required to have audit committees with at least three independent members. NYSE guidelines stipulate that audit committees must assist with board oversight of the company’s financial statements, compliance with legal and regulatory requirements, the independent auditor’s qualifications and independence, and the performance of the company’s internal audit function and independent auditors. The audit committees are also responsible for the disclosures on committee activity that SEC requires. Governance practices should be transparent to protect organizational reputation and help ensure accountability. Reserve Bank governance practices lack transparency compared with those of the comparable institutions that we reviewed. We have previously reported that good governance, transparency, and accountability are critical in both the private and public sectors.
In the private sector, they promote efficiency and effectiveness in the capital and credit markets and overall economic growth, both domestically and internationally. In the public sector, they are essential to the effective and credible functioning of a healthy democracy and to fulfilling the government’s responsibility to citizens and taxpayers. Additionally, the World Bank, the International Monetary Fund, OECD, and other researchers agree that transparency is an important principle of good governance. While the Federal Reserve System has begun to increase the disclosure of information, more can be done to enhance the transparency of the Reserve Banks’ governance practices. Most Reserve Banks do not routinely disclose governance practices to the public, while most of the comparable institutions we reviewed do. For example, all four central banks we studied had public websites that displayed information about board governance, including information about the committee structure and conflict-of-interest policies. FINRA bylaws, including committee mission statements and conflict-of-interest rules, are also available on the FINRA website. The Federal Housing Finance Agency does not have any reporting requirements for FHLBanks, and while FHLBanks vary in what they publish on their websites, most provide some information. For example, three-quarters of the FHLBanks post information about their code of ethics, bylaws, or both, and half provide information about the election process, including time frames and independent director applications. One-third of the FHLBanks post biographical information about the directors beyond the director’s company, position, and location. Six of the 12 FHLBanks post information about the board committees (either a description of each committee and its purpose or the board members serving on each committee), and six FHLBanks publish the audit committee charter on their websites. Publicly traded companies were subject to the most stringent disclosure guidelines of the institutions we examined. The NYSE requires that listed companies publicly disclose corporate governance guidelines that address director qualification standards, responsibilities, compensation, and access to management and independent advisers, as well as director orientation and continuing education, management succession, and annual performance evaluations of the board. Corporate websites must be accessible from the United States, must clearly indicate in English where governance documents are located, and must make those documents available in printable versions in English. By comparison, few of the Reserve Banks post information about board governance, such as committee structure and assignments or conflict-of-interest and ethics policies, on their websites. While the Federal Reserve Board notes vacant positions in its list of Reserve Bank board directors, the Reserve Banks do not publish information about vacant director positions on their websites. Additionally, all Reserve Banks have publicly accessible websites, but most banks post only the names, titles, and employers of current directors rather than richer biographical information. Four of the Reserve Banks provide descriptions of the board and the directors’ roles, and two banks post more comprehensive information. For example, FRBNY includes the board’s bylaws, biographies for current board members, the members and charters of each of the board’s committees, and the bank president’s daily schedule.
The Federal Reserve Bank of Kansas City posts information about the directors’ selection and roles, biographies of current directors, and a list of alumni directors from 1992 to the present. A few individuals we spoke with noted that, in particular, Reserve Banks could be more transparent about director elections. One researcher stated that because of the lack of transparency around the director election process, there is little understanding of how and why directors were chosen to serve on Reserve Bank boards. This can also increase concern about potential conflicts of interest among the directors, because how and why certain individuals were selected for the board is not clear to the public. Further, in our survey of Reserve Bank directors, one director noted that transparency around the election process should be improved, observing that the topic was not discussed in board meetings or in executive sessions of board meetings. Federal Reserve Board officials said that two Reserve Banks were publicly announcing board vacancies but that, because Class A and B directors are elected by the member banks, Class C directorships were the only vacancies for which the general public could apply. Further, officials said that while they could enhance transparency by advertising a vacant Class C position, the nature of the job and the need for a specific skill set generally meant that it was better for the banks themselves to recruit candidates instead of publicly seeking applications. Enhanced transparency of the director selection process, including posting director vacancies and selection procedures, could not only make the election process more transparent but also help increase the diversity of the candidate pool. Some of the institutions we reviewed have taken steps to increase the transparency of their director selection processes. Two of the central banks we reviewed publicized and solicited applications for governor or director positions. In July 2008, the Bank of England announced that it would advertise vacant positions. Additionally, in Canada a government website permits individuals to submit their names for consideration as directors of government entities, and the ministers responsible for entities requiring directors maintain pools of eligible candidates; the Minister of Finance develops this pool of candidates for the Bank of Canada. As previously noted, about half of the FHLBanks publish information on their websites about the director election process and provide applications for potential candidates to submit for consideration by the nomination committee. Further, as previously mentioned, some Members of Congress and others have raised questions about the governance of the Reserve Banks, including the selection and roles of directors. Improving the transparency of the Reserve Bank director selection process is one way to help address concerns about Reserve Bank governance. The Federal Reserve System has taken some important steps to increase transparency. For example, the Federal Reserve Board has recently taken steps to increase the transparency of the monetary policy-making process. In March 2011, the Federal Reserve Board announced that the Chairman would hold press briefings four times per year to present the Federal Open Market Committee’s current economic projections and to provide additional context for its policy decisions. The first press conference was held in April 2011.
Additionally, some Reserve Banks have begun placing additional information about their governance arrangements on their public websites. The Federal Reserve Board described these postings as a recent trend and said that FRBNY has been a leader in this area. Further, the Reserve Bank boards conduct community outreach that focuses primarily on financial literacy and on informing the public about the Reserve Banks’ role in monetary policy. One of the three main roles for Reserve Bank directors is to be a liaison between the bank and the community. Several directors and bank officials told us they believed that public outreach was necessary to help reduce the public’s misperceptions about the roles and responsibilities of the Reserve Banks. In our survey of Reserve Bank directors, some directors noted that outreach should be continued to create a more transparent environment and strengthen governance. For example, one director said that one way to strengthen Reserve Bank governance was to continue to foster an environment of transparency, with open and frequent communication. Further, the director noted that not everyone understood the difference between monetary and fiscal policy and that the Reserve Banks could help to educate the general public and the media. One director also noted that outreach activities generated goodwill and awareness throughout the community and the district and led to better public representation on Reserve Bank boards. Additionally, another director noted that the Reserve Banks needed to continue their outreach to educate the public about monetary policy and the need for an independent Federal Reserve System but cited the Reserve Banks’ budget constraints as a limitation on their outreach efforts. Officials at the Federal Reserve Board noted that the Federal Reserve System functions more effectively and efficiently when each Reserve Bank implements good governance procedures, because good corporate governance is a key element in improving economic efficiency. Additionally, at a time when the relationships between directors and financial firms are being questioned, transparent governance practices can help in managing reputational risk. Moreover, given increased public interest in governance, the Federal Reserve System would be well served by making clear the roles and responsibilities of Reserve Bank directors, and without more public disclosure of governance arrangements, such as board bylaws and conflict-of-interest policies, there will be continued concerns about Reserve Bank governance. The Federal Reserve System was designed as a decentralized entity with a governmental institution and 12 separately incorporated Reserve Banks. Under this public-private partnership, the Reserve Bank directors serve a role in bringing information from their communities to inform the monetary policy deliberations of the central bank and in helping oversee the operations of the Reserve Banks. The directors, like the Federal Reserve Board, are also part of the governance framework of the Reserve Banks. However, the operations and governance of the Federal Reserve System came to the forefront during the 2007-2009 financial crisis, when the System played a prominent role in stabilizing financial markets through the use of its emergency lending authorities. These unprecedented actions led Congress and the public to raise questions about the Reserve Banks’ governance practices and potential conflicts of interest involving the directors.
Specifically, some questioned how well the Reserve Bank boards represent the public, which in part could be measured by the economic and demographic diversity of the directors. Our analysis shows that from 2006 through 2010, labor and consumer groups tended to be less represented than other industry groups on both head office and branch boards. While the Federal Reserve Board encouraged the Reserve Banks to recruit directors from consumer and labor organizations, restrictions on directors’ political activities appeared to pose a challenge in recruiting representatives from these organizations, who tend to be politically active. Our analysis also shows that while there is some variation among the Reserve Banks in the representation of women and minorities on head office and branch boards, overall such representation has remained limited. Although it is difficult to know whether the boards’ decisions would have been different had there been greater diversity on the boards, the public that the boards represent is becoming increasingly diverse. Officials from most Reserve Banks generally focus their search for candidates on senior corporate executives, who are perceived to have a relatively broad perspective on the economy. However, seeking directors from among senior or chief-level executives may contribute to the limited diversity on the boards because, as our analysis of EEOC data shows, diversity at the senior executive level is more limited than at the senior manager level across industries. To the extent that director searches are limited to chief-level executives, the Reserve Banks not only limit the diversity of the pool of potential candidates but also risk limiting the perspectives shared about the economy in the formation of monetary policy. The statutory requirement for three classes of directors was intended to provide representation of both stockholding banks and the public. However, the existence of Class A and, to a lesser extent, Class B directors on the boards creates an appearance of a conflict of interest, particularly in matters involving supervision and regulation. Moreover, directors from all three classes could have past and current affiliations with financial institutions. These affiliations have given rise to relationships that pose reputational risk to the Reserve Banks. While director conflicts can be identified and managed, the interconnectedness between directors and financial institutions cannot be eliminated; therefore, ongoing challenges remain. For example, the credibility of the Federal Reserve System will be affected by the perceived effectiveness of its ability to manage conflict issues. While the Federal Reserve System has recognized the importance of public perception and made changes to Reserve Bank governance practices, more could be done to increase the flow of information to the public on the directors’ roles and to strengthen controls. Specifically, greater transparency could assist the public in understanding the roles and functioning of the Reserve Bank boards; for example, clarifying the limited nature of Reserve Bank directors’ involvement in supervision and regulation operations with a statement in the Reserve Bank board bylaws could help to improve the public’s confidence in Reserve Bank governance. While waivers are one way the Federal Reserve System mitigates conflicts involving Federal Reserve Board eligibility requirements, not all Reserve Banks have procedures for requesting a waiver from the Federal Reserve Board.
Moreover, if waivers are granted, there is no requirement to make that information public. Failing to make the process and decisions more transparent can decrease confidence in the Federal Reserve System and has led to questions about the integrity of Reserve Banks’ operations and to the appearance of conflicts of interest. Finally, while the Federal Reserve System has taken steps to increase the transparency of its governance practices, as well as its transparency overall, Reserve Bank governance practices were generally not as transparent as those of the other central banks and financial institutions that we studied. At a time when the Federal Reserve System’s emergency actions have led to questions about the relationships between Reserve Banks and their directors and between directors and financial firms, more transparent governance practices are essential to the effective and credible functioning of the Reserve Banks and the Federal Reserve System as a whole. While the Federal Reserve System has taken some steps to increase the transparency of its governance practices, such as conducting quarterly press conferences after FOMC meetings, additional actions such as making key governance documents easily accessible to the public could enhance transparency and protect organizational reputation. Moreover, without more public disclosure of governance arrangements, such as board of director bylaws and director eligibility and ethics policies, there may be continued concerns about Reserve Bank governance and the integrity of the Federal Reserve System. While the Federal Reserve System recently has made changes to Reserve Bank governance, it can take additional steps to strengthen controls designed to manage conflicts of interest involving Reserve Bank directors and to increase public disclosure of directors’ roles and responsibilities. As such, we recommend that the Chairman of the Federal Reserve Board take the following four actions:

To help enhance economic and demographic diversity and broaden perspectives among Reserve Bank directors who are elected to represent the public, encourage all Reserve Banks to consider ways to broaden their pools of potential candidates for directors, such as including officers who are below the senior executive level at their organizations.

To further promote transparency, direct all Reserve Banks to clearly document the roles and responsibilities of the directors, including restrictions on their involvement in supervision and regulation activities, in their bylaws.

As part of the Federal Reserve System’s continued focus on strengthening governance practices, develop, document, and require all Reserve Banks to adopt a process for requesting waivers from the Federal Reserve Board director eligibility policy and ethics policy for directors. Further, consider requiring Reserve Banks to publicly disclose waivers that are granted to the extent that disclosure would not violate a director’s personal privacy.

To enhance the transparency of Reserve Bank board governance, direct the Reserve Banks to make key governance documents, such as board of director bylaws, committee charters and membership, and the Federal Reserve Board director eligibility policy and ethics policy, available on their websites or otherwise easily accessible to the public.

We provided copies of this draft report to the Federal Reserve Board and the 12 Federal Reserve Banks for their review and comment.
The Federal Reserve Board and the Reserve Banks provided written comments that we have reprinted in appendixes V and VI, respectively. The Federal Reserve Board and Reserve Banks also provided technical comments that we have incorporated as appropriate. In its written comments, the Federal Reserve Board agreed that our recommendations have merit and stated that it would work to implement each of them. In particular, regarding our first recommendation on broadening the pools of candidates for Reserve Bank directors, the Federal Reserve Board stated that, as we noted in the report, several of the Reserve Banks are already considering qualified candidates who are not chief executives and that it will continue to explore ways to broaden the pool of candidates to increase diversity on Reserve Bank boards. We believe that diverse perspectives can enhance the formation of monetary policies. With respect to our three recommendations to improve transparency, the Federal Reserve Board stated that it will work with the Reserve Banks to consider ways to more clearly include the directors’ roles and responsibilities in the bylaws and that the Federal Reserve System will continue to ensure that Reserve Bank directors are fully aware of their roles and the policies that govern their positions on the Reserve Bank boards. Further, as we noted in the report, the Federal Reserve Board stated that in 2009 it adopted a process for Reserve Banks to request waivers from the eligibility policy and that it will consider adopting a process for waivers to the Guide to Conduct as well. In addition, it will consider making public any waivers granted, with due regard for protecting personal privacy. The Federal Reserve Board also stated that it will post various Reserve Bank director-related publications on its website and will work with the Reserve Banks to make other relevant governance documents and information available to the public. We believe that greater transparency could assist the public in understanding the roles and functioning of the Reserve Bank boards and help increase public confidence in the Federal Reserve System. In their written comments, the Federal Reserve Banks stated that diversity and transparency are attributes valued and supported uniformly by all Reserve Banks. They stated that they welcomed our recommendation for Reserve Banks to consider ways to broaden the pool of potential candidates and reiterated that some Reserve Banks have already been considering qualified candidates who are not chief executives. They also agreed that transparency could be enhanced by our other recommendations. We are sending copies of this report to the majority and minority leaders of the Senate and the House of Representatives, appropriate congressional committees, the Board of Governors of the Federal Reserve System, the 12 Federal Reserve Banks, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Orice Williams Brown at [email protected] or (202) 512-8678. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.
Between late 2007 and early 2009, the Federal Reserve Board created more than a dozen new emergency programs to stabilize financial markets and provided financial assistance to avert the failures of a few individual institutions. The Federal Reserve Board authorized most of this emergency assistance under the emergency authority contained in section 13(3) of the Federal Reserve Act. Three of the programs covered by this review—the Term Auction Facility (TAF), the dollar swap lines with foreign central banks, and the Agency Mortgage-Backed Securities (MBS) Purchase Program—were authorized under other provisions of the Federal Reserve Act that do not require a determination that emergency conditions exist, although the swap lines and the Agency MBS Purchase Program did require authorization by the Federal Open Market Committee (FOMC). In many cases, the decisions by the Federal Reserve Board, the FOMC, and the Reserve Banks about the authorization, initial terms, and implementation of the Federal Reserve System’s emergency assistance were made over the course of only days or weeks as the Federal Reserve Board sought to act quickly to address rapidly deteriorating market conditions. As illustrated in table 5, the Federal Reserve Bank of New York (FRBNY) implemented most of these emergency activities under authorization from the Federal Reserve Board. In 2009, FRBNY, at the direction of the FOMC, began large-scale purchases of MBS issued by the housing government-sponsored enterprises, Fannie Mae and Freddie Mac, or guaranteed by Ginnie Mae. Purchases of these agency MBS were intended to provide support to the mortgage and housing markets and to foster improved conditions in financial markets more generally. Most of the Federal Reserve Board’s broad-based emergency programs closed on February 1, 2010. Figure 11 provides a timeline for the establishment, modification, and termination of the Federal Reserve System emergency programs subject to this review. In the months before the authorization of TAF and the new swap line arrangements, which were the first of the emergency programs subject to this review, the Federal Reserve Board took steps to ease emerging strains in credit markets through its traditional monetary policy tools. In late summer 2007, sudden strains emerged in term interbank lending markets, primarily due to intensifying investor concerns about commercial banks’ actual exposures to various mortgage-related securities. The cost of term funding (loans provided at terms of 1 month or longer) spiked suddenly in August 2007, and commercial banks increasingly had to borrow overnight to meet their funding needs. The Federal Reserve Board feared that the disorderly functioning of interbank lending markets would impair the ability of commercial banks to provide credit to households and businesses. To ease stresses in these markets, on August 17, 2007, the Federal Reserve Board made two temporary changes to the terms at which the Reserve Banks extended loans through the discount window. First, it approved a reduction of the discount rate—the interest rate at which the Reserve Banks extended collateralized loans at the discount window—by 50 basis points. Second, to address specific strains in term-funding markets, the Federal Reserve Board approved extending the discount window lending term from overnight to up to 30 days, with the possibility of renewal. According to a Federal Reserve Board study, this change initially resulted in little additional borrowing from the discount window.
In addition to the discount window changes, starting in September 2007, the FOMC announced a series of reductions in the target federal funds rate—the FOMC-established target interest rate that banks charge each other for loans. In October 2007, tension in term funding subsided temporarily. However, issues reappeared in late November and early December, possibly driven in part by a seasonal contraction in the supply of year-end funding. On December 12, 2007, the Federal Reserve Board announced the creation of TAF to address continuing disruptions in U.S. term interbank lending markets. The Federal Reserve Board authorized the Reserve Banks to extend credit through TAF by revising the regulations governing Reserve Bank discount window lending. TAF was intended to help provide term funding to depository institutions eligible to borrow from the discount window. In contrast to the traditional discount window program, which loaned funds to individual institutions at the discount rate, TAF was designed to auction loans to many eligible institutions at once at a market-determined interest rate. Federal Reserve Board officials noted that one important advantage of this auction approach was that it could address concerns among eligible borrowers about the perceived stigma of discount window borrowing: an institution might be reluctant to borrow from the discount window out of concern that its creditors and other counterparties might become aware of its discount window use and perceive it as a sign of distress. The auction format allowed banks to approach the Reserve Banks collectively rather than individually and to obtain funds at an interest rate set by auction rather than at a premium set by the Federal Reserve Board. Additionally, whereas discount window loan funds could be obtained immediately by an institution facing severe funding pressures, TAF borrowers did not receive loan funds until 3 days after the auction. For these reasons, TAF-eligible borrowers may have attached less of a stigma to the auctions than to traditional discount window borrowing. The first TAF auction was held on December 17, 2007, with subsequent auctions occurring approximately every 2 weeks until the final TAF auction on March 8, 2010. Concurrent with the announcement of TAF, the FOMC announced the establishment of dollar swap arrangements with two foreign central banks to address similar disruptions in dollar funding markets abroad. In a typical swap line transaction, FRBNY exchanged dollars for the foreign central bank’s currency at the prevailing exchange rate, and the foreign central bank agreed to buy back its currency (to “unwind” the exchange) at this same exchange rate at an agreed upon future date. The market for interbank funding in U.S. dollars is global, and many foreign banks hold U.S.-dollar-denominated assets and fund these assets by borrowing in U.S. dollars. In contrast to U.S. commercial banks, foreign banks did not hold significant U.S.-dollar deposits, and as a result, dollar funding disruptions were particularly acute for many foreign banks during the recent crisis. In December 2007, the European Central Bank and the Swiss National Bank requested dollar swap arrangements with the Federal Reserve System to increase their ability to provide U.S. dollar loans to banks in their jurisdictions.
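To make the swap line mechanics described above concrete, the following is a minimal sketch; the currency, exchange rate, notional amount, term, and interest rate are illustrative assumptions, not terms of any actual arrangement. Because the unwind occurs at the same exchange rate as the initial exchange, FRBNY bears no exchange-rate risk on the transaction.

    # Illustrative sketch of a dollar swap line draw and its unwind.
    # All figures are hypothetical; actual terms varied by arrangement.
    spot_rate = 1.47      # assumed U.S. dollars per euro at initiation
    usd_notional = 10e9   # assumed $10 billion drawn by the foreign central bank
    term_days = 28        # assumed term of the draw
    usd_rate = 0.04       # assumed annualized rate on the dollar leg

    # Leg 1: FRBNY provides dollars; the foreign central bank provides an
    # equivalent amount of its own currency at the prevailing spot rate.
    eur_posted = usd_notional / spot_rate

    # Leg 2 (unwind): the foreign central bank buys back its currency at the
    # SAME exchange rate, returning the dollars plus interest.
    usd_returned = usd_notional * (1 + usd_rate * term_days / 360)

    print(f"Euro posted with FRBNY: {eur_posted:,.0f}")
    print(f"Dollars returned at unwind: {usd_returned:,.0f}")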
Federal Reserve Board staff memorandums recommending that the FOMC approve these swap arrangements noted that continuing tension in dollar funding markets abroad could further exacerbate tensions in U.S. funding markets. On December 6, 2007, the FOMC approved the requests from the European Central Bank and the Swiss National Bank and authorized FRBNY to establish temporary swap lines under section 14 of the Federal Reserve Act. During 2008, the FOMC approved temporary swap lines with 12 other foreign central banks. FRBNY’s swap lines with the 14 central banks closed on February 1, 2010. In May 2010, to address the re-emergence of strains in dollar funding markets, FRBNY reopened swap lines with the Bank of Canada, the Bank of England, the European Central Bank, the Bank of Japan, and the Swiss National Bank through January 2011. On December 21, 2010, the FOMC announced an extension of these lines through August 1, 2011. On June 29, 2011, the Federal Reserve Board announced an extension of these swap lines through August 1, 2012. In early March 2008, the Federal Reserve Board observed growing tension in the repurchase agreement markets—large, short-term collateralized funding markets—that many financial institutions rely on to finance a wide range of securities. Under a repurchase agreement, a borrowing institution generally acquires funds by selling securities to a lending institution and agreeing to repurchase the securities after a specified time at a given price. The securities, in effect, are collateral provided by the borrower to the lender. In the event of a borrower’s default on the repurchase transaction, the lender would be able to take (and sell) the collateral provided by the borrower. Lenders typically will not provide a loan for the full market value of the posted securities, and the difference between the value of the securities and the loan is called a margin or haircut. This deduction is intended to protect the lenders against a decline in the price of the securities provided as collateral. In early March, the Federal Reserve Board found that repurchase agreement lenders were requiring higher haircuts for loans against a range of securities and were becoming reluctant to lend against mortgage-related securities. As a result, many financial institutions increasingly had to rely on higher-quality collateral, such as U.S. Treasury securities, to obtain cash in these markets, and a shortage of such high-quality collateral emerged. In March 2008, the Federal Reserve Board cited “unusual and exigent circumstances” in invoking section 13(3) of the Federal Reserve Act to authorize FRBNY to implement four emergency actions to address deteriorating conditions in these markets: (1) the Term Securities Lending Facility (TSLF), (2) a bridge loan to Bear Stearns, (3) a commitment to lend up to $30 billion against Bear Stearns assets that resulted in the creation of Maiden Lane LLC, and (4) the Primary Dealer Credit Facility (PDCF). On March 11, 2008, the Federal Reserve Board announced the creation of the TSLF to auction 28-day loans of U.S. Treasury securities to primary dealers to increase the amount of high-quality collateral available for these dealers to borrow against in the repurchase agreement markets.
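As an aside, the haircut arithmetic described above is simple to illustrate; the percentages below are hypothetical and chosen only for illustration, not rates observed in March 2008.

    # Illustrative repurchase agreement haircut arithmetic.
    # Haircut percentages are hypothetical.
    collateral_value = 100e6  # market value of securities posted by the borrower

    for haircut in (0.02, 0.10):  # e.g., a calm-market vs. a stressed haircut
        loan = collateral_value * (1 - haircut)
        print(f"{haircut:.0%} haircut -> ${loan / 1e6:.0f} million loan "
              f"against $100 million of securities")

    # A rise in haircuts from 2 percent to 10 percent means the same
    # collateral raises $8 million less cash, which is why dealers needed
    # more high-quality collateral as haircuts rose.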
Through competitive auctions that allowed dealers to bid a fee to exchange harder-to-finance collateral for easier-to-finance Treasury securities, TSLF was intended to promote confidence among lenders and to reduce the need for dealers to sell illiquid assets into the markets, which could have further depressed the prices of these assets and contributed to a downward price spiral. TSLF auctioned loans of Treasury securities against two schedules of collateral. Schedule 1 collateral included Treasury securities, agency debt, and the agency MBS collateral that FRBNY accepted in repurchase agreements for traditional open market operations with primary dealers. Schedule 2 included schedule 1 collateral as well as a broader range of assets, including highly rated mortgage-backed securities. The Federal Reserve Board determined that providing funding support for private mortgage-backed securities through the schedule 2 auctions fell outside the scope of FRBNY’s authority to conduct its securities lending program under section 14 of the Federal Reserve Act. Accordingly, for the first time during this crisis, the Federal Reserve Board invoked section 13(3) of the Federal Reserve Act to authorize the extension of credit, in the form of Treasury securities, to nondepository institutions—in this case, the primary dealers. As discussed later in this appendix, the Federal Reserve Board expanded the range of collateral eligible for TSLF as the crisis intensified. TSLF closed on February 1, 2010. Shortly following the announcement of TSLF, the Federal Reserve Board invoked its emergency authority for a second time to authorize an emergency loan to avert a disorderly failure of Bear Stearns. TSLF was announced on March 11, 2008, and the first TSLF auction was held on March 27, 2008. Federal Reserve Board officials noted that although TSLF was announced to address market tensions affecting many firms, some market participants concluded that its establishment was driven by specific concerns about Bear Stearns. Over a few days, Bear Stearns experienced a run on its liquidity as many of its lenders grew concerned that the firm would suffer greater losses in the future and stopped providing funding to the firm, even on a fully secured basis with high-quality assets provided as collateral. Late on Thursday, March 13, 2008, the senior management of Bear Stearns notified the Federal Reserve that the firm would likely have to file for bankruptcy protection the following day unless the Federal Reserve provided it with an emergency loan. The Federal Reserve Board feared that the sudden failure of Bear Stearns could have serious adverse impacts on the markets in which Bear Stearns was a significant participant, including the repurchase agreement market. In particular, a Bear Stearns failure might have threatened the liquidity and solvency of other large institutions that relied heavily on short-term secured funding markets. On Friday, March 14, 2008, the Federal Reserve Board voted to authorize FRBNY to provide a $12.9 billion loan to Bear Stearns through JP Morgan Chase Bank, National Association, the largest bank subsidiary of JP Morgan Chase & Co. (JPMC), and to accept $13.8 billion of Bear Stearns assets as collateral. This back-to-back loan transaction was repaid on Monday, March 17, 2008, with almost $4 million of interest. This emergency loan enabled Bear Stearns to avoid bankruptcy and continue to operate through the weekend.
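As a rough plausibility check on the interest figure above (a back-of-the-envelope sketch; the rate and day-count convention are assumptions, since the report states only the loan amount, the dates, and that almost $4 million of interest was paid): the primary credit rate in effect that Friday was about 3.5 percent, and simple interest on $12.9 billion over the 3 days from Friday to Monday at that rate comes to roughly $3.8 million.

    # Back-of-the-envelope check of the bridge loan interest (illustrative).
    # The 3.5 percent rate and actual/360 day count are assumptions.
    principal = 12.9e9  # $12.9 billion loan extended Friday, March 14, 2008
    rate = 0.035        # assumed annual rate near the then-prevailing primary credit rate
    days = 3            # Friday to Monday

    interest = principal * rate * days / 360
    print(f"Estimated interest: ${interest:,.0f}")  # about $3.8 million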
The loan provided time for potential acquirers, including JPMC, to assess Bear Stearns’s financial condition and for FRBNY to prepare a new liquidity program, PDCF, to address strains that could emerge from a possible Bear Stearns bankruptcy announcement the following Monday. Federal Reserve Board and FRBNY officials hoped that bankruptcy could be averted by an announcement, before the markets reopened the following Monday, that a private sector firm would acquire Bear Stearns and stand behind its liabilities. On Sunday, March 16, 2008, the Federal Reserve Board announced that FRBNY would lend up to $30 billion against certain Bear Stearns assets to facilitate JPMC’s acquisition of Bear Stearns. Over the weekend, JPMC had emerged as the only viable acquirer of Bear Stearns. In congressional testimony, Timothy Geithner, who was the President of FRBNY in March 2008, provided the following account: “Bear approached several major financial institutions, beginning on March 13. Those discussions intensified on Friday and Saturday. Bear’s management provided us with periodic progress reports about a possible merger. Although several different institutions expressed interest in acquiring all or part of Bear, it was clear that the size of Bear, the apparent risk in its balance sheet, and the limited amount of time available for a possible acquirer to conduct due diligence compounded the difficulty. Ultimately, only JPMorgan Chase was willing to consider an offer of a binding commitment to acquire the firm and to stand behind Bear’s substantial short-term obligations.” According to FRBNY officials, on the morning of Sunday, March 16, 2008, JPMC’s Chief Executive Officer told FRBNY that the merger would be possible only if certain mortgage-related assets were taken off Bear Stearns’s balance sheet. Negotiations between JPMC and FRBNY senior management resulted in a preliminary agreement under which FRBNY would make a $30 billion nonrecourse loan to JPMC collateralized by these Bear Stearns assets. A March 16, 2008, letter from then-FRBNY president Geithner to JPMC’s Chief Executive Officer documented the terms of the preliminary agreement. Significant issues that threatened to unravel the merger agreement emerged soon after the announcement. Bear Stearns board members and shareholders thought JPMC’s offer to purchase the firm at $2 per share was too low and threatened to vote against the merger. Perceived ambiguity in the terms of the merger agreement raised further concerns that JPMC could be forced to stand behind Bear Stearns’s obligations even if the merger was rejected. Moreover, some Bear Stearns counterparties stopped trading with the firm because of uncertainty about whether JPMC would honor certain Bear Stearns obligations. FRBNY also had concerns about the level of protection provided under the preliminary lending agreement, under which it had agreed to lend on a nonrecourse basis against risky collateral. The risks of an unraveled merger agreement included a possible Bear Stearns bankruptcy and losses for JPMC, which might have been legally required to stand behind the obligations of a failed institution. Recognizing the risk that an unraveled merger posed to JPMC and the broader financial markets, FRBNY officials sought to renegotiate the lending agreement. During the following week, the terms of this agreement were renegotiated, resulting in the creation of a new lending structure in the form of Maiden Lane LLC.
From March 17 to March 24, 2008, FRBNY, JPMC, and Bear Stearns engaged in dual-track negotiations to address each party’s concerns with the preliminary merger and lending agreements. On March 24, 2008, FRBNY and JPMC agreed to a new lending structure that incorporated greater loss protections for FRBNY. Specifically, FRBNY created a special-purpose vehicle (SPV), Maiden Lane LLC, which used the proceeds of a $28.82 billion FRBNY senior loan and a $1.15 billion JPMC subordinated loan to purchase Bear Stearns assets. While one team of Federal Reserve Board and FRBNY staff worked on options to avert a Bear Stearns failure, another team worked to ready PDCF for launch by Monday, March 17, 2008, when Federal Reserve Board officials feared a Bear Stearns bankruptcy announcement might trigger runs on the liquidity of other primary dealers. The liquidity support from TSLF would not become available until the first TSLF auction later in the month. On March 16, 2008, the Federal Reserve Board announced the creation of PDCF to provide overnight collateralized cash loans to the primary dealers. FRBNY quickly implemented PDCF by leveraging the legal and operational infrastructure supporting its existing repurchase agreement relationships with the primary dealers. Although the Bear Stearns bankruptcy was averted, PDCF commenced operation on March 17, 2008, and in its first week extended loans to 10 primary dealers. Bear Stearns was consistently the largest PDCF borrower until June 2008. Eligible PDCF collateral initially included investment-grade corporate securities, municipal securities, and asset-backed securities, including mortgage-backed securities. The Federal Reserve Board authorized an expansion of the collateral types eligible for PDCF loans later in the crisis. The program was terminated on February 1, 2010. In September 2008, the bankruptcy of Lehman Brothers triggered an intensification of the financial crisis, and the Federal Reserve Board modified the terms of its existing liquidity programs to address worsening conditions. On September 14, 2008, shortly before Lehman Brothers announced it would file for bankruptcy, the Federal Reserve Board announced changes to TSLF and PDCF to provide expanded liquidity support to primary dealers. Specifically, the Federal Reserve Board announced that TSLF-eligible collateral would be expanded to include all investment-grade debt securities and that PDCF-eligible collateral would be expanded to include all securities eligible to be pledged in the triparty repurchase agreement system, including noninvestment-grade securities and equities. In addition, TSLF schedule 2 auctions would take place weekly rather than biweekly. On September 21, 2008, the Federal Reserve Board announced that it would extend credit—on terms similar to those applicable for PDCF loans—to the U.S. and London broker-dealer subsidiaries of Merrill Lynch & Co. (Merrill Lynch), Goldman Sachs Group Inc. (Goldman Sachs), and Morgan Stanley to provide support to these subsidiaries as they became part of bank holding companies that would be regulated by the Federal Reserve System. On September 29, 2008, the Federal Reserve Board also announced expanded support through TAF and the dollar swap lines. Specifically, the Federal Reserve Board doubled the amount of funds that would be available in each TAF auction cycle from $150 billion to $300 billion, and the FOMC authorized a $330 billion expansion of the swap line arrangements with foreign central banks.
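Returning briefly to the Maiden Lane structure described above: because JPMC’s $1.15 billion loan was subordinated to FRBNY’s $28.82 billion senior loan, JPMC stood to absorb losses on the purchased assets before FRBNY did, which is how the renegotiated structure gave FRBNY greater loss protection. The following is a minimal sketch of that loss allocation; the final asset-value scenarios are hypothetical, and the sketch ignores interest accrual and asset cash flows.

    # Simplified loss waterfall for a senior/subordinated structure like
    # Maiden Lane LLC. Scenario values are hypothetical.
    SENIOR = 28.82e9       # FRBNY senior loan
    SUBORDINATED = 1.15e9  # JPMC subordinated loan

    def allocate(asset_value):
        """Repay the senior loan first; the subordinated loan absorbs first losses."""
        senior_repaid = min(asset_value, SENIOR)
        sub_repaid = min(asset_value - senior_repaid, SUBORDINATED)
        return senior_repaid, sub_repaid

    for value in (30.0e9, 29.5e9, 28.0e9):  # hypothetical final asset values
        senior_repaid, sub_repaid = allocate(value)
        print(f"assets ${value / 1e9:.2f}B -> "
              f"FRBNY loss ${(SENIOR - senior_repaid) / 1e9:.2f}B, "
              f"JPMC loss ${(SUBORDINATED - sub_repaid) / 1e9:.2f}B")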
In the months following Lehman's bankruptcy, the Federal Reserve Board authorized several new liquidity programs under section 13(3) of the Federal Reserve Act to provide support to other key funding markets, such as the commercial paper and asset-backed security markets. In contrast to earlier emergency programs that represented relatively modest extensions of established Federal Reserve System lending or open market operation activities, these newer programs incorporated more novel design features and targeted new market participants with which the Reserve Banks had not historically transacted. As was the case with the earlier programs, many of these newer programs were designed and launched under extraordinary time constraints as the Federal Reserve Board sought to address rapidly deteriorating market conditions. In order of their announcement, these programs included (1) the Asset-Backed Commercial Paper Money Market Mutual Fund Liquidity Facility (AMLF), to provide liquidity support to money market mutual funds (MMMF) in meeting redemption demands from investors and to foster liquidity in the asset-backed commercial paper (ABCP) market; (2) the Commercial Paper Funding Facility (CPFF), to provide a liquidity backstop to eligible issuers of commercial paper; (3) the Money Market Investor Funding Facility (MMIFF), to serve as an additional backstop for MMMFs; and (4) the Term Asset-Backed Securities Loan Facility (TALF), to assist certain securitization markets that supported the flow of credit to households and businesses. On September 19, 2008, the Federal Reserve Board authorized FRBB to establish AMLF to provide liquidity support to MMMFs facing redemption pressures. According to FRBB staff, the processes and procedures to implement AMLF were designed over the weekend before FRBB commenced operation of AMLF on September 22, 2008. MMMFs were a major source of short-term credit for financial institutions, including through MMMFs' purchases and holdings of ABCP, and ABCP continued to be an important source of funding for many businesses. Following the announcement that a large MMMF had "broken the buck"—its net asset value fell below $1 per share—as a result of losses on Lehman's commercial paper, other MMMFs faced a large wave of redemption requests as investors sought to limit their potential exposures to the financial sector. The Federal Reserve Board was concerned that attempts by MMMFs to raise cash through forced sales of ABCP and other assets into illiquid markets could further depress the prices of these assets and exacerbate strains in short-term funding markets. AMLF's design, which relied on intermediary borrowers to use Reserve Bank loans to fund the same-day purchase of eligible ABCP from MMMFs, reflected the need to overcome practical constraints in lending to MMMFs directly. According to Federal Reserve System officials, MMMFs would have had limited capacity to borrow directly from the Reserve Banks in amounts sufficient to meet redemption requests because of statutory and fund-specific limitations on fund borrowing. To support the MMMF market quickly, the Federal Reserve Board authorized loans to entities that conduct funding and custodial activities with MMMFs (activities that include holding and administering the accounts containing MMMF assets) to fund the purchase of ABCP from MMMFs. Eligible borrowers were identified as discount-window-eligible depository institutions (U.S. depository institutions and U.S. branches and agencies of foreign banks) and U.S.
bank holding companies and their U.S. broker-dealer affiliates. The interest rate on AMLF loans was lower than the returns on eligible ABCP, providing an incentive for eligible intermediary borrowers to participate. AMLF closed on February 1, 2010. On October 7, 2008, the Federal Reserve Board announced the creation of CPFF to provide a liquidity backstop to U.S. issuers of commercial paper. Commercial paper is an important source of short-term funding for U.S. financial and nonfinancial businesses. CPFF became operational on October 27, 2008, and was operated by FRBNY. In establishing CPFF, FRBNY created an SPV that was to directly purchase new issues of eligible ABCP and unsecured commercial paper with the proceeds of loans it received from FRBNY for that purpose. In the weeks leading up to CPFF's announcement, the commercial paper markets showed clear signs of strain: the volume of commercial paper outstanding declined, interest rates on longer-term commercial paper increased significantly, and increasing amounts of commercial paper were issued on an overnight basis as money market funds and other investors became reluctant to purchase commercial paper at longer-dated maturities. During this time, MMMFs faced a surge of redemption demands from investors concerned about losses on presumably safe instruments. The Federal Reserve Board concluded that disruptions in the commercial paper markets, combined with strains in other credit markets, threatened the broader economy because many large commercial paper issuers supported the flow of credit to households and businesses. By standing ready to purchase eligible commercial paper, CPFF was intended to eliminate much of the risk that commercial paper issuers would be unable to issue new commercial paper to replace their maturing commercial paper obligations. By reducing this risk, CPFF was expected to encourage investors to continue or resume their purchases of commercial paper at longer maturities. CPFF closed on February 1, 2010. On October 21, 2008, the Federal Reserve Board authorized FRBNY to work with the private sector to create MMIFF to serve as an additional backstop for MMMFs. MMIFF complemented AMLF by standing ready to purchase a broader range of short-term debt instruments held by MMMFs, including certificates of deposit and bank notes. MMIFF's design featured a complex lending structure through which five SPVs would purchase eligible instruments from eligible funds. In contrast to other Federal Reserve Board programs that created SPVs, the MMIFF SPVs were set up and managed by private sector entities. According to FRBNY staff, JPMC, in collaboration with other firms that sponsored large MMMFs, brought the idea for an MMIFF-like facility to FRBNY in early October 2008. FRBNY worked with JPMC to set up the MMIFF SPVs but did not contract directly with JPMC or the firm that managed the MMIFF program. While MMIFF became operational in late November 2008, it was never used. In November 2008, the Federal Reserve Board authorized FRBNY to create TALF to reopen the securitization markets in an effort to improve access to credit for consumers and businesses. During the recent financial crisis, the value of many asset-backed securities (ABS) dropped precipitously, bringing originations in the securitization markets to a virtual halt.
Problems in the securitization markets threatened to make it more difficult for households and small businesses to access the credit that they needed to, among other things, buy cars and homes and expand inventories and operations. TALF provided nonrecourse loans to eligible U.S. companies and individuals in return for collateral in the form of securities that could be forfeited if the loans were not repaid. TALF was one of the more operationally complex programs, and the first TALF subscription was not held until March 2009. In contrast to other programs that had been launched in days or weeks, TALF required several months of preparation to refine program terms and conditions and to consider how to leverage vendor firms to best achieve TALF policy objectives. TALF closed on June 30, 2010. In late 2008 and early 2009, the Federal Reserve Board again invoked its authority under section 13(3) of the Federal Reserve Act to authorize assistance to avert the failures of three institutions that it determined to be systemically significant: (1) American International Group, Inc. (AIG); (2) Citigroup, Inc. (Citigroup); and (3) Bank of America Corporation (Bank of America). In September 2008, the Federal Reserve Board and the Treasury determined through analysis of information provided by AIG and insurance regulators, as well as publicly available information, that market events could cause AIG to fail, which would pose systemic risk to financial markets. The Federal Reserve Board and subsequently Treasury took steps to ensure that AIG obtained sufficient liquidity and could complete an orderly sale of some of its operating assets and continue to meet its obligations. On September 16, 2008, one day after the Lehman Brothers bankruptcy announcement, the Federal Reserve Board authorized FRBNY to provide a revolving credit facility (RCF) of up to $85 billion to help AIG meet its obligations. The AIG RCF was created to provide AIG with a revolving loan that AIG and its subsidiaries could use to address strains on their liquidity. The announcement of this assistance followed a downgrade of the firm's credit rating, which had prompted collateral calls by its counterparties and raised concerns that a rapid failure of the company would further destabilize financial markets. Two key sources of AIG's difficulties were AIG Financial Products Corp. (AIGFP) and a securities lending program operated by insurance subsidiaries of AIG. AIGFP faced growing collateral calls on credit default swaps it had written on collateralized debt obligations (CDO). Meanwhile, AIG faced demands on its liquidity from securities lending counterparties who were returning borrowed securities and demanding that AIG return their cash collateral. Despite the announcement of the AIG RCF, AIG's condition continued to decline rapidly in fall 2008. On subsequent occasions, the Federal Reserve Board invoked section 13(3) of the Federal Reserve Act to authorize either new assistance or a restructuring of existing assistance to AIG. First, in October 2008, the Federal Reserve Board authorized the creation of the securities borrowing facility (SBF) to provide up to $37.8 billion of direct funding support to the securities lending program operated by AIG's domestic insurance companies. From October 8, 2008, through December 11, 2008, FRBNY provided cash loans to AIG's domestic life insurance companies, collateralized by investment-grade debt obligations.
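The liquidity strain created by the securities lending program can be illustrated with a stylized sketch. In a typical securities lending program, cash collateral received from borrowers is reinvested, partly in less-liquid assets; the figures and the reinvestment split below are hypothetical assumptions for illustration, not data from this report.

```python
# Stylized illustration of the securities lending cash drain described
# above. All numbers are hypothetical; a real program's liquidity depends
# on its actual reinvestment portfolio and the pace of counterparty returns.

cash_collateral_held = 100.0   # collateral received from securities borrowers
liquid_share = 0.20            # assumed fraction reinvested in liquid assets

def forced_sales_needed(collateral_returned: float) -> float:
    """Cash that must be raised by selling less-liquid assets when
    counterparties return borrowed securities and reclaim their collateral."""
    liquid_cash = cash_collateral_held * liquid_share
    return max(collateral_returned - liquid_cash, 0.0)

# If counterparties reclaim half the collateral at once, most of it must be
# raised by selling less-liquid assets into an already stressed market:
print(forced_sales_needed(50.0))  # 30.0
```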
In November 2008, the Federal Reserve Board and Treasury restructured the assistance to AIG as part of plans to further strengthen the company's financial condition and once again avert its failure. Under the restructured terms, Treasury purchased $40 billion in shares of AIG preferred stock, and the cash from the sale was used to pay down a portion of AIG's outstanding balance on the AIG RCF. The limit on the facility also was reduced to $60 billion, and other changes were made. Also in November 2008, the Federal Reserve Board authorized the creation of two SPVs—Maiden Lane II LLC and Maiden Lane III LLC—to purchase certain AIG-related assets. Similar to Maiden Lane LLC, these SPVs funded most of their asset purchases with senior loans from FRBNY. Maiden Lane II replaced the AIG SBF and served as a longer-term solution to the liquidity problems facing AIG's securities lending program. Maiden Lane III purchased the underlying CDOs from AIG counterparties in connection with the termination of credit default swap contracts issued by AIGFP, thereby eliminating the liquidity drain from collateral calls on the credit default swaps sold by AIGFP. In March 2009, the Federal Reserve Board and Treasury announced plans to further restructure AIG's assistance. According to the Federal Reserve Board, debt owed by AIG on the AIG RCF would be reduced by $25 billion in exchange for FRBNY's receipt of preferred equity interests totaling $25 billion in two SPVs. AIG created both SPVs to hold the outstanding common stock of two life insurance company subsidiaries—American Life Insurance Company and AIA Group Limited. Also in March 2009, the Federal Reserve Board authorized FRBNY to provide additional liquidity to AIG by extending credit through the purchase of a contemplated securitization of income from certain AIG life insurance operations. FRBNY staff said this life insurance securitization option was abandoned for a number of reasons, including that it would have required FRBNY to manage a long-term exposure to life insurance businesses with which it had little experience. On November 23, 2008, the Federal Reserve Board authorized FRBNY to provide a lending commitment to Citigroup as part of a package of coordinated actions by Treasury, FDIC, and the Federal Reserve Board to avert a disorderly failure of the company. As discussed in our April 2010 report on Treasury's use of the systemic risk determination, Treasury, FDIC, and the Federal Reserve Board said they provided emergency assistance to Citigroup because they were concerned that the failure of a firm of Citigroup's size and interconnectedness would have had systemic implications. FRBNY agreed to lend against the residual value of approximately $300 billion of Citigroup assets if losses on these assets exceeded certain thresholds. On the basis of analyses by the various parties and an outside vendor, FRBNY determined that losses on the Citigroup "ring-fence" assets would be unlikely to reach the amount at which FRBNY would be obligated to provide a loan. At Citigroup's request, Treasury, FDIC, and FRBNY agreed to terminate this loss-sharing agreement in December 2009. As part of the termination agreement, Citigroup agreed to pay a $50 million termination fee to FRBNY. FRBNY never provided a loan to Citigroup under this lending commitment. On January 15, 2009, the Federal Reserve Board authorized FRBR to provide a lending commitment to Bank of America.
As with Citigroup, the Federal Reserve Board authorized this assistance as part of a coordinated effort with Treasury and FDIC to assist an institution that the agencies determined to be systemically important. The circumstances surrounding the agencies' decision to provide this arrangement for Bank of America, however, were somewhat different and were the subject of congressional hearings. While the Citigroup loss-sharing agreement emerged during a weekend over which the agencies attempted to avert an impending failure of the firm, the agencies' discussions with Bank of America about a possible similar arrangement occurred over several weeks during which Bank of America was not facing imminent failure. According to Federal Reserve Board officials, possible assistance for Bank of America was first discussed in late December 2008, when Bank of America management raised concerns about the financial impact of completing the merger with Merrill Lynch, which was expected at the time to announce larger-than-anticipated losses (and did in fact announce these losses the following month). Following the January 1, 2009, completion of Bank of America's acquisition of Merrill Lynch, the Federal Reserve Board and the other agencies agreed to provide a loss-sharing agreement on selected Merrill Lynch and Bank of America assets to assure markets that unusually large losses on these assets would not destabilize Bank of America. On September 21, 2009, the agencies and FRBR terminated the agreement in principle to enter into a loss-sharing agreement with Bank of America. The agreement was never finalized, and FRBR never provided a loan to Bank of America under this lending commitment. As part of the termination, Bank of America paid FRBR $57 million, compensating FRBR for out-of-pocket expenses and paying an amount equal to the commitment fees required by the agreement. On November 25, 2008, the FOMC announced that FRBNY would purchase up to $500 billion of agency mortgage-backed securities to support the housing market and the broader economy. The FOMC authorized the Agency MBS program under its authority to direct open market operations under section 14 of the Federal Reserve Act. By purchasing agency MBS with longer maturities, the program was intended to lower long-term interest rates and to improve conditions in mortgage and other financial markets. The Agency MBS program commenced purchases on January 5, 2009, a little more than a month after the initial announcement. FRBNY staff noted that a key operational challenge for the program was its size. FRBNY hired external investment managers to provide the execution support and advisory services needed to help execute purchases on such a large scale. In March 2009, the FOMC increased the total amount of planned purchases from $500 billion to up to $1.25 trillion. The program executed its final purchases in March 2010, and settlement was completed in August 2010. On several occasions, the Federal Reserve Board authorized extensions of its emergency loan programs, and most of these programs closed on February 1, 2010. For example, AMLF, PDCF, and TSLF were extended three times. The Federal Reserve Board cited continuing disruptions in financial markets in announcing each of these extensions. Table 6 provides a summary of the extensions for the emergency programs. We conducted a brief Web-based survey of all Federal Reserve Bank (FRB) directors who served in 2010.
The purpose of this survey was to gather basic information from FRB directors to fulfill GAO's congressional mandate to assess Federal Reserve Bank governance. Specifically, the survey asked about each director's (1) educational and professional background; (2) roles and responsibilities as an FRB director; and (3) opinions on FRB governance. The survey questions and summary results can be found below. We sent a survey to all 105 directors who served for the full year during 2010 and received completed surveys from 91 directors (an 87 percent response rate). The web-based survey was administered from April 4, 2011, to May 6, 2011. Directors were sent an e-mail invitation to complete the survey on a GAO web server using a unique username and password. Nonrespondents received a reminder e-mail from GAO to complete the survey. We also contacted the corporate secretaries at every bank and asked them to encourage their directors to participate in the survey. Even though we received responses from a majority of directors in all 12 banks, some bias may exist in certain survey responses if characteristics of respondents differed from those of nonrespondents in ways that affect the responses (for example, directors who knew of a potential conflict of interest at their bank may have been less likely to respond to the survey). The practical difficulties of conducting any survey may introduce additional nonsampling errors, such as difficulties interpreting a particular question, which can introduce unwanted variability into the survey results. We took steps to minimize nonsampling errors by pretesting the questionnaire with three directors in February and March 2011. We conducted pretests to make sure that the questions were clear and unbiased and that the questionnaire did not place an undue burden on respondents. An independent reviewer within GAO also reviewed a draft of the questionnaire prior to its administration. We made appropriate revisions to the content and format of the questionnaire after the pretests and independent review. All data analysis programs were independently verified for accuracy. We are interested in learning about the breadth of experience that Federal Reserve Bank directors bring to their positions on the board.

1. How many years have you served as a Federal Reserve Bank head office director?

2. Educational background of Federal Reserve Bank directors. Degree categories:
Associate's degree (for example: AA, AS)
Bachelor's degree (for example: BA, BS)
At least one advanced degree (master's, professional, or doctorate)
Professional degree (for example: MD, DDS, JD)
Doctorate (for example: PhD, EdD)

3. Work experience of Federal Reserve Bank directors. Industry categories:
Mining, Quarrying, and Oil and Gas Extraction
Information (Publishing, Broadcasting, and Telecommunications)
Financial Services (directors who selected at least one of the following five categories):
Credit Intermediation and Related Activities
Securities, Commodity Contracts, and Other Financial Investments and Related Activities
Insurance Carriers and Related Activities
Funds, Trusts, and Other Financial Vehicles
Offices of bank or other holding companies/Corporate, Subsidiary, and Regional Managing Offices
Real Estate and Rental and Leasing
Professional, Scientific, and Technical Services (legal, accounting, consulting, design, advertising, and public relations services)

4. Do you currently serve on any other boards (i.e., nonprofit, private, or public company boards)?

5.
Has someone from your current employer served as a Federal Reserve Bank (FRB) board director in the past 10 years?

We are interested in learning about your duties as a Federal Reserve Bank director.

6. As an FRB director, which of the following do you primarily represent? (check only one box) Seven directors provided an open-ended response to describe who they represent. Four directors indicated that their constituencies included the public, their business or industry, and other businesses or industries in the district. The other three directors listed, respectively, food manufacturing and private equity; labor, transportation, communications, construction, and the public sector; and civic leadership and the nonprofit sector as the industries that they represent.

7. The three principal functions of FRB directors are listed below. Within each of these principal functions, which activities have you been involved in at your FRB? (check one box per question)

8. How frequently do you communicate with the following Reserve Bank personnel while carrying out your official duties? (check one box per person) In the open-ended question that asked directors to specify the "other" FRB staff with whom they interacted, directors listed the following staff members: assistants to senior management, executive vice president of operations, vice president of Information Technology, assistant general auditor, librarian, Federal Reserve Information Technology officers, members of the Federal Reserve Board, presidents of other Reserve Banks, other staff as questions arise, and vice president of Human Resources/Diversity.

9. In the past year, in your role as a director, have you been involved in any Division of Supervision and Regulation matters in which you did any of the following: (check one box per question)

Supervision and Regulation Activities:
Were involved in making decisions about specific banks that the FRB supervises?
Received general information about the supervisory status of banks in the district?
Received supervisory information about the status of any specific banks?
Were involved in making personnel decisions about pay or promotion for employees in the Division of Supervision and Regulation?
Were involved in making decisions about the budget for the Division of Supervision and Regulation?
Had other involvement with the Division of Supervision and Regulation not described above?

GAO asked directors who answered "yes" to any of these questions to explain their answer. We analyzed the open-ended answers for this question, and no improper conflicts of interest were identified.

10. The following questions are about the FRB's code or standards of conduct for directors (code). Did you do any of the following? (check one box per question)

Standard of Conduct:
Receive training on the code at your FRB at the beginning of your term in office?
Receive training on the code in Washington, D.C., at the beginning of your term in office?
Sign an oath of office at the beginning of your term agreeing to adhere to the code?
Receive an annual briefing on the code of conduct by a member of the bank's senior management?
Sign an annual certification agreeing to adhere to the code?

GAO asked directors who answered "no" to any of the above questions to provide an explanation. Four directors stated they were unable to attend the training in Washington, D.C. Two directors said they attended the training but did not receive training on the code of conduct, and two other directors said they did not recall whether they signed an annual certification.
11. Are you aware of any past or current conflicts of interest with any FRB directors in your district? GAO asked directors who responded "yes" to this question to explain the conflict and how it was resolved. Five directors provided responses to this open-ended question on the survey. Two of the responses described actual or potential conflicts of interest involving procurement matters, and the directors recused themselves from voting on the matter. One of those directors also noted that the CEO of Lehman Brothers, Inc., resigned as a director because the company was requesting assistance from the FRB. Another described a director who resigned because he wished to be involved in a political campaign. One director declined a board position at another entity because of perceived conflicts of interest. Another director noted that the board was apprised of a potential conflict of interest between a branch director and FRB auditors, and that the situation was resolved and reported to the Audit Committee.

We are interested in learning about your views on how, if at all, Federal Reserve Bank governance practices could be strengthened.

12. In terms of Federal Reserve Bank governance, how would you strengthen achievement in the following areas, if at all? Please include examples of practices in your district or from other relevant board experience that may assist the Federal Reserve System in strengthening achievement in the following areas.
a. Improve public representation on FRB boards?
b. Eliminate actual or potential conflicts of interest of Reserve Bank directors?
c. Increase the availability of information useful for the formation and execution of monetary policy?
d. Increase the effectiveness or efficiency of Reserve Banks?

The open-ended responses were analyzed and included as examples in the report when appropriate. The Reserve Bank boards use committees to help oversee the operations of the Reserve Banks and their branches. The Federal Reserve Board requires all Reserve Banks to have standing audit committees and, as needed, search committees for the selection and appointment of a president. The Reserve Banks use various other committees, including budget and governance committees. In addition to the contact named above, Karen Tremba (Assistant Director), Sonja Bensen, Kathleen Boggs, Tania Calhoun, Emily Chalmers, Helen Culbertson, Rachel DeMarcus, Heather Hampton, Grace Haskins, Camille Keith, Jill Lacey, Marc Molino, Rubin Montes de Oca, and Andrew Stavisky made significant contributions to this report. | Events surrounding the 2007 financial crisis raised questions about the governance of the 12 Federal Reserve Banks (Reserve Banks), particularly the boards of directors' roles in activities related to supervision and regulation. The Dodd-Frank Wall Street Reform and Consumer Protection Act required GAO to review the governance of the Reserve Banks. This report (1) analyzes the level of diversity on the boards of directors and assesses the extent to which the process of identifying possible directors and appointing them results in diversity on the boards, (2) evaluates the effectiveness of policies and practices for identifying and managing conflicts of interest for Reserve Bank directors, and (3) compares Reserve Bank governance practices with the practices of selected organizations.
The Federal Reserve Act requires each Reserve Bank to be governed by a nine-member board--three Class A directors elected by member banks to represent their interests, three Class B directors elected by member banks to represent the public, and three Class C directors appointed by the Federal Reserve Board to represent the public. The diversity of Reserve Bank boards was limited from 2006 to 2010. For example, in 2006 minorities accounted for 13 of 108 director positions, and in 2010 they accounted for 15 of 108 director positions. Specifically, in 2010 Reserve Bank directors included 78 white men, 15 white women, 12 minority men, and 3 minority women. According to the Federal Reserve Act, Class B and C directors are to be elected with due but not exclusive consideration to the interests of agriculture, commerce, industry, services, labor, and consumer representation. During this period, labor and consumer groups had less representation than other industries. In 2010, 56 of the 91 directors who responded to GAO's survey had financial markets experience. Reserve Banks generally review the current demographics of their boards and use a combination of personal networking and community outreach efforts to identify potential candidates for directors. Reserve Bank officials said that they generally limit their director search efforts to senior executives. GAO's analysis of Equal Employment Opportunity Commission data found that diversity among senior executives is generally limited. While some Reserve Banks recruit more broadly, GAO recommends that the Federal Reserve Board encourage all Reserve Banks to consider ways to help enhance the economic and demographic diversity of perspectives on the boards, including by broadening their potential candidate pool. The Federal Reserve System mitigates and manages actual and potential conflicts of interest by, among other things, defining the directors' roles and responsibilities, monitoring adherence to conflict-of-interest policies, and establishing internal controls to identify and manage potential conflicts. Reserve Bank directors are often affiliated with a variety of financial firms, nonprofits, and private and public companies. As the financial services industry evolves, more companies are becoming involved in financial services or interconnected with financial institutions. As a result, directors of all three classes can have ties to the financial sector. While these relationships may not give rise to actual conflicts of interest, they can create the appearance of a conflict, as illustrated by the participation of director-affiliated institutions in the Federal Reserve System's emergency programs. To increase transparency, GAO recommends that all Reserve Banks clearly document the directors' role in supervision and regulation activities in their bylaws. One option for addressing directors' conflicts of interest is for the Reserve Bank to request a waiver from the Federal Reserve Board, which, according to officials, is rare. Most Reserve Banks do not have a process for formally requesting such waivers. To strengthen governance practices and increase transparency, GAO recommends that the Reserve Banks develop and document a process for requesting conflict waivers for directors. The Federal Reserve System's governance practices are generally similar to those of selected central banks and comparable institutions such as bank holding companies and have similar selection procedures for directors.
However, Reserve Bank governance practices tend to be less transparent than those of these institutions. For instance, comparable organizations make information on their board committees and ethics policies available on their websites; most Reserve Banks do not. |
The major goals of medical tort laws are to (1) deter poor quality health care, (2) compensate the victims of negligent acts, and (3) penalize negligent providers. The system operates under the assumption that negligent behavior can be controlled and corrected by the hospitals and physicians themselves. It relies primarily on deterrence due to the threat of liability and disciplinary action. While this report focuses on the cost of medical liability borne by hospitals and physicians, the deterrence threat of tort law may lower costs incurred by consumers by reducing the number and severity of negligent medical acts. (See appendix I for a discussion of the legal basis for medical liability actions.) At least two factors have prompted calls for medical liability reform. First, some research suggests that the medical tort system is not achieving its goals. For example, one study reported that only a fraction of malpractice injuries result in claims, compensation is often unrelated to the existence of medical negligence, the legal system is slow at resolving claims, and legal fees and administrative costs consume almost half of the compensation. The second factor is the perception among some hospital officials and physicians that the current tort system places an unreasonable burden on their industry. Officials from the American Hospital Association and the American Medical Association contend that liability-related costs are too high and unduly influence the way hospitals deliver services and physicians practice medicine. The Congress has before it a number of legislative proposals that are intended to directly and indirectly reduce tort liability in the health care industry. To identify the various types of medical liability costs, we interviewed and collected data from a variety of sources, including the American Hospital Association, the American Medical Association, the American Bar Association, the St. Paul Fire and Marine Insurance Company, and individual hospitals and hospital systems. In addition, we reviewed recent professional and academic journals, such as the Journal of the American Medical Association and Health Affairs. From our research, we identified three studies that estimate certain hospital and physician medical liability costs. These studies were prepared by the General Accounting Office (GAO), the Congressional Budget Office (CBO), and the Office of Technology Assessment (OTA). We reviewed these studies to determine whether their estimates included all types of medical liability costs. In addition, we examined other studies that (1) estimated components of medical liability costs not included in these three studies or (2) used different methodologies to arrive at their estimates. We cannot project costs or generalize our findings because we did not use statistical methods to select the sources of the liability cost data we collected and did not collect data associated with all four categories of liability costs we identified. Also, because our work often involved data that some sources regarded as proprietary or sensitive, we agreed not to identify some sources in examples cited in our report. We did not verify the accuracy of the data. We performed our review from January 1995 through April 1995 in accordance with generally accepted government auditing standards. We discussed a draft of our report with CBO and OTA officials and have incorporated their comments where appropriate. 
Malpractice insurance is the first category of medical liability costs we identified and the cost specifically measured by each of the three studies. Most physicians and hospitals purchase medical malpractice insurance to protect themselves from medical malpractice claims. In most cases, the insurer will pay any claims up to a specific limit of coverage during a fixed period in return for a fee. The insurer investigates the claim and defends the physician or hospital. While hospital and physician insurance contracts can vary greatly, we have included the following types of costs in the medical malpractice insurance cost category: premiums for purchased insurance, hospital contributions for self-insurance, and payments made from hospitals' general revenues and reserves and physicians' personal assets to cover uninsured malpractice losses. (See appendix II for a detailed discussion of the types of hospital and physician insurance policies and related costs.) The CBO and OTA studies estimated costs primarily associated with purchased insurance. The CBO study reported the cost of purchased insurance in 1990, which totaled $5 billion and represented 0.74 percent of national health care expenditures. The OTA study measured purchased insurance and self-insurance costs in 1991 and reported that purchased insurance totaled $4.86 billion in 1991, or 0.66 percent of national health care expenditures. The study estimated self-insurance costs at 20 percent to 30 percent of premiums, which would mean that purchased insurance and self-insurance together amounted to between $5.8 billion and $6.3 billion in 1991 (that is, $4.86 billion plus 20 to 30 percent of that amount), still less than 1 percent of national health care expenditures. Other studies that measured purchased insurance and self-insurance for the same periods studied by CBO and OTA estimated costs to be higher. Tillinghast, an actuarial and consulting firm, used its internal database of state-by-state malpractice insurance costs rather than insurance industry data because those data do not include self-insurance. Tillinghast estimated malpractice insurance costs in 1990 at over $8.2 billion. Another consulting firm, Lewin-VHI, Inc., used an estimate that malpractice insurance other than that purchased represents 86 percent of purchased insurance. This firm estimated malpractice insurance costs at $9.2 billion in 1991. Table 1 summarizes the estimates of malpractice insurance costs in 1990 and 1991. Our mid-1980s study measured all elements in our malpractice insurance cost category. To obtain information on hospital malpractice insurance costs, we analyzed data from a randomly selected sample of 1,248 hospitals. We obtained physician malpractice expense data from (1) American Medical Association reports quantifying expenses incurred by every known self-employed physician in the United States and (2) information collected from leading physician malpractice insurance companies. We reported that malpractice insurance costs for self-employed physicians averaged 9 percent of their total professional expenses in 1984, while malpractice insurance costs for hospitals accounted for 1 percent of their average inpatient per-day expense in 1985. Insurance company officials stated that the insurance market has changed since 1985 as more hospitals have established self-insurance programs and increased their self-insurance limits, thereby reducing their reliance on purchased insurance. However, the impact of this trend on costs has not been measured. Physician malpractice insurance costs vary by state and can vary within a state.
Figure 1 presents The St. Paul Fire and Marine Insurance Company's 1994 rates for mature claims-made policies covering physicians in mid-range liability risk classes, with limits primarily at $1 million/$3 million. In certain states, lower limits are mandatory or more common due to patient compensation funds. Variations by state and within states generally reflect the insurance company's claims and loss experience. Table 2 presents the rates the company provided for selected metropolitan areas that have rating territories separate from the remainder of their respective states. Across all rating territories, the annual premium for $1 million/$3 million coverage under claims-made policies ranged from a low of $5,388 in Arkansas to a high of $48,718 in Chicago. Within each rating territory, physicians' malpractice insurance costs also vary by specialty. For example, one insurer's average 1993 mature claims-made rates for policies providing $1 million/$3 million coverage limits to physicians in Texas ranged from $7,410 (except $9,877 in Houston) for family practitioners performing no surgery, a low-risk practice, to $54,834 (except $73,089 in Houston) for physicians specializing in obstetrics and gynecology, a high-risk specialty. While malpractice insurance rates are generally insensitive to a physician's malpractice history, a physician's malpractice claims history can lead to denial or termination of coverage. Hospital malpractice insurance costs vary according to claim trends in the state where the hospital is located, the number of occupied beds and outpatient visits, the limits of liability selected, the types of procedures performed, and the number of years the hospital has been insured under claims-made coverage. Malpractice insurance rates for hospitals are also frequently based on the malpractice loss experience (in terms of the number of claims filed and the amount per paid claim) of the individual hospital. Figure 2 presents The St. Paul Fire and Marine Insurance Company's per-bed average acute care rates for mature claims-made coverage at $1 million/$3 million limits of liability, except in states where lower limits are mandatory or in states with patient compensation funds. Table 3 presents The St. Paul Fire and Marine Insurance Company's per-bed average acute care rates for hospitals in selected metropolitan areas that have rating territories separate from the remainder of their respective states. The annual per-bed rates ranged from a low of $612 in South Dakota to a high of $7,734 in Detroit. Defensive medicine includes the following hospital and physician actions aimed at reducing the risk of medical malpractice claims: additional or more complex diagnostic tests and procedures, and additional patient visits and time spent with patients. The costs of defensive medicine cannot be easily estimated because of difficulties in defining it and distinguishing it from clinically justified medical care. For example, if the definition includes only conscious defensive medicine, it could exclude defensive medical practices acquired during medical training. Thus, the definition would need to address the question of the physician's motive for performing tests: Should cost estimates for defensive medicine encompass only procedures performed for "purely" defensive purposes, or should they also include procedures performed for "primarily" defensive purposes? Cost estimates would vary greatly depending upon the definition used.
Also, it is difficult to segregate the costs of those defensive acts that produce little or no medical benefit from those that are medically justified, such as additional tests that rule out certain diagnoses. Defensive medical practices can be classified as positive or negative. Positive defensive medicine involves tests and treatment that would not be provided if the threat of being sued were not present. For example, physicians may order more tests or procedures, take more time to explain risks or treatment options, and spend more time maintaining patient records than they would if there were no threat of malpractice suits. Negative defensive medicine involves not performing services because of the risk of malpractice actions. For example, physicians may restrict the scope of their practices to low-risk patients or procedures. While positive defensive medicine drives up the cost of health care, negative defensive medicine reduces its availability. The following discussion is limited to positive defensive medicine. Certain physician specialists may practice more defensive medicine than others. Defensive medicine is generally considered to be more extensive in surgery, radiology, cardiology, emergency medicine, and obstetrics and gynecology. As we previously reported, in 1990 Maine imposed practice guidelines by law that state officials expect will decrease these specialists' motivation to practice defensive medicine. These practice guidelines are intended to reduce the number of diagnostic tests and procedures that are performed for defensive purposes, including preoperative tests, such as some electrocardiograms and chest x-rays, cervical spine x-rays for some emergency room patients, some breast biopsies, and some colonoscopies. High rates of caesarean section are also cited as evidence of defensive medicine. According to the results of our earlier review, the hospitals we visited analyzed their physicians' practice patterns in an effort to reduce costs. In some cases, the hospitals found that some physicians provided a significant amount of unnecessary or excessively sophisticated services but could not determine whether the provision of these services represented defensive medicine. For example, one hospital we visited reviewed its physicians' use of low osmolality contrast agents in its cardiac catheterization lab. Among health care professionals, the widespread use of low osmolality contrast agents is often viewed as a function of defensive medicine. Physicians use the low osmolality agents because high osmolality contrast agents have been associated with mild to moderate adverse reactions, such as nausea and vomiting, as well as more serious adverse reactions. The average cost of the low osmolality agent used in that hospital was $146.10, compared to $6.96 for the high osmolality agent, and the low osmolality agent accounted for 95 percent of the contrast media used in the hospital's cardiac catheterization laboratory. Because numerous research articles have suggested that the adverse effects of high osmolality agents were easily manageable and did not result in increased medical costs, the hospital limited the use of low osmolality agents to the approximately 30 percent of patients considered to be at high risk. Because the hospital performs 5,000 procedures in its cardiac catheterization laboratory annually, it projects yearly savings of over $400,000.
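The hospital's exact calculation was not reported, but a back-of-envelope sketch using the figures above reproduces the order of magnitude. The assumed drop in the low osmolality share, from 95 percent of procedures to the roughly 30 percent of patients at high risk, is an inference from the passage.

```python
# Back-of-envelope check of the hospital's projected savings. The usage
# shares below are inferred from the passage (95% before the policy change,
# ~30% after); the hospital's actual method was not reported.

procedures_per_year = 5_000
cost_low = 146.10   # average cost per use, low osmolality agent (dollars)
cost_high = 6.96    # average cost per use, high osmolality agent (dollars)

share_low_before = 0.95  # low osmolality share before the policy change
share_low_after = 0.30   # restricted to patients considered at high risk

switched = procedures_per_year * (share_low_before - share_low_after)
savings = switched * (cost_low - cost_high)
print(f"projected savings: ~${savings:,.0f} per year")  # ~$452,205
```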
While hospital officials provided no conclusive evidence linking the unnecessary costs to defensive medicine, they stated that the physicians' desire to avoid adverse effects had prompted their use of the low osmolality contrast agent. Neither our 1986 report nor the OTA study estimated the cost of defensive medicine. We reported that the cost of defensive medicine is impossible to quantify with any degree of confidence because of the difficulty in isolating defensive practices from medical care provided for clinical reasons. The OTA study, like our study, cited the difficulty in measuring the cost of defensive medicine and did not provide an estimate. The CBO study concluded that defensive medicine is probably not a major factor in the cost of medical care and did not provide an estimate. In a separate study, OTA reported that it found evidence that defensive medicine exists, estimating that as much as 8 percent of diagnostic procedures result primarily from physicians' conscious concern about professional liability. The strongest evidence found by OTA was produced in a study of caesarean deliveries in New York State. That study reported that obstetricians who practice in hospitals with high malpractice claim frequency and premiums perform more caesarean deliveries than obstetricians practicing in areas with low malpractice claim frequency and premiums. However, OTA also reported that it does not know whether the report's findings for obstetricians and caesarean deliveries can be generalized to other states, specialties, clinical situations, or procedures. OTA concluded that it is virtually impossible to accurately measure the overall level and national cost of defensive medicine because of the methodological problems associated with isolating defensive medical practices. Through our research, we identified two studies that attempted to quantify the total cost of defensive medicine. An American Medical Association study estimated that in 1984, the cost of care provided primarily for defensive purposes was between $9 billion and $10.6 billion. The $10.6 billion estimate is based on the results of a physician survey, which may not accurately reflect the cost of defensive medicine. The $9 billion estimate assumes a statistical correlation between an increase in physician fees and higher malpractice costs. This method might overstate the costs of defensive medicine because increases in fees might result from many factors besides physicians' defensive medical practices. A second study, prepared by Lewin-VHI, Inc., estimated hospital and physician defensive medicine costs at between $4.2 billion and $12.7 billion in 1991. This estimate is based primarily on the earlier AMA estimates and is subject to the same methodological limitations. The third category of medical liability costs we identified includes certain risk management activities, time and travel associated with litigation, and the creation and maintenance of records subject to discovery or required for defense. Our study and the CBO and OTA studies did not attempt to measure liability-related administrative costs. Nor did we identify, during the course of our research and discussions, other studies that estimated hospital and physician liability-related administrative costs.
Hospital risk management activities are designed to (1) reduce the hospital's and its physicians' risk of malpractice suits by maintaining or improving the quality of care, (2) reduce the probability of a claim being filed by negotiating compensation with an injured patient prior to the patient filing a claim, and (3) preserve the hospital's assets once a claim has been filed. Risk management was first applied to health care facilities during the 1970s, when jury awards and settlements increased sharply. During this period, many insurance companies either substantially increased hospitals' premiums or stopped writing malpractice insurance for them. Many hospitals intensified their risk management activities in the 1980s, when an increasing number became at risk for malpractice losses as they began to self-insure for smaller damage awards and settlements. While hospitals perform some risk management activities specifically to reduce liability-related costs, they do not segregate the costs of these activities from the cost of practices designed to promote quality assurance or to satisfy accreditation standards. For example, occurrence screening systems—which are designed to identify deviations from normal procedures or expected treatment outcomes—involve costs associated with both promoting quality and reducing liability risk. By contrast, claims management is an example of a purely liability-related risk management cost. Claims management activities include claims investigation, claims filing, damage evaluation and reserve determination, planning remedial medical care, settlement strategy formulation, settlement structuring, and negotiating and "posturing" for defense or settlement. Hospital officials and physicians also identified time spent at trials and other litigation-related events as liability-related administrative activities. As with liability-related risk management activities, hospitals and physicians do not routinely account for these activities separately. Examples of these activities include time and travel expenses associated with answering interrogatories and depositions. For instance, if a nurse is a defendant, the hospital will pay the nurse's expenses and salary while he or she prepares for and attends trial. The hospital would also incur additional costs contracting with a temporary nurse agency or using its supplemental nurse pool to perform the duties of the defendant nurse. Similarly, a defendant physician would have to contract with another physician to care for patients during litigation. Hospital officials also reported incurring additional liability-related administrative expenses associated with creating and maintaining records that may be required for defense. Such records would include detailed staffing schedules and precisely worded training, policy, and procedures manuals. Hospitals archive these records for decades since they may be needed for litigation long after an alleged negligent act. In some cases, hospitals spend considerable time locating physicians and other staff when malpractice actions involve events that occurred in the distant past, such as a lawsuit filed years after the birth of a child. Hospitals and physicians incur the following types of medical device and pharmaceutical liability costs in the prices that they pay for their products: manufacturers' liability insurance and costs associated with product design and marketing that would not be incurred in the absence of the threat of suit.
Neither our study nor the CBO or OTA studies estimated the manufacturers' medical device and pharmaceutical liability costs included in the purchase prices hospitals and physicians pay for these products. During our research and discussions with industry officials, we did not identify other studies that estimated the liability costs passed on to hospitals and physicians in the prices of medical devices and pharmaceuticals. Medical device and pharmaceutical industry officials and others we spoke with expressed concern about liability costs associated with medical products. They believe that litigation involving medical products is extensive and increasing. Because state product liability laws differ and most manufacturers sell products in many states, manufacturers are at risk of simultaneous suits in numerous jurisdictions with different legal standards. They also stated that drugs intended for chronic conditions or devices remaining in the body indefinitely may be used by patients for periods longer than the products were tested in clinical trials. As a result, problems may not be discovered until decades after use, when many patients may be using the product. Because only claims-made insurance is generally available for medical products, manufacturers with such coverage are not insured for suits filed in future years. When suits appear, the insurer can refuse to renew the policy, leaving the manufacturer without insurance. Medical device and pharmaceutical industry officials told us that this legal environment drives up the cost of medical products. Manufacturers pass on their liability costs to hospitals and physicians in their products' prices. These liability costs include insurance as well as liability-related production and marketing costs. Manufacturer insurance costs, like those of hospitals, can include periodic self-insurance payments, payments made for purchased insurance, and payments made from general revenues to cover uninsured losses. Liability-related production and marketing costs include expenses associated with actions taken primarily to protect the manufacturer from liability, such as multiple layers of packaging and repeated safety warnings. Certain medical devices and pharmaceuticals involve a greater degree of liability risk than others. For example, stethoscopes pose little threat of liability risk. However, implanted devices such as heart valves, intrauterine devices, and breast implants have been involved in the most prominent medical device suits. Likewise, some pharmaceuticals, such as generic drugs and nonprescription drugs, generally involve little risk of liability action. Most pharmaceutical litigation has involved brand-name prescription drugs, such as Bendectin. While some medical device and pharmaceutical cases and settlements have been widely publicized, such as those involving silicone breast implants and the Dalkon Shield, little information is now available on the prevalence of litigation throughout the industry or the magnitude of the costs passed on to hospitals and physicians. Industry and insurance company officials stated that out-of-court settlements are common, and manufacturers are reluctant to disclose settlement terms for fear of encouraging new suits or inflating future claims. Manufacturers are also reluctant to disclose their pricing strategies because of competition. Hospitals and physicians incur a variety of medical liability costs.
Studies attempting to measure such costs have focused on the cost of purchased malpractice insurance, which is readily quantifiable due to state reporting requirements. Other hospital and physician liability costs, however, are difficult, if not impossible, to measure with any precision. Such costs include defensive medicine, liability-related administrative expenses, and the liability expenses that medical device and pharmaceutical manufacturers pass on to hospitals and physicians in the prices of their products. However, a broader understanding of such costs and their implications is useful to the ongoing medical liability reform debate. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies to the Ranking Minority Member of the House Committee on Ways and Means and to other interested Members of the Congress. Copies of this report will also be available to interested parties upon request. Please contact me at (202) 512-9542 if you or your staff have any questions concerning this report. Major contributors are listed in appendix III. Generally, medical malpractice suits are based on tort law. Plaintiffs select tort theory instead of alternatives, such as breach of contract, because they may recover larger damages and because the statute of limitations generally runs from the date the harm was discovered rather than the date the alleged malpractice occurred. When a third party such as a surviving spouse or parent brings suit, it generally must select tort theory because the plaintiff is neither a party to the original contract nor a third-party beneficiary. Figure I.1 summarizes the types of malpractice actions filed against physicians insured by The St. Paul Fire and Marine Insurance Company during the 5-year period from 1989 through 1993. According to The St. Paul Fire and Marine Insurance Company, failure to diagnose was the most common malpractice claim—28 percent of all claims—filed against physicians it insured during the 5-year period spanning 1989 through 1993. Failure to diagnose cancer was the most common claim in this category. Other frequent failure to diagnose claims involved fractures and dislocations, infections, myocardial infarctions, and pregnancy problems. Claims stemming from surgical procedures constituted the next largest category, 27 percent of all claims. The most frequent malpractice claim related to surgery was "postoperative complication." Inadvertent surgical acts and inappropriate or unnecessary surgeries also were frequent allegations in this category. Claims alleging improper treatment represented the third largest category, making up 26 percent of all claims during the period. Most of these claims were birth-related. Other claims made up the final category (the remaining 19 percent or so), including adverse reaction to anesthesia, injection site injuries, and lack of informed consent. In addition to asserting physician negligence, plaintiffs may file malpractice claims against hospitals where treatment was provided through the vicarious liability doctrine or by establishing hospital corporate negligence in areas such as the selection and review of medical staff. In some jurisdictions, hospitals can be jointly and severally liable, which enables plaintiffs to recover most or all damages from a hospital even when the hospital was only partially responsible for the negligent act.
Plaintiffs can also file claims against medical device and pharmaceutical manufacturers under various legal theories, such as negligence, strict liability, and breach of warranty. Manufacturers are liable for negligence if they did not exercise due care and this lack of care caused injury. Manufacturers are liable under strict liability if their products are defective, making the products unreasonably dangerous and causing the injury. The three types of defects for which manufacturers can be found strictly liable are (1) a flaw in the product introduced in the manufacturing process (manufacturing defect), (2) a defect in the design of the product (design defect), and (3) a failure to adequately warn consumers of risks or give instructions regarding product use (warning defect). Under breach of warranty, manufacturers are liable if the product fails to work as expressly or implicitly warranted or promised. Hospital and physician insurance coverage and costs can vary greatly. This appendix briefly discusses types of insurance and factors that can affect their costs. Several factors influence the cost of purchased malpractice insurance. The number of claims and the average cost per claim are the primary factors. However, within the prevailing legal environment, hospitals and physicians can reduce the cost of their premiums by purchasing insurance policies with characteristics that allow them to retain risk or to defer costs to future years. One malpractice policy characteristic that influences the cost of insurance is the amount of coverage provided. Typically, medical malpractice insurance policies have a dollar limit on the amount that the insurance company will pay on each claim against the hospital or physician (per occurrence limit) and a dollar limit for all claims against the insured (aggregate limit) for the policy period. For example, coverage limits of $1 million/$3 million mean that the insurer will pay up to $1 million on a single claim and up to $3 million for all claims during the policy period. The higher the limits, the more costly the policy. However, since small claims occur more frequently than large ones, the cost per dollar of coverage decreases as the coverage limits increase. A deductible provision can also influence the cost of purchased insurance. Under a policy with a deductible provision, an insurer is liable only for losses in excess of a stated amount up to the policy limits. For example, if a hospital incurred a $300,000 malpractice loss while insured under a $1 million per occurrence policy with a $100,000 deductible, the hospital would pay $100,000 of the loss and the insurer would pay $200,000; the sketch following this discussion works through this arithmetic. Generally, the higher the deductible, the lower the premium. The type of policy purchased can also influence the cost of medical malpractice insurance. Generally, malpractice insurance is written on either an occurrence or a claims-made basis. An occurrence policy covers malpractice events that occurred during the policy period, regardless of the date of discovery or when the claim may be filed. A claims-made policy covers malpractice events that occurred after the effective date of the coverage and for which claims are made during the policy period. Because the risk exposure to the insurer is lower, premiums for claims-made policies are generally lower during the first year (approximately 25 percent of the cost of a comparable occurrence policy) but increase to approximate the occurrence basis after about 5 years, when the policies mature.
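The deductible and per-occurrence limit arithmetic above lends itself to a short worked example. The sketch below is illustrative only: it is not an insurer's actual rating or claims method, it ignores aggregate limits and defense costs, and the function names and premium figures are invented for this example.

```python
def split_loss(loss, deductible, per_occurrence_limit):
    """Split one malpractice loss between the insured and the insurer:
    the insured pays up to the deductible; the insurer pays the rest,
    capped at the per-occurrence limit."""
    insured_share = min(loss, deductible)
    insurer_share = min(loss - insured_share, per_occurrence_limit)
    return insured_share, insurer_share

# The example from the text: a $300,000 loss under a $1 million
# per-occurrence policy with a $100,000 deductible.
hospital_pays, insurer_pays = split_loss(300_000, 100_000, 1_000_000)
print(hospital_pays, insurer_pays)  # 100000 200000

# The text's first-year claims-made rule of thumb: roughly 25 percent
# of a comparable occurrence premium (premium figure is hypothetical).
occurrence_premium = 80_000
first_year_claims_made = 0.25 * occurrence_premium  # 20000.0
```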
To cover claims filed after a claims-made policy has expired—when, for example, a hospital changes insurers or a physician retires—the hospital or physician must purchase insurance known as “tail coverage,” which insurance company officials stated can cost between 100 percent and 200 percent of the last claims-made policy cost. To minimize the cost of purchased malpractice insurance, most medium-size and large hospitals self-insure for smaller settlements and damage awards. In many cases, these hospitals establish self-insurance trusts that they administer themselves or contract with third parties to administer. Self-insuring hospitals make periodic contributions to these trusts to pay for losses as defined under formal trust agreements. Generally, the contribution amounts are actuarially determined based upon the estimated present value of future indemnity payments and expenses. Indemnity payments include amounts that the trusts will pay claimants as a result of settlements and damage awards. Expenses include the costs of defense attorneys, medical experts, private investigators, court reporters for depositions, and court costs. Most self-insuring hospitals purchase “excess” insurance to cover that portion of large losses that exceeds their self-insurance limits. Whereas self-insurance coverage typically pays settlements or damage awards up to a few million dollars, excess coverage pays up to tens of millions of dollars above the self-insurance coverage limits. Some hospitals obtain an additional layer of coverage above their excess layer, often referred to as “blue sky” coverage, which pays that portion of settlements or damage awards exceeding the excess coverage limit, up to $100 million. Generally, the higher the limits, the more costly the insurance. However, the cost per dollar of coverage decreases as the limits increase. Like purchased insurance, hospital self-insurance costs are determined by the expected number and severity of claims. However, other factors can influence self-insurance costs. Costs can vary over time because estimated future losses may differ from actual losses. If the hospital incurs fewer losses than expected, the resulting surplus will enable the hospital to reduce trust contributions. If the hospital incurs more losses than expected, the resulting deficit will force the hospital to increase trust contributions. Costs can also vary over time if estimated trust investment income differs from actual investment income. If trust investments return a higher or lower yield than expected, hospitals may be able to lower, or may be required to raise, trust contributions accordingly. In addition to self-insurance and purchased insurance, hospitals and physicians can also incur malpractice liability costs associated with uninsured losses. The most common uninsured loss involves deductibles paid by hospitals and physicians that have purchased primary coverage. Hospitals and physicians are also at risk for losses that exceed the limits of coverage. Hospitals and physicians can also incur losses associated with causes of action not covered by policies. Major contributors to this report were Russell E. Hand, Auditor-in-Charge; Elaine Coleman, Evaluator; and Claudine Makofsky, Evaluator.
Pursuant to a congressional request, GAO reviewed the types of medical liability costs that affect hospitals and physicians, and whether existing studies include these costs in their estimates of hospital and physician liability expenses. GAO found that: (1) in general, hospitals' and physicians' medical liability costs account for about 1 percent of national health care expenditures; (2) estimates of malpractice premiums do not take into account direct and indirect liability costs other than some self-insurance costs; (3) these nonpremium liability costs include self-insurance costs and uninsured losses, defensive medical costs, liability-related administrative costs, and medical device and pharmaceutical liability costs; (4) it is difficult to quantify hospitals' and physicians' nonpremium liability costs because data on these costs are not usually collected, defensive medical practices are not clearly defined and are hard to distinguish from reasonable care, and administrative costs intended to minimize medical liability are included in efforts to improve service or adhere to accreditation standards; (5) medical device and pharmaceutical manufacturers include in the cost of their products liability insurance and litigation costs and product design and marketing costs incurred to reduce the threat of lawsuits; and (6) it is difficult to obtain information on manufacturers' liability costs because of sealed court records and proprietary and competitive concerns.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies, where maintaining the public’s trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have revolutionized the way our government, our nation, and much of the world communicates and conducts business. Although this expansion has created many benefits for agencies such as IRS in achieving their missions and providing information to the public, it also exposes federal networks and systems to various threats. Without proper safeguards, computer systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Concerns about the risks to these systems are well-founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. For example, the Office of Management and Budget cited a total of 12,198 incidents reported to the U.S. Computer Emergency Readiness Team (US-CERT) by federal agencies during fiscal year 2007, more than twice the number of incidents reported the prior year. The Federal Bureau of Investigation has identified multiple sources of threats, including foreign nation states engaged in intelligence gathering and information warfare, domestic criminals, hackers, virus writers, and disgruntled employees or contractors working within an organization. In addition, the U.S. Secret Service and the CERT Coordination Center studied insider threats and stated in a May 2005 report that “insiders pose a substantial threat by virtue of their knowledge of, and access to, employer systems and/or databases.” Our previous reports, and those by federal inspectors general, describe persistent information security weaknesses that place federal agencies, including IRS, at risk of disruption, fraud, or inappropriate disclosure of sensitive information. Accordingly, we have designated information security as a governmentwide high-risk area since 1997, a designation that remains in force today. Recognizing the importance of securing federal agencies’ information systems, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program for the information and systems that support the operations and assets of the agency, using a risk-based approach to information security management. Such a program includes assessing risk; developing and implementing cost-effective security plans, policies, and procedures; providing specialized training; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; and ensuring continuity of operations. IRS has demanding responsibilities in collecting taxes, processing tax returns, and enforcing the nation’s tax laws, and relies extensively on computerized systems to support its financial and mission-related operations.
IRS collected about $2.7 trillion in tax payments in fiscal years 2008 and 2007; processed hundreds of millions of tax and information returns; and paid about $426 billion and $292 billion, respectively, in refunds to taxpayers. Further, the size and complexity of IRS add unique operational challenges. The agency employs tens of thousands of people in its Washington, D.C., headquarters, 10 service center campuses, 3 computing centers, and numerous other field offices throughout the United States. IRS also collects and maintains a significant amount of personal and financial information on each American taxpayer. The confidentiality of this sensitive information must be protected; otherwise, taxpayers could be exposed to loss of privacy and to financial loss and damages resulting from identity theft or other financial crimes. The Commissioner of Internal Revenue has overall responsibility for ensuring the confidentiality, integrity, and availability of the information and information systems that support the agency and its operations. Under FISMA, the Chief Information Officer (CIO) at each federal agency is responsible for developing and maintaining an information security program. Within IRS, this responsibility is delegated to the Associate CIO for Cybersecurity. The Office of Cybersecurity is within the CIO’s Modernization and Information Technology Services (MITS) organization. The mission of MITS is to deliver information technology services and solutions that drive effective tax administration to ensure public confidence. MITS’s goals are to improve service, deliver modernization, increase value, and assure the security and resilience of IRS information systems and data. The Office of Cybersecurity is responsible for ensuring IRS’s compliance with federal laws, policies, and guidelines governing measures to assure the confidentiality, integrity, and availability of IRS electronic systems, services, and data. The Office of Cybersecurity is to manage IRS’s information security program in accordance with FISMA, including performing risk assessments; tracking compliance; identifying, mitigating, and monitoring cybersecurity threats; determining strategy and priorities; and monitoring security program implementation. So that IRS organizations can carry out their respective information security responsibilities, information security policies, guidelines, standards, and procedures have been developed and published in the Internal Revenue Manual.
For example, it has implemented controls for unauthenticated network access and user IDs on the mainframe; further limited access to its mainframe environment by limiting access to system management utility functions and mainframe console commands; taken several measures to protect information traversing its network, such as installing a secure communication service for encryption; taken steps to improve its auditing and monitoring capability by retaining audit logs of security-relevant events for its administrative accounting system and ensuring that audit logs were being created for such events on its procurement system; removed authority for unrestricted physical access to the computer room and tape library from individuals who did not need it to perform their job; improved controls over physical access proximity cards; enhanced periodic reviews of mainframe configurations; improved the disposal of removable media; improved patching of critical vulnerabilities, as well as the timeliness of applying patches at certain facilities; and updated contingency plans to document critical business processes. In addition, IRS has made progress in improving its information security program. For example, the agency completed an organizational realignment, including creation of the Associate CIO for Cybersecurity position, and has several initiatives under way that are designed to improve information security. IRS has developed and documented a detailed road map to guide its efforts in targeting critical weaknesses. Additionally, it is in the process of implementing a comprehensive plan to address numerous information security weaknesses, such as those associated with network and system access, audit trails, system software configuration, security roles and responsibilities, and contingency planning. These efforts are a positive step toward improving the agency’s overall information security posture. Although IRS has moved to correct previously identified security weaknesses, 66 out of 115 weaknesses—or about 57 percent—remained open or unmitigated at the time of our site visits (see fig. 1). Unmitigated deficiencies include those related to access controls, as well as other controls such as configuration management and personnel security. For example, IRS continues to, among other things, allow sensitive information, including user IDs and passwords for mission-critical applications, to be readily available to any user on IRS’s internal network; use passwords that are not complex enough to avoid being guessed; grant excessive electronic access to individuals; inconsistently apply patches; and fail to remove separated employees’ access in a timely manner for one of its systems. Such weaknesses increase the risk of compromise of critical IRS systems and information. According to IRS officials, they are continuing to address the uncorrected weaknesses, and subsequent to our site visits, they had completed corrective actions for some of the weaknesses. Although IRS has continued to make progress toward correcting previously reported information security weaknesses at its three data centers, as well as an additional facility, many deficiencies remain. These deficiencies include those related to access controls, as well as other controls such as configuration management and personnel security. A key reason for these weaknesses is that IRS has not yet fully implemented its agencywide information security program to ensure that controls are appropriately designed and operating effectively.
Furthermore, these weaknesses continue to jeopardize the confidentiality, integrity, and availability of IRS’s systems and contributed to IRS’s material weakness in information security during the fiscal year 2008 financial statement audit. A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. Organizations accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Inadequate access controls potentially diminish the reliability of computerized information and increase the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service. Access controls include those related to user identification and authentication, authorization, cryptography, audit and monitoring, and physical security. IRS did not fully implement controls in the areas listed above, as the following sections in this report demonstrate. A computer system must be able to identify and authenticate different users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system also must establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. The combination of identification and authentication— such as user account/password combinations—provides the basis for establishing individual accountability and for controlling access to the system. According to the Internal Revenue Manual, passwords should be protected from unauthorized disclosure and modification when stored and transmitted. The Internal Revenue Manual also requires IRS to enforce strong passwords for authentication (defined as a minimum of eight characters, containing at least one numeric or special character, and a mixture of at least one uppercase and one lowercase letter). Although IRS had implemented controls for identification and authentication, weaknesses continued to exist at two of the sites we visited. Specifically, usernames and passwords were still viewable on an IRS contractor-maintained Web site at one of its data centers. In addition, the agency continued to store passwords in scripts and did not enforce the use of strong passwords for systems at another data center. As a result, increased risk exists that an individual could view or guess these passwords and use them to gain unauthorized access to IRS systems. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. A key component of granting or denying access rights is the concept of “least privilege.” Least privilege is a basic principle for securing computer resources and information. This principle means that users are granted only those access rights and permissions that they need to perform their official duties. To restrict legitimate users’ access to only those protected resources that they need to do their work, organizations establish access rights and permissions. “User rights” are allowable actions that can be assigned to individual users or groups of users. 
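Returning to the identification and authentication findings above, the Internal Revenue Manual's strong-password rule (a minimum of eight characters, at least one numeric or special character, and a mixture of uppercase and lowercase letters) reduces to a short check. The sketch below is one plausible reading of that rule; in particular, the set of characters treated as "special" is an assumption, since the Manual's own definition is not reproduced in this report.

```python
import string

SPECIAL = set(string.punctuation)  # assumed definition of "special character"

def meets_irm_password_rule(password: str) -> bool:
    """Check the rule described in the text: at least eight characters,
    at least one numeric or special character, and a mixture of
    uppercase and lowercase letters."""
    return (
        len(password) >= 8
        and any(c.isdigit() or c in SPECIAL for c in password)
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
    )

print(meets_irm_password_rule("Refund2008!"))  # True
print(meets_irm_password_rule("password"))     # False: no digit/special, no uppercase
```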
File and directory permissions are rules that regulate which users can access a particular file or directory and the extent of that access. To avoid unintentionally authorizing users’ access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. The Internal Revenue Manual requires that system access be assigned based on least privilege—allowing access at the minimum level necessary to support the user’s job duties. The Internal Revenue Manual also specifies that only individuals having a “need to know” in the performance of their duties should have access to sensitive information including that deemed as personally identifiable information. IRS permitted users more privileges on its systems than needed to perform their official duties. For example, IRS integrated network device controls with its Windows management controls that could provide users with excessive access to its network infrastructure. According to IRS officials, the agency made a cost-based decision to implement this configuration. In addition, IRS did not restrict access to sensitive personally identifiable information. To illustrate, the agency allowed authenticated users on its network access to shared drives containing taxpayer information, as well as performance appraisal information for IRS employees including their social security numbers. This information could allow someone to commit fraud or identity theft. In another example, the agency did not restrict access to tax data for a major corporation and allowed all employees with network access the potential to view this information. These excessive privileges could allow users unwarranted access to IRS’s network or enable them to access information not needed for their jobs and could place IRS systems or information at risk. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity by transforming plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. IRS policy requires the use of encryption for transferring sensitive but unclassified information between IRS facilities. The National Security Agency also recommends disabling protocols that do not encrypt information transmitted across the network, such as user ID and password combinations. Although IRS had implemented controls to encrypt information traversing its network, it did not always ensure certain sensitive data was encrypted. For example, one data center has not yet disabled unencrypted protocol services for all its UNIX servers. Similarly, at another center, users’ login information is still being sent across the IRS internal network in clear text, potentially exposing account usernames and passwords. More importantly, IRS continues to transmit data, such as account and financial information, from its financial accounting system using an unencrypted protocol. By transmitting data unencrypted, IRS is at increased risk that an unauthorized individual could view sensitive information. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to know what, when, and by whom specific actions have been taken on a system. 
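A minimal sketch of that idea follows: an audit trail is, at bottom, an append-only record of who did what, and when. The example uses Python's standard logging module purely for illustration; it does not describe the security software IRS runs on its mainframes or servers, and the log file name and event fields are invented.

```python
import logging

# Illustrative audit-trail setup: every record carries a timestamp (when),
# the acting user (who), and the event description (what).
audit = logging.getLogger("audit")
handler = logging.FileHandler("audit.log")  # hypothetical log destination
handler.setFormatter(logging.Formatter("%(asctime)s user=%(user)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def record_event(user: str, action: str, target: str) -> None:
    """Write one security-relevant event to the audit trail."""
    audit.info("action=%s target=%s", action, target, extra={"user": user})

record_event("jdoe", "MODIFY", "payroll.dataset")  # e.g., a dataset change
```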
Organizations accomplish this by implementing system or security software that provides an audit trail, or logs of system activity, that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which organizations configure system or security software determines the nature and extent of information that can be provided by the audit trail. To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events. IRS did not always effectively monitor its systems. For example, IRS had not configured security software controls to log changes to datasets that would support effective monitoring of the mainframe at one of its data centers. In addition, other weaknesses include inadequate logging of security-relevant events for UNIX and Windows servers at one data center and for UNIX servers at another. By not effectively logging changes to its systems, IRS will not have assurance that it will be able to detect unauthorized system changes that could adversely affect operations, or appropriately detect security-relevant events. Physical access controls are used to mitigate the risks to systems, buildings, and supporting infrastructure related to their physical environment and to control the entry and exit of personnel in buildings, as well as data centers containing agency resources. Examples of physical security controls include perimeter fencing, surveillance cameras, security guards, and locks. Without these protections, IRS computing facilities and resources could be exposed to espionage, sabotage, damage, and theft. The Internal Revenue Manual requires that all authorized visitors and their packages and briefcases be examined when entering an IRS facility. In addition, data center security checkpoint procedures require that officers specifically screen for cameras and other items that are prohibited from IRS facilities. The Internal Revenue Manual also states that the authorized access list into restricted areas will be prepared monthly and dated and signed by the branch chief, but not before the branch chief validates the need of individuals to access the restricted area. Although IRS had implemented numerous physical security controls, certain controls were not working as intended, and the agency had not fully implemented others. For example, security guards at one data center did not ensure that visitors and their possessions were properly screened when entering the facility. Our staff inadvertently included digital cameras in packed luggage; although the guards screened the luggage with a magnetometer, they did not confront our staff about the prohibited items. In another example, IRS prepared access lists identifying personnel authorized to enter sensitive areas at two centers and at an additional facility; however, the branch chiefs at the three sites had not signed or dated the lists as required. This step is essential in verifying that employees continue to warrant access into restricted areas. As a result, increased risk exists that prohibited items and individuals may inappropriately be permitted access to IRS facilities and restricted areas. In addition to access controls, other important controls should be in place to ensure the confidentiality, integrity, and availability of an organization’s information.
These controls include policies, procedures, and techniques for securely configuring information systems and implementing personnel security. Weaknesses in these areas increase the risk of unauthorized use, disclosure, modification, or loss of IRS’s information and information systems. The purpose of configuration management is to establish and maintain the integrity of an organization’s work products. The Internal Revenue Manual states that IRS shall establish and maintain baseline configurations and inventories of organizational information systems and monitor and control any changes to the baseline configurations. Proactively managing vulnerabilities of systems will reduce or eliminate the potential for exploitation and involves considerably less time and effort than responding after an exploit has occurred. Patch management, a component of configuration management, is an important factor in mitigating software vulnerability risks. Patch installation can help diminish vulnerabilities associated with flaws in software code. Attackers often exploit these flaws to read, modify, or delete sensitive information; disrupt operations; or launch attacks against other organizations’ systems. The Internal Revenue Manual requires that all vendor-supplied security patches be installed on all IRS systems. IRS did not fully implement its policies for managing changes to its systems. Specifically, IRS did not maintain or enforce a baseline configuration for one data center’s mainframe system, which supports the revenue accounting system of record and other applications. In addition, IRS used an unsupported software package that was not current and thus vulnerable to attack. Specifically, certain IRS servers were running an outdated version of software that was no longer supported by the vendor and, therefore, could not be patched against a known vulnerability. As a result, IRS has limited assurance that system changes are being properly monitored and that its systems are protected against new vulnerabilities. The greatest harm or disruption to a system comes from the actions, both intentional and unintentional, of individuals. These intentional and unintentional actions can be reduced through the implementation of personnel security controls. According to the National Institute of Standards and Technology (NIST), personnel security controls help organizations ensure that individuals occupying positions of responsibility (including third-party service providers) are trustworthy and meet established security criteria for those positions. Organizations should also ensure that information and information systems are protected during and after personnel actions, such as terminations and transfers. More specifically, the Internal Revenue Manual requires that all accounts be deactivated within 1 week of an individual’s departure on friendly terms and immediately upon an individual’s departure on unfriendly terms. IRS did not always ensure that personnel security controls were fully implemented. For example, at three locations, IRS did not remove application access within 1 week of separation for 6 of 17 (35 percent) separated employees we reviewed. IRS also did not deactivate proximity cards immediately upon employee separation at one of its facilities. As a result, IRS is at an increased risk that individuals could gain unauthorized access to its resources. 
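The deactivation rule just described (within 1 week of a friendly departure, immediately upon an unfriendly one) is at bottom a date comparison, and a compliance check can be sketched in a few lines. The record layout, names, and dates below are invented for illustration and do not reflect IRS's actual personnel or account management systems.

```python
from datetime import date, timedelta

def deactivation_deadline(separated: date, friendly: bool) -> date:
    """Deadline per the rule described in the text: one week for friendly
    departures, same day for unfriendly ones."""
    return separated + timedelta(weeks=1) if friendly else separated

# Hypothetical review records: (employee, separation date, friendly?, deactivated on)
records = [
    ("A", date(2008, 3, 3), True,  date(2008, 3, 7)),   # within one week
    ("B", date(2008, 3, 3), True,  date(2008, 4, 1)),   # late
    ("C", date(2008, 3, 3), False, date(2008, 3, 3)),   # immediate, as required
]

for emp, separated, friendly, deactivated in records:
    if deactivated > deactivation_deadline(separated, friendly):
        print(f"Employee {emp}: access removed late ({deactivated})")
```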
A key reason for the information security weaknesses in IRS’s financial and tax processing systems is that it has not yet fully implemented its agencywide information security program to ensure that controls are effectively established and maintained. FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; plans for providing adequate information security for networks, facilities, and systems; security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in its information security policies, procedures, or practices; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. IRS has made important progress in developing and documenting elements of its information security program. However, not all components of its program have been fully implemented. According to NIST, risk is determined by identifying potential threats to the organization and vulnerabilities in its systems, determining the likelihood that a particular threat may exploit vulnerabilities, and assessing the resulting impact on the organization’s mission, including the effect on sensitive and critical systems and data. Identifying and assessing information security risks are essential to determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted in order to help ensure that these policies and controls operate as intended. Consistent with NIST guidance, IRS requires its risk assessment process to detail the residual risk assessed, as well as potential threats, and to recommend corrective actions for reducing or eliminating the vulnerabilities identified. IRS also requires that system risk assessments be reviewed annually. Although IRS had implemented a risk assessment process, it did not always annually review its risk assessments. The risk assessments that we reviewed generally were current; they documented the residual risks assessed, as well as potential threats, and recommended corrective actions for mitigating or eliminating the vulnerabilities identified. However, two risk assessments for systems supporting tax processing and inventory control had not been reviewed annually, per IRS policy.
As a result, potential risks to these systems and the adequacy of their management, operational, and technical controls to reduce risks may be unknown. Another key element of an effective information security program is to develop, document, and implement risk-based policies, procedures, and technical standards that govern security over an agency’s computing environment. If properly implemented, policies and procedures should help reduce the risk associated with unauthorized access or disruption of services. Technical security standards can provide consistent implementation guidance for each computing environment. Developing, documenting, and implementing security policies are the primary mechanisms by which management communicates its views and requirements; these policies also serve as the basis for adopting specific procedures and technical controls. In addition, agencies need to take the actions necessary to effectively implement or execute these procedures and controls. Otherwise, agency systems and information will not receive the protection that the security policies and controls should provide. IRS has developed and documented information security policies, standards, and guidelines that generally provide appropriate guidance to personnel responsible for securing information and information systems. This has included guidance for assessing risk, security planning, security training, testing and evaluating security controls, contingency planning, and operating system platforms. However, as illustrated by the weaknesses identified in this report, IRS has not yet fully implemented its policies, standards, and guidelines. An objective of system security planning is to improve the protection of information technology resources. A system security plan provides an overview of the system’s security requirements and describes the controls that are in place or planned to meet those requirements. OMB Circular A-130 requires that agencies develop system security plans for major applications and general support systems, and that these plans address policies and procedures for providing management, operational, and technical controls. Furthermore, IRS policy requires that security plans be developed, documented, implemented, and periodically updated for the controls in place or planned for an information system. IRS had developed, documented, and updated the plans for the eight systems we reviewed. Furthermore, those plans documented the management, operational, and technical controls in place and included the information required by OMB Circular A-130 for applications and general support systems. However, as illustrated by weaknesses identified in this report, IRS had not yet fully implemented all the controls documented in its security plans. People are one of the weakest links in attempts to secure systems and networks. Therefore, an important component of an information security program is providing sufficient training so that users understand system security risks and their own role in implementing related policies and controls to mitigate those risks. IRS policy requires that personnel performing information technology security duties meet minimum continuing professional education hours in accordance with their roles. Personnel performing security roles are required by IRS to have 12, 8, or 4 hours of specialized training per year, depending on their specific role.
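The tiered training requirement just described (12, 8, or 4 hours of specialized training per year, depending on role) reduces to a lookup and a comparison. In the sketch below, the mapping of role names to hour tiers is an assumption; the report states only that the tiers exist, not which roles fall into each.

```python
# Assumed role-to-hours mapping; the report gives only the 12/8/4 tiers,
# not which security roles fall into each tier.
REQUIRED_HOURS = {"system_administrator": 12, "security_analyst": 8, "auditor": 4}

def meets_training_requirement(role: str, hours_completed: float) -> bool:
    """Compare completed specialized-training hours against the tier for a role."""
    return hours_completed >= REQUIRED_HOURS[role]

print(meets_training_requirement("security_analyst", 10))    # True
print(meets_training_requirement("system_administrator", 9)) # False
```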
IRS personnel performing information technology security duties met their minimum continuing professional education requirements. For the employees and contractors with specific security-related roles that we reviewed, 36 employees and contractors at one data center, and 24 employees and contractors at another, met the required minimum security awareness and specialized training hours. Another key element of an information security program is to test and evaluate policies, procedures, and controls to determine whether they are effective and operating as intended. This type of oversight is a fundamental element because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results improve the security program. FISMA requires that the frequency of tests and evaluations be based on risks and occur no less than annually. IRS policy also requires periodic testing and evaluation of the effectiveness of information security policies and procedures. Although IRS had a process in place for testing and evaluating its systems, the process was not comprehensive. IRS had tested and evaluated information security controls for each of the eight systems we reviewed. However, its testing process did not identify certain weaknesses that we identified during our review. For example, IRS was not testing for complex passwords on its UNIX servers at one data center. Additionally, from an enterprisewide perspective, the agency had not identified inappropriate access to numerous shares containing sensitive information. Until IRS improves its testing of controls over its systems, it has reduced assurance that its policies and procedures are being followed and that controls for its systems are being effectively implemented and maintained. A remedial action plan is a key component described in FISMA. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. In its annual FISMA guidance to agencies, OMB requires agency remedial action plans, also known as plans of action and milestones, to include the resources necessary to correct identified weaknesses. According to IRS policy, the agency should document weaknesses found during security assessments, as well as the remedial actions planned, implemented, and evaluated to correct any deficiencies. The policy further requires that IRS track the status of resolution of all weaknesses and verify that each weakness is corrected. Although remedial action plans were in place, corrective actions were not always appropriately validated. IRS has developed and implemented a remedial action process to address deficiencies in its information security policies, procedures, and practices. However, this remedial action process was not working as intended, since the verification process used to determine whether remedial actions were implemented was not always effective. For example, IRS had informed us that it had completed actions to close 65 recommendations related to previously identified weaknesses; however, we determined that 16 of the corrective actions did not mitigate or correct the underlying control deficiencies.
Without a sound remediation process, IRS will not have assurance that it has taken the necessary actions to correct weaknesses in its policies, procedures, and practices. We have previously identified a similar weakness and recommended that IRS implement a revised remedial action verification process that ensures actions are fully implemented, but the condition continued to exist at the time of our review. Continuity of operations planning, which includes contingency planning and disaster recovery planning, is a critical component of information protection. To ensure that mission-critical operations continue, it is necessary to be able to detect, mitigate, and recover from service disruptions while preserving access to vital information. It is important that these plans be clearly documented, communicated to potentially affected staff, and updated to reflect current operations. In addition, testing contingency plans is essential to determine whether the plans will function as intended in an emergency situation. FISMA requires that agencywide information security programs include plans and procedures to ensure continuity of operations. IRS contingency planning policy requires, among other things, that contingency plans be reviewed and tested at least annually. Although contingency plans were in place, IRS recognizes the need for improvements. The agency has completed contingency plans for the eight systems we reviewed. Additionally, it has reviewed, updated, and tested these contingency plans annually. The plans also identified critical business processes, correcting a weakness we reported last year. Although the specific plans we reviewed did not have any shortcomings, IRS’s comprehensive plan for addressing information security weaknesses recognizes the need for further efforts to improve the agency’s contingency planning, through initiatives involving disaster recovery planning, some of which will not be completed until 2011. Until it completes these efforts, IRS is at increased risk of not being able to effectively recover and continue operations when an emergency occurs. IRS has made progress in correcting or mitigating previously reported weaknesses, implementing controls over key financial systems, and developing and documenting a framework for its agencywide information security program. Nevertheless, information security weaknesses—both old and new—continue to impair the agency’s ability to ensure the confidentiality, integrity, and availability of financial and taxpayer information. These deficiencies represent a material weakness in IRS’s internal controls over its financial and tax processing systems. A key reason for these weaknesses is that the agency has not yet fully implemented certain key elements of its agencywide information security program. The financial and taxpayer information on IRS systems will remain particularly vulnerable to insider threats until the agency (1) addresses and corrects prior weaknesses across the service and (2) fully implements a comprehensive agencywide information security program that ensures risk assessments are appropriately reviewed for all systems, tests and evaluations of controls for systems are comprehensive, and the remedial action process effectively validates corrective actions. Until IRS takes these steps, financial and taxpayer information are at increased risk of unauthorized disclosure, modification, or destruction, and the agency’s management decisions may be based on unreliable or inaccurate financial information.
In addition to implementing our previous recommendations, we recommend that you take the following two actions to implement an agencywide information security program: (1) ensure risk assessments for IRS systems are reviewed at least annually, and (2) implement steps to improve the scope of testing and evaluation of controls, such as those for weak passwords. We are also making eight detailed recommendations in a separate report with limited distribution. These recommendations consist of actions to be taken to correct specific information security weaknesses related to authorization, physical security, and configuration management identified during this audit. In providing written comments (reprinted in app. II) on a draft of this report, the Commissioner of Internal Revenue stated that the security and privacy of taxpayer information is of the utmost importance to the agency, and noted that IRS is committed to securing its computer environment as it continually evaluates processes, promotes user awareness, and applies innovative ideas to increase compliance. He also stated that the agency is working to improve its security posture and will develop a detailed corrective action plan addressing each of our recommendations. This report contains recommendations to you. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days from the date of the report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of recommendations, GAO requests that the agency also provide us with a copy of the agency’s statement of action to serve as preliminary information on the status of open recommendations. We are sending copies of this report to interested congressional committees, the Secretary of the Treasury, and the Treasury Inspector General for Tax Administration. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact Nancy Kingsbury at (202) 512-2700 or Gregory C. Wilshusen at (202) 512-6244. We can also be reached by e-mail at [email protected] and [email protected]. Key contributors to this report are listed in appendix III. The objectives of our review were to determine (1) the status of the Internal Revenue Service’s (IRS) actions to correct or mitigate previously reported information security weaknesses and (2) whether controls over key financial and tax processing systems were effective in protecting the confidentiality, integrity, and availability of financial and sensitive taxpayer information. This work is part of our audit of IRS’s financial statements for the purpose of supporting our opinion on internal controls over the preparation of those statements. To determine the status of IRS’s actions to correct or mitigate previously reported information security weaknesses, we reviewed prior GAO reports to identify previously reported weaknesses and examined IRS’s corrective action plans to determine for which weaknesses IRS reported corrective actions as completed.
For those instances where IRS reported it had completed corrective actions, we assessed the effectiveness of those actions by: testing the complexity and expiration of passwords on servers to determine if strong password management was enforced; analyzing users’ system authorizations to determine whether they had more permissions than necessary to perform their assigned functions; observing data transmissions across the network to determine whether sensitive data was being encrypted; observing whether system security software was logging successful and unsuccessful access attempts; testing and observing physical access controls to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft; inspecting key servers and workstations to determine whether critical patches had been installed or were up-to-date; and examining access responsibilities to determine whether incompatible functions were segregated among different individuals. We evaluated IRS’s implementation of these corrective actions for three data centers and an additional facility. To determine whether controls over key financial and tax processing systems were effective, we considered the results of our evaluation of IRS’s actions to mitigate previously reported weaknesses at three data centers and the additional facility. We concentrated our evaluation primarily on threats emanating from sources internal to IRS’s computer networks and focused on three critical applications and their general support systems that directly or indirectly support the processing of material transactions that are reflected in the agency’s financial statements. Our evaluation was based on our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information. Using the requirements identified by the Federal Information Security Management Act, which establishes key elements for an effective agencywide information security program, we evaluated IRS’s implementation of its security program by analyzing IRS’s risk assessment process and risk assessments for eight key IRS financial and tax processing systems to determine whether risks and threats were documented; analyzing IRS’s policies, procedures, practices, and standards to determine whether sufficient guidance was provided to personnel responsible for securing information and information systems; analyzing security plans for eight systems to determine if management, operational, and technical controls were documented and if security plans were updated; examining training records for personnel with significant responsibilities to determine if they received training commensurate with those responsibilities; analyzing test plans and test results for eight IRS systems to determine whether management, operational, and technical controls were tested at least annually and based on risk; observing IRS’s process to correct weaknesses and determining whether remedial action plans were complete; and examining contingency plans for eight IRS systems to determine whether those plans had been tested or updated. We also reviewed or analyzed previous reports from the Treasury Inspector General for Tax Administration and GAO, and discussed with key security representatives and management officials whether information security controls were in place, adequately designed, and operating effectively.
In addition to the individuals named above, David Hayes (Assistant Director), Jeffrey Knott (Assistant Director), Harold Lewis (Assistant Director), Larry Crosland, Mark Canter, Sharhonda Deloach, Neil Doherty, Caryn English, Edward Glagola, Nancy Glover, Rebecca LaPaze, Kevin Metcalfe, Zsaroq Powe, Eugene Stevens, and Christy Tyson made key contributions to this report.

The Internal Revenue Service (IRS) relies extensively on computerized systems to carry out its demanding responsibilities to collect taxes (about $2.7 trillion in fiscal years 2008 and 2007), process tax returns, and enforce the nation's tax laws. Effective information security controls are essential to protect financial and taxpayer information from inadvertent or deliberate misuse, improper disclosure, or destruction. As part of its audits of IRS's fiscal years 2008 and 2007 financial statements, GAO assessed (1) the status of IRS's actions to correct previously reported weaknesses and (2) whether controls were effective in ensuring the confidentiality, integrity, and availability of financial and sensitive taxpayer information. To do this, GAO examined IRS information security policies and procedures and other documents; tested controls over key financial applications; and interviewed key agency officials. IRS has continued to make progress in correcting previously reported information security weaknesses. It has corrected or mitigated 49 of the 115 weaknesses that GAO reported as unresolved during its last audit. For example, the agency (1) implemented controls for unauthenticated network access and user IDs on the mainframe, (2) encrypted sensitive data going across its network, (3) improved the patching of critical vulnerabilities, and (4) updated contingency plans to document critical business processes. However, most of the previously identified weaknesses remain unresolved. For example, IRS continues to, among other things, allow sensitive information, including IDs and passwords for mission-critical applications, to be readily available to any user on its internal network, and grant excessive access to individuals who do not need it. According to IRS officials, they are continuing to address the uncorrected weaknesses and, subsequent to GAO site visits, had completed additional corrective actions. Despite IRS's progress, information security control weaknesses continue to jeopardize the confidentiality, integrity, and availability of financial and sensitive taxpayer information. IRS did not consistently implement controls that were intended to prevent, limit, and detect unauthorized access to its systems and information. For example, IRS did not always (1) enforce strong password management for properly identifying and authenticating users; (2) authorize user access, including access to personally identifiable information, to permit only the access needed to perform job functions; (3) encrypt certain sensitive data; (4) effectively monitor changes on its mainframe; and (5) physically protect its computer resources. A key reason for these weaknesses is that IRS has not yet fully implemented its agencywide information security program to ensure that controls are appropriately designed and operating effectively. Specifically, IRS did not annually review risk assessments for certain systems, comprehensively test for certain controls, or always validate the effectiveness of remedial actions.
Until these weaknesses are corrected, the agency remains particularly vulnerable to insider threats and is at increased risk of unauthorized access to and disclosure, modification, or destruction of financial and taxpayer information, as well as of inadvertent or deliberate disruption of system operations and services. |
The Army and Marine Corps maintain organic depot maintenance capabilities that are designed to retain, at a minimum, a ready, controlled source of technical competence and resources to meet military requirements. In fiscal year 2008, DOD budgeted about $5.6 billion for the five Army and two Marine Corps maintenance depots and maintained a workforce of about 26,000 personnel at these facilities. Depot-level maintenance and repair involves materiel maintenance or repair requiring the overhaul, upgrading, or rebuilding of parts, assemblies, and subassemblies, and the testing and reclamation of equipment as necessary, regardless of the source of funds for the maintenance or repair or the location at which the maintenance or repair is performed. Army and Marine Corps depots work on a wide range of weapon systems and military equipment, such as combat vehicles, aircraft, and communications and electronics equipment. Each of the services' depot-level activities has been designated as a Center for Industrial and Technical Excellence in the recognized core competency of the designee, pursuant to Section 2474 of Title 10, U.S. Code. Table 1 describes the principal work performed at each Army and Marine Corps depot. Between the late 1980s and the late 1990s, Army and Marine Corps maintenance depots—like other DOD depots—were significantly downsized as a result of reductions in the armed forces and DOD's decision to outsource many logistics activities, including depot maintenance, to the private sector. These downsizing efforts contributed to decreased workloads at the depots and diminished their capability, reliability, and cost effectiveness for supporting requirements for legacy systems; the downsizing also reduced their opportunities to acquire work for new and modified weapon systems. It also affected the depots' ability to obtain investments in facilities, equipment, and human capital to support their long-term viability and to ensure that they remained a key resource for repair of new and modified systems. As a result, DOD's depots had become facilities that primarily repaired aging weapon systems and equipment. In 2003, Army and Marine Corps depots experienced an increase in workload, stemming from overseas contingency operations in Iraq and Afghanistan. Contributing to this increase were efforts to reset systems such as the High Mobility Multipurpose Wheeled Vehicle, the M1 Abrams Tank, and the Bradley Fighting Vehicle, as well as work related to armor fabrication and the armoring of systems such as the Medium Tactical Vehicle Replacement. Despite the increase in workload, the Army and Marine Corps lacked direction from DOD on a departmentwide strategic depot plan that clarified the future role of the military depots. We reported in April 2003 that the services and DOD had not implemented comprehensive strategic plans for defense maintenance to revitalize or protect the future viability of their depot facilities, equipment, and workers. In that report, we recommended that the services develop depot strategic plans that are linked to the services' missions and objectives, and that DOD develop a strategic plan that provides guidance and a schedule for identifying long-term capabilities to be provided in government-owned and -operated plants. 
The House Armed Services Committee has previously encouraged DOD to develop a comprehensive strategy to ensure that the depots are viably positioned and that they have the workforce, equipment, and facilities they need to maintain efficient operations to meet the nation's current and future requirements. In March 2007, the Under Secretary of Defense for Acquisition, Technology, and Logistics approved the DOD Depot Maintenance Strategy and Implementation Plans, which articulated OSD's strategy and plans for ensuring that the department's organic depot maintenance infrastructure is postured and resourced to meet the national security and management challenges of the 21st century. The plan also specified that each military service was responsible for conducting strategic planning for depot maintenance that focused on achieving DOD's strategy. OSD required the services to submit the results of their strategic plans no later than 6 months after the publication of DOD's plan. In March 2007, the Deputy Under Secretary of Defense for Logistics and Materiel Readiness modified this requirement: by September 1, 2007, each service was to submit either its published depot maintenance strategic plan or a report describing the process being used to develop its strategic plan and a target date for completing it. The Army and Marine Corps finalized and submitted their strategic plans to OSD in 2008. In addition, the Army developed an implementation plan to accompany its strategic plan. The Marine Corps did not produce an implementation plan. While the depot maintenance strategic plans developed by the Army and the Marine Corps identify key issues affecting the depots, they do not fully address all of the elements required to achieve a results-oriented management framework, and they are not fully responsive to OSD's direction to the services for developing their plans. Furthermore, these plans do not address uncertainties in workload that affect the depots' ability to plan for meeting future maintenance requirements. Finally, they do not show whether and how the depots will have a role in planning for the sustainment of new and modified weapon systems. As a result of these deficiencies in their strategic plans, the Army and Marine Corps may lack assurance that their depots are postured and resourced to meet future maintenance requirements. The Army's and the Marine Corps' depot maintenance strategic plans do not fully address all of the elements that are needed for a comprehensive results-oriented management framework. In addition, the plans are not fully responsive to OSD's direction to the services for developing these plans. Our prior work has shown that organizations need sound strategic management planning in order to identify and achieve long-range goals and objectives. We have identified critical elements that should be incorporated in strategic plans to establish a comprehensive, results-oriented management framework. A results-oriented management framework provides an approach whereby program effectiveness is measured in terms of outcomes or impact, rather than outputs, such as activities and processes. 
The framework includes critical elements such as a comprehensive mission statement; long-term goals and objectives; approaches for accomplishing goals and objectives; stakeholder involvement; external factors that may affect how goals and objectives will be accomplished; performance goals that are objective, quantifiable, and measurable; resources needed to meet performance goals; performance indicators or metrics that measure outcomes and gauge progress; and an evaluation plan that monitors the goals and objectives. OSD also directed the services to include many of these elements in their depot maintenance strategic plans. Specifically, the OSD criteria stated that each military service's plan should include a comprehensive mission statement; general goals and objectives (including outcome-related goals and objectives); a description of how the goals and objectives are to be achieved; metrics that will be applied to gauge progress; key factors external to the respective service and beyond its control that could significantly affect the achievement of the general goals and objectives; and descriptions of the program evaluations used in establishing, monitoring, or revising goals and objectives, with a schedule for future program evaluations. Furthermore, OSD directed the services to address a number of specific issues in their strategic plans, including logistics transformation, core logistics capability assurance, workforce revitalization, and capital investment. OSD wanted the services, at a minimum, to address these four issues because it believed they were critical to ensuring the depots would be postured and resourced to meet future requirements. Based on our evaluation of the Army's and Marine Corps' depot maintenance strategic plans, we found that the plans partially address the elements for a results-oriented management framework. While the services' strategic plans address key issues affecting the depots and contain mission statements, along with long-term goals and objectives, they do not fully address all the elements needed for sound strategic planning. The elements not fully addressed in the strategic plans are approaches for accomplishing goals and objectives; stakeholder involvement in developing the plan; external factors that may affect how goals and objectives will be accomplished; performance goals that are objective, quantifiable, and measurable; resources required to meet performance goals; performance indicators or metrics that measure outcomes and gauge progress toward the goals and objectives; and an evaluation plan that monitors the goals and objectives. Table 2 summarizes, based on our evaluation, the extent to which the Army and Marine Corps depot maintenance strategic plans address the strategic planning elements needed for a comprehensive results-oriented management framework. The Army's and Marine Corps' depot maintenance strategic plans partially address logistics transformation, core logistics capability assurance, workforce revitalization, and capital investment—the four issues that OSD directed each service, at a minimum, to include in their plans. Table 3 summarizes, based on our evaluation, the extent to which the Army and Marine Corps depot maintenance strategic plans discuss these four issues. 
Army and Marine Corps officials involved with the development of the service strategic plans acknowledged that their plans do not fully address the OSD criteria, but they stated that the plans nevertheless address issues they believe are critical to maintaining effective, long-term depot maintenance capabilities. An official in the Office of the Deputy Chief of Staff of the Army, G4, who was involved with the Army's depot maintenance strategic plan acknowledged that the Army's plan does not fully address OSD's criteria. According to this official, the Army's plan focuses on issues of greatest priority to the service's depots. The official added that the OSD criteria lacked clear and specific instructions to the services. According to an official in the Marine Corps' Logistics Plans, Policy, and Strategic Mobility Division who was involved with that service's depot maintenance strategic plan, the Marine Corps' plan was intended to be only an overarching outline and was not intended to provide the detailed “nuts and bolts” that would be needed for implementation. The Army and Marine Corps have not updated their strategic plans since initially submitting them to OSD in 2008, and since that time neither service has received notice from OSD that its plan did not meet OSD's criteria or should be revised and updated. An OSD official in the Office of the Deputy Under Secretary of Defense for Logistics and Materiel Readiness told us that although the services' strategic plans are not completely responsive to OSD's direction, they represent a good first start on developing a strategic plan. Although OSD plans to require the services to update their plans, this official told us that OSD would wait until after completion of the Quadrennial Defense Review. That review is to be completed in early 2010. According to the OSD official, it would be counterproductive to ask the services to update their strategic plans in 2009 and then update them again following the Quadrennial Defense Review. The Army's and Marine Corps' depot maintenance strategic plans do not provide strategies for mitigating and reducing uncertainties in future workloads that affect the depots' ability to plan for meeting future maintenance requirements. These uncertainties stem primarily from a lack of information from the depots' major commands on workload that will replace current work on legacy systems, which is expected to decline, as well as on workload associated with new systems that are in the acquisition pipeline (which is discussed further in the next section of this report). Workload uncertainties hinder effective planning for meeting future depot maintenance requirements because workload is a key driver in planning for necessary capabilities such as workforce skills, equipment, and infrastructure. Depot officials said that these resources require significant lead times to develop and put in place to effectively respond to customers' needs. In the absence of timely and reliable data on future workloads, the depots' efforts to identify and develop needed capabilities and to conduct workforce planning may be adversely affected. The depots' major commands generate workload projections from workload forecasting systems; these projections are based on past history and on discussions with customers about workload planned for the depots. The Army uses the Army Workload and Performance System as a tool for projecting future workloads, coordinating personnel requirements, managing resources, and tracking performance. 
The Marine Corps uses the Principal End Item Stratification Module within the Material Capability Decision Support System to determine its depot-level maintenance requirements. Army and Marine Corps guidance identifies workload as a key planning factor for supporting the expected life of a materiel system. For example, Army Regulation 750-1, Army Materiel Maintenance Policy, states that a depot maintenance capability will be established and sustained on the basis of workload generated by those weapon systems and materiel that are essential to the completion of the Army's primary roles and mission. The Marine Corps' Depot Level Maintenance Program guide establishes general guidelines for planning workloads for the depots. Although the services have guidance, systems, and processes for workload planning, depot officials told us that the workload forecasts they receive from their major commands are unreliable beyond the current fiscal year. Officials cited various factors that contribute to workload uncertainties, such as volatility in workload requirements; the changing wartime environment; budget instability, including the timing of and heavy reliance on supplemental funding; and unanticipated changes in customer orders. Depot officials also cited other factors such as delayed work returning from theater and workload cancellations. Depot officials told us that they were not in a position to address these factors on their own, and that reducing or mitigating future workload uncertainties would require substantial involvement of the service headquarters organizations and major commands that are responsible for managing the depots. Officials at the TACOM Life Cycle Management Command, one of the commands that support two Army depots, said that they too had difficulty forecasting workload flowing to the depots because of factors that were outside their control, such as technology development and surge requirements. Marine Corps Logistics Command officials said that they are currently implementing an enterprise-level maintenance program that focuses on how to better identify future-year requirements. Army and Marine Corps depot officials expressed particular concern that they lacked information on workloads that might replace some of their current work on legacy systems that is expected to decline due to various factors, including a drawdown of U.S. forces resulting from a decline in combat operations in Iraq and from the 2005 BRAC decisions. For example, Anniston Army Depot's work on the M1 Abrams tank fleet is projected to decrease from about 6,000 tanks to 2,500 tanks by fiscal year 2013, as a result of the Army's projected decline in demand. In addition, the 2005 BRAC decision is expected to reduce future workload at the Marine Corps' Barstow depot by about 30 percent by fiscal year 2011, when BRAC is fully implemented. Moreover, Army and Marine Corps officials noted that the surge in workload resulting from operations in Iraq could be masking a decline in traditional organic depot work that occurred during this operation. Furthermore, these officials expressed concern that they lack information on workload associated with new and modified systems in the acquisition pipeline that will require future maintenance support at the depots. Depot officials also said that they are not involved in the sustainment portion of the life cycle management planning process for new and modified systems. 
Army Aviation and Missile Command officials said that the life cycle sustainment planning process is a responsibility of the program manager. While the command is operationally aligned with the program manager and plays a significant role in deciding how weapon systems will be supported, the depots are not included in this planning process. Both the Army's and the Marine Corps' depot maintenance strategic plans recognize that forecasting workload is important to the depots. However, while the Army's strategic plan notes the need to identify sufficient work for its depots, it does not explain how or when the Army will take steps to develop more reliable forecasts or take other steps that could reduce or mitigate depot workload uncertainties. The Marine Corps' strategic plan also mentions workload estimating, stating that the Marine Corps plans to forecast depot maintenance workload with sufficient lead time to allow it to analyze the required depot capabilities. However, the strategic plan does not specify how the depots will be involved in this process, how this process will be accomplished, or who is to be held accountable for ensuring that this process is performed. Neither the Army's nor the Marine Corps' strategic plan addresses whether and how the depots will be integrated into the sustainment portion of the life cycle management planning process for new and modified weapon systems. During this process, weapon system program managers plan for how and where a new or modified system will be supported and maintained in the future—decisions that have a profound impact on planning future depot workload and related infrastructure, capital investments, and workforce requirements. According to depot officials, they are not involved in the program managers' planning because no clear process exists that would enable them to have input. The department's overarching acquisition guidance, DOD Directive 5000.01, states that the program manager shall be the single point of accountability for accomplishing program objectives for total life-cycle systems management, including sustainment. While program managers are required to assign work to the depots to maintain core capabilities, they have no formal requirement to include the depots in the sustainment planning process to determine how a weapon system will be supported. In prior reports, we have noted that program managers often make decisions to contract out the repair of new and modified systems without considering the impact of these decisions on the requirement to maintain core capability for essential systems in military depots. Our recent report on core depot maintenance indicates that shortcomings in DOD's acquisition guidance and its implementation have resulted in DOD program managers not identifying and establishing required core capability at military depots in a timely manner—capability that will be needed to support future maintenance requirements for new and modified systems. The depots' lack of involvement in life cycle management planning limits their ability to influence how weapon systems being acquired by their service will be sustained and to plan for and develop the capabilities they will need to support these systems in the future. 
For example, even though Red River Army Depot is designated as the primary repair facility for Bradley Fighting Vehicles, depot planners stated that they were not involved in the Army's life cycle management planning process to decide which facility would have full capability to perform the test and repair work on the newer Bradley A3 model. As a result, this depot received minimal work associated with this weapon system, while the majority of the work—including the testing on the turret and the major overhaul of the system—went to a private contractor. According to depot officials, including the depots in the sustainment portion of the life cycle management planning process cannot be achieved without full participation and coordination between the sustainment and acquisition communities, and without consistent communication between the services' major commands and the depots during the process of determining how new and modified systems will be sustained. The Army Materiel Command's Industrial Base Strategic Plan notes the importance of developing a process that provides closer interface between the acquisition and sustainment communities to ensure that future weapon system requirements are matched with organic sustainment capabilities early in the acquisition process. Also, the Marine Corps Logistics Command's Alignment and Integration Strategic Plan emphasizes the importance of the command's role in assisting program managers with the planning and execution of total life cycle management responsibilities for their weapon systems. Without a clear process to integrate the depots into the sustainment portion of the life cycle management planning process, the depots cannot determine what capabilities are needed to plan for future workloads and what other resources are needed to support new and modified weapon systems. The Army and Marine Corps face some challenges in ensuring that their maintenance depots will remain operationally effective, efficient, and capable of meeting future maintenance requirements. The increased reliance on contractor support for weapon systems, including contractor support provided through performance-based logistics, and the continuing uncertainties about workload increase the risk that the depots may not be postured and resourced to meet future requirements. These issues, if not addressed, could adversely affect materiel readiness and future depot operations and potentially lead to equipment shortages and delays in meeting combatant commanders' requirements. While strategic planning is a valuable management tool to help mitigate the challenges facing the depots, the Army and Marine Corps plans as currently written are not comprehensive enough for this purpose. The plans do not fully address all the elements needed for a results-oriented management strategy or the specific issues that OSD directed each service, at a minimum, to include in its plan. Furthermore, until the services address problems caused by workload uncertainties, the depots will continue to have difficulties planning for future maintenance requirements. Regarding workload uncertainties for systems that have yet to enter the defense inventory, without a clear process for integrating the depots into the sustainment portion of the life cycle management planning process, the depots may continue to lose key opportunities to develop needed capabilities that would enable them to provide depot-level maintenance support for new and modified systems. 
To provide greater assurance that the military depots will be postured and resourced to meet future maintenance requirements, we recommend that the Secretary of Defense direct the Secretary of the Army and the Commandant of the Marine Corps to take the following three actions to update the depot maintenance strategic plans: (1) fully address all elements needed for a comprehensive results-oriented management framework, including those elements partially addressed in the current plans, such as the approaches for accomplishing goals and objectives, stakeholder involvement, external factors that may affect how goals and objectives will be accomplished, performance goals that are objective, quantifiable, and measurable, resources needed to meet performance goals, performance indicators used to measure outcomes and gauge progress, and an evaluation plan that monitors goals and objectives; (2) fully address the four specific issues of logistics transformation, core capability assurance, workforce revitalization, and capitalization, consistent with OSD criteria provided to the services; and (3) develop goals and objectives, as well as related strategic planning elements, aimed at mitigating and reducing future workload uncertainties. As part of this last effort, the Army and Marine Corps should develop a clear process for integrating the depots' input into the sustainment portion of the life cycle management planning process for systems in the acquisition pipeline. In written comments on a draft of this report, DOD concurred with all three of our recommendations to provide greater assurance that the military depots will be postured and resourced to meet future maintenance requirements. DOD's written comments are reprinted in appendix IV. The department concurred with our first two recommendations to direct the Army and the Marine Corps to update their depot maintenance strategic plans to fully address all elements needed for a comprehensive results-oriented management framework and to fully address the four specific issues of logistics transformation, core capability assurance, workforce revitalization, and capitalization, consistent with OSD criteria provided to the services. DOD stated that it will reiterate and incorporate these recommendations into the next update of the strategic plan. While this is a step in the right direction, DOD did not indicate what steps, if any, it plans to take to ensure that the Army and Marine Corps will also incorporate these recommendations into their depot maintenance strategic plans. Therefore, DOD may need to take further action by following up with the Army and Marine Corps to ensure that they fully incorporate these recommendations into their depot maintenance strategic plans. DOD also concurred with our third recommendation to direct the Army and Marine Corps to develop goals and objectives for mitigating and reducing future workload uncertainties and to integrate the depots' input into the sustainment portion of the life cycle management planning process. DOD stated that the Army has initiated several actions to mitigate and reduce uncertainties in projecting future depot workload and to ensure the viability of the depot workforce. DOD said that the Army has established integrated product teams to address core workload shortfalls and has developed an action plan and the resources and time line required to transfer sufficient workload from the original equipment manufacturers to the applicable Army depot to meet core requirements. 
In addition, DOD said that the Army has begun to develop policy that would require review of Core Logistic Assessments / Core Depot Assessments and Source of Repair Analyses during the milestone decision review process, and to develop a comprehensive training package for export to program executive officers and program managers, Life Cycle Management Commands, and depots. While these are positive steps that would help to improve future workload planning, they focus on addressing core requirements and do not fully address the need to mitigate and reduce workload uncertainties or to include the depots' input in the sustainment portion of the life cycle management planning process for systems in the acquisition pipeline. We continue to believe the depots will have difficulties planning for future maintenance requirements until the services develop solutions for mitigating and reducing uncertainties across the full range of the depots' workloads. We also continue to believe that without a clear process for integrating the depots into the sustainment portion of the life cycle management planning process, the depots will continue to lose key opportunities to develop capabilities that would enable them to provide depot-level support for systems in the acquisition pipeline. The department reiterated its plan to incorporate our recommendations into the next update of the strategic plan. As we stated above with regard to our first two recommendations, DOD may need to take further action by following up with the Army and Marine Corps to ensure that they fully incorporate this recommendation into their depot maintenance strategic plans. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have questions about this report, please contact me at (202) 512-8365 or [email protected]. Key contributors to this report are listed in appendix VI. To evaluate the extent to which the Army's and Marine Corps' strategic plans provide a comprehensive strategy for meeting future depot maintenance requirements, we assessed the Army's April 2008 Depot Maintenance Enterprise Strategic Plan and the Marine Corps' February 2008 Depot Maintenance Strategic Plan to determine if they are consistent with the criteria for developing a comprehensive results-oriented management framework indicated in GAO's prior work on strategic management plans. While the Office of the Secretary of Defense (OSD) required all the services to prepare and submit such plans, we focused our work on the Army's and Marine Corps' plans because of their significant roles in supporting overseas contingency operations in Iraq and Afghanistan. We also determined if the Army's and Marine Corps' strategic plans for depot maintenance fully addressed the criteria for developing a strategic plan specified in the Department of Defense (DOD) March 2007 Depot Maintenance Strategy and Implementation Plans. Furthermore, we determined if the Office of the Under Secretary of Defense for Logistics and Materiel Readiness assessed the services' depot maintenance strategic plans and took follow-on actions to ensure the plans met its criteria. 
In addition, we reviewed and addressed issues regarding uncertainties in projecting future workloads, which is necessary for effective depot planning. We also interviewed depot management officials to determine the depots' participation in the sustainment portion of the life cycle management planning process to effectively plan and prepare for future maintenance work and related capabilities. To gain further perspective on the services' efforts to plan for the future of the depot maintenance facilities, we interviewed and obtained documentation from officials at Headquarters, Department of the Army, Washington, D.C.; U.S. Army Materiel Command, Fort Belvoir, Virginia; Headquarters Marine Corps, Arlington, Virginia; Marine Corps Systems Command, Quantico, Virginia; and Marine Corps Logistics Command, Albany, Georgia. We also visited, interviewed, and obtained documentation from officials at the Army's five maintenance depots that perform organic-level maintenance: Anniston Army Depot, Anniston, Alabama; Corpus Christi Army Depot, Corpus Christi, Texas; Letterkenny Army Depot, Chambersburg, Pennsylvania; Red River Army Depot, Texarkana, Texas; and Tobyhanna Army Depot, Tobyhanna, Pennsylvania. In addition, we visited, interviewed, and obtained documentation from officials at the Marine Corps' two maintenance depots that perform organic-level maintenance: Maintenance Center Albany, Georgia, and Maintenance Center Barstow, California. Furthermore, we obtained data and information on actions aimed at improving depot productivity at the Army and Marine Corps depots, as well as data on the depots' workforce trends from fiscal year 1999 through fiscal year 2008. We determined that the data used were sufficiently reliable for our purposes. We conducted this performance audit from August 2007 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. Both the Army and Marine Corps depots have reported actions they have taken to improve their productivity. The depots have reported that they have improved their maintenance operations' productivity and efficiency through the use of several process improvements, including Lean, Six Sigma, Value Stream Mapping, and Theory of Constraints. They report that such improvements have allowed them to identify and reduce or eliminate unnecessary work-related functions and other impediments that created restrictions or “bottlenecks” in their production processes and have resulted in increases in the number of weapon systems or other components processed, reductions in repair cycle times, and reductions in the cost of production. The Army and Marine Corps have issued a policy and a guidebook, respectively, aimed at improving the depots' repair processes, including information on assessing the depots' progress in making, sharing, and sustaining improvements and in measuring overall productivity. We questioned depot officials about the data associated with these improvements and relied on their professional judgment concerning the adequacy and reliability of the data. Table 4 shows information reported by the Army depots on the results of initiatives to improve the repair process for selected weapon systems—one from each of the five Army depots. 
The Army depots generally assess the results of their productivity improvements based on increases in the number of units produced, reductions in repair cycle times, and reductions in production costs. The third column shows the period during fiscal years 2004 through 2007 in which the initiative was implemented. The fourth column shows the average reduction in repair cycle time expressed in days, and the fifth column shows this reduction expressed as the percentage by which repair time was reduced. The final column shows the estimated cost reduction or savings that the Army depots reported for the period. Army depot officials told us that there is limited sharing of lessons learned or cross application among the depots and that increased sharing and cross application could contribute to additional reductions in repair days and cost savings or cost avoidances. Table 5 shows information reported by the Marine Corps depots on the results of initiatives to improve the repair process for selected weapon systems repaired at its two depots for fiscal years 2004 through 2007. The Marine Corps depots generally assess the results of their productivity improvements based on reductions in repair cycle times. The second column shows the average number of days taken for the repair cycle in fiscal year 2004, the baseline year before the depots initiated their process improvement initiatives. The third column shows the average number of days the depots reported for repair cycle time in fiscal year 2007, after implementing process improvement initiatives. The fourth and fifth columns show the reported reduction in repair time expressed as a number of days and as the percentage by which repair time was reduced. The Marine Corps depots generally do not capture or report cost savings or cost avoidances resulting from such improvements. A Marine Corps official responsible for managing the results of the depots' improvements told us that some of the reductions in repair days were achieved by using overtime and multiple shifts. The official also told us that there is limited sharing of lessons learned or cross application among the depots and that increased sharing and cross application could contribute to additional reductions in repair days and in cost savings or cost avoidances. Workforce levels for the Army and Marine Corps depots have been increasing along with the workloads since fiscal year 2003. The depots have accommodated the surge in workload by hiring primarily temporary and contract employees. Depot officials told us they hired temporary and contract workers in lieu of permanent government workers due to uncertainties about the duration of the overseas contingency operations in Iraq and Afghanistan. The depots plan to reduce temporary and contract labor as workload related to contingency operations decreases. Although uncertainties about future workload inhibit their workforce planning, we found that the depots' workforce strategic planning addresses anticipated personnel and skill gaps. For example, while the workloads have increased, the depots have been able to maintain a skilled workforce. In addition, with a large percentage of depot workers becoming eligible to retire over the next 5 years, some of the depots are working with local community colleges to provide specialized programs focused on skills needed by the depots. The Army and Marine Corps depots' workforce was relatively stable from fiscal year 1999 through fiscal year 2002. 
The depots report that the increase in workload associated with the Global War on Terrorism (GWOT) began during fiscal year 2003. Before GWOT, the total depot workforce was more than 89 percent permanent government employees, but at the end of fiscal year 2008 permanent government employees made up only 62 percent of the total depot workforce. After remaining relatively constant from fiscal year 1999 through fiscal year 2002, the total workforce increased from fiscal year 2003 through fiscal year 2008, along with the increases in workload associated with GWOT. From fiscal year 2003 through fiscal year 2008, the Army depots' workforce increased by 106 percent and the Marine Corps' by 99 percent. Figures 1 and 2 illustrate these changes in the Army's and the Marine Corps' depots' workforces from fiscal year 1999 through fiscal year 2008. The trends reflected in figures 1 and 2 show marked changes in the composition of the Army's and Marine Corps' depots' workforces since fiscal year 2003. The largest increases have been in the number of temporary workers and contract labor hired in lieu of permanent staff. As GWOT continued and the workload continued to increase, the depots continued to hire more temporary and contract workers to accommodate the increased workload. The depots plan to reduce the number of temporary and contract workers as GWOT-related workload decreases. As figures 1 and 2 illustrate, in fiscal year 2008, 37 percent of the Army depots' workforce and 48 percent of the Marine Corps depots' workforce were composed of temporary and contract workers. Specifically, temporary workers represented about 15 percent of the Army depots' workforce and 25 percent of the Marine Corps depots' workforce, while contract workers represented about 22 percent of the Army depots' workforce and about 23 percent of the Marine Corps depots' workforce. We have previously reported that the depots may face challenges that could inhibit effective strategic workforce planning. These challenges include the high average age of workers, difficulty in maintaining depot viability if large numbers of eligible skilled workers retire, and the lack of an available source of trained and skilled personnel. The Army and Marine Corps depots have reduced the average age of their permanent workers. For fiscal year 2008, the age of permanent workers in the Army's depots averaged 45, and the age of permanent workers in the Marine Corps' depots averaged 46. Since fiscal year 1999, the average age of the Army's permanent depot workers has decreased by 9 percent, while that of the Marine Corps' has decreased by 12 percent. Depot officials attributed this reduction to the retirement of older permanent workers; the availability of younger, qualified applicants; and in-house training programs. The depots have developed workforce strategic plans that address current and anticipated personnel and skill gaps. These plans include maintaining a mix of personnel with the skills and capabilities needed to satisfy current workload requirements. According to Army and Marine Corps depot officials, permanent, skilled workers are readily available. Further, the depots forecast a high rate of retirement eligibility in the next 5 years, and they are taking steps to address the potential loss of skilled personnel. According to Army data, 34 percent of the Army's permanent depot workforce will be eligible for retirement in fiscal year 2013. 
According to Marine Corps data, 43 percent of the Marine Corps’ permanent depot workforce will also be eligible for retirement in fiscal year 2013. Both services’ depots track and monitor personnel who may be eligible to retire soon, considering their skills in order to address potential skill gaps in the future workforce. Both Army and Marine Corps depots address this potential loss of personnel and skills in their workforce strategic plans, and they have instituted various types of recruitment and training programs designed to attract and train workers. In addition to the contact named above, Julia Denman and Tom Gosling, Assistant Directors; Larry Bridges; John Clary; Joanne Landesman; Latrealle Lee; Katherine Lenane; and Christopher Watson made key contributions to this report. | The Army and Marine Corps maintenance depots provide critical support to ongoing military operations in Iraq and Afghanistan and are heavily involved in efforts to reset the force. The Department of Defense (DOD) has an interest in ensuring that the depots remain operationally effective, efficient, and capable of meeting future maintenance requirements. In 2008, in response to direction by the Office of the Secretary of Defense (OSD), the Army and the Marine Corps each submitted a depot maintenance strategic plan. Our objective was to evaluate the extent to which these plans provide comprehensive strategies for meeting future depot maintenance requirements. GAO determined whether the plans were consistent with the criteria for developing a results-oriented management framework and fully addressed OSD's criteria. The depot maintenance strategic plans developed by the Army and Marine Corps identify key issues affecting the depots, but do not provide assurance that the depots will be postured and resourced to meet future maintenance requirements because they do not fully address all of the elements required for a comprehensive, results-oriented management framework. Nor are they fully responsive to OSD's direction for developing the plans. While the services' strategic plans contain mission statements, along with long-term goals and objectives, they do not fully address all the elements needed for sound strategic planning, such as external factors that may affect how goals and objectives will be accomplished, performance indicators or metrics that measure outcomes and gauge progress, and resources required to meet the goals and objectives. Also, the plans partially address four issues that OSD directed the services, at a minimum, to include in their plans, such as logistics transformation, core logistics capability assurance, workforce revitalization, and capital investment. Army and Marine Corps officials involved with the development of the service strategic plans acknowledged that their plans do not fully address the OSD criteria, but they stated that the plans nevertheless address issues they believe are critical to maintaining effective, long-term depot maintenance capabilities. The Army's and Marine Corps' plans also are not comprehensive because they do not provide strategies for mitigating and reducing uncertainties in future workloads that affect the depots' ability to plan for meeting future maintenance requirements. Such uncertainties stem primarily from a lack of information on (1) workload that will replace current work on existing systems, which is expected to decline, and (2) workload associated with new systems that are in the acquisition pipeline. 
According to depot officials, to effectively plan for future maintenance requirements, the depots need timely and reliable information from their major commands on both the amounts and types of workloads they should expect to receive in future years. Depot officials told us that the information they receive from their major commands on future workloads is uncertain beyond the current fiscal year. Officials cited various factors that contribute to these uncertainties, such as volatility in workload requirements, the changing wartime environment, budget instability, and unanticipated changes in customer orders. In addition, depot officials said that they are not involved in the sustainment portion of the life cycle management planning process for new and modified systems. No clear process exists that would enable them to have input into weapon system program managers' decisions on how and where new and modified systems will be supported and maintained in the future. Unless they are integrated into this planning process, these officials said, the depots will continue to have uncertainties about what capabilities they will need to plan for future workloads and what other resources they will need to support new and modified weapon systems. |
In 1974, DOD requested congressional approval to target its reenlistment bonus program toward critical specialties where it was experiencing staffing shortfalls. Before then, the military had provided a $2,000 reenlistment bonus to all servicemembers willing to reenlist, and DOD had raised concerns that this program was not focused enough to meet the services' needs. That year, Congress authorized the use of selective bonuses, which gave the services flexibility to adjust the bonuses paid to reenlistees to aid in staffing the most hard-to-fill critical specialties. Overall, the Selective Reenlistment Bonus Program is intended to increase reenlistments in specialties deemed critical by the Secretary of Defense. To implement the Selective Reenlistment Bonus Program, DOD issued a directive in 1985, updated in 1996, that assigns specific responsibilities for administering the program to the Office of the Secretary of Defense and to the services' Secretaries. Under this directive, the Assistant Secretary of Defense for Force Management Policy, under the Under Secretary of Defense for Personnel and Readiness, is responsible for establishing procedures for administering the Selective Reenlistment Bonus Program. Specifically, the Assistant Secretary of Defense for Force Management Policy is responsible for establishing (1) criteria for designating military specialties that qualify for bonuses, (2) criteria for individual members' eligibility for bonuses, and (3) reporting and data requirements for the annual review and evaluation of programs as well as for individual services' requests for military skill designations. In addition, according to the DOD directive, the Assistant Secretary of Defense for Force Management Policy is responsible for annually reviewing and evaluating the services' enlisted personnel bonus programs in conjunction with the annual budget cycle. These reviews are to include an assessment of the criteria used for designating critical military specialties. As a result of these reviews, the Office of the Secretary of Defense is to make the revisions needed to attain specific policy objectives. At the same time that DOD issued its 1985 directive, it issued an instruction providing the services with guidance for managing their programs. However, this instruction was canceled in 1995, and replacement guidance has not been issued, although the updated overarching directive remains in effect. DOD did not issue replacement guidance because of administrative and legal questions that have only recently been resolved, thus clearing the way for reissuance of the guidance. The canceled instruction required the services to provide a balanced evaluation of five factors in identifying critical specialties: (1) serious understaffing in adjacent years, (2) persistent shortages in total career staffing, (3) high replacement costs, (4) the arduousness or unattractiveness of the work, and (5) whether the specialty is essential to the accomplishment of defense missions. In addition, the instruction required that a reasonable prospect exist for enough improvement in the occupation to justify the cost of providing the bonus. The instruction also required the services to provide DOD with reports on the status of their programs and on the status of the specialties included in their programs. The Selective Reenlistment Bonus Program has experienced substantial cost growth, as shown in figure 1. 
DOD’s budget for the Selective Reenlistment Bonus Program has more than tripled in recent years—from $235 million in fiscal year 1997 to an estimated $789 million in fiscal 2002. The Air Force’s reenlistment bonus budget increased proportionately more than the other services—from $26 million in fiscal year 1997 to over an estimated $258 million in fiscal 2002. DOD’s Selective Reenlistment Bonus Program currently allows the services to pay reenlistment bonuses of up to $60,000, though the services have set different maximums. The service secretaries designate which specialties are eligible to receive bonuses. (See app. II for more discussion of bonus determinations.) Total bonus amounts are determined by multiplying (1) the service member’s current monthly basic pay by (2) the number of additional years of obligated service and by (3) a bonus multiple that can range from 0.5 to 15. The bonus multiples are determined by each service for all specialties they deem critical. For example, an enlistee who earns $24,000 per year and reenlists for 4 years in an occupation with a multiple of 4 would receive a reenlistment bonus of $32,000. This amount is calculated by multiplying the monthly basic pay of $2,000 by the number of reenlistment years (4) and by the multiple (4). The bonus multiples are determined by each service for all eligible specialties, and the occupations that they deem most critical or hardest to fill would generally receive higher multiples. Navy officials told us that they also consider alternative wages that certain specialties can obtain outside of the military when determining the size of the bonus multiplier for a critical specialty. Each of the services has established its own guidance for implementing its selective reenlistment bonus program. This guidance varies by service. Generally, the services’ guidance establishes eligibility criteria for servicemembers and in some cases also defines criteria for selecting specialties for inclusion in the program. The Navy and Marine Corps have adopted all the original criteria that were established by DOD’s 1985 program instruction. The Air Force updated its program guidance in 1998, which only partially reflected DOD’s original program instruction. This instruction includes the criteria for individual servicemembers’ eligibility as well as guidance for the selection of specialties. The Army has established guidance for individual servicemembers’ participation in the program but not specific guidance for determining which specialties should be included in the program. In addition to establishing their own guidance for selecting individuals and occupations to include in the program, the services have also defined some other program characteristics, which also differ between the services as shown in table 1 for fiscal year 2001. While congressional authorization allows the maximum amount of a bonus to be $60,000 and the maximum multiple to be 15, the services determine their own limit. In fiscal year 2001, the maximum bonus ranged from $35,000 to $60,000, and the services’ maximum bonus multiple ranged from 5.0 to 8.0. The table also displays the minimum and maximum reenlistment periods. The Army, Navy, and Air Force pay bonuses to reenlistees with an initial payment and equal annual installments over the reenlistment period. The initial payment is made at the time of the reenlistment or when the reenlistment period begins and is equal to 50 percent of the total bonus. 
The remaining 50 percent is paid in equal annual payments over the term of the reenlistment; these payments are called “anniversary payments.” In the example above, the initial 50-percent payment would be $16,000, and the anniversary payments would be $4,000 for each of 4 years. Starting in fiscal year 2001, the Marine Corps began paying the entire bonus in one lump-sum payment at the beginning of the reenlistment period. It is too early to determine what effect this change will have on the operation of the Marine Corps' selective reenlistment bonus program. For fiscal years 1997-2001, some services did not consistently apply all the criteria they had established to select which specialties to include in the reenlistment program. By not doing so, they broadened the number of eligible specialties and reenlistees who received bonuses. While achieving higher reenlistments, the services have not managed their programs to stay within their congressionally appropriated budgets. As a result, the services spent more on their programs than was appropriated in each of fiscal years 1997-2001. Some services are not consistently using all of the criteria they have established to select critical specialties for the Selective Reenlistment Bonus Program. The Navy and Marine Corps have adopted the original DOD instruction (see background above), which requires a balanced application of all five criteria to identify critical specialties. Marine Corps officials told us that they use all of these criteria in selecting specialties eligible for a bonus. However, the Navy uses only the following four criteria when identifying specialties for inclusion in the program: (1) severe undermanning, (2) severe career undermanning, (3) high training and replacement costs, and (4) skills essential to accomplishing the defense mission. According to Navy officials, any one of these criteria qualifies a specialty for inclusion in the program. According to DOD's only review of the Selective Reenlistment Bonus Program, this is not appropriate, since a particular skill could be a good candidate on the basis of several criteria but inappropriate on the basis of one. While the Air Force considers numerous factors when making determinations about which occupations to include in its program, it does not prioritize its occupations as required by Air Force instructions. As a result, the bonuses paid may not reflect the importance of the specialty. The Air Force has adopted most of the original DOD criteria; however, it does not require a balanced application of those criteria. The Air Force's criteria include (1) shortfalls in meeting current and projected reenlistment objectives (reenlistment rates and the size of specific-year groups, as well as adjacent-year groups), (2) shortages in current and projected noncommissioned officer manning, (3) high training investment and replacement cost for a skill, (4) expected improvement in retention resulting from designation as a selective reenlistment bonus skill, and (5) the priority of the skill. An Air Force review board considers the criteria, and a professional judgment is then made on whether to include a skill in the program. The Army has not adopted all of DOD's original criteria. The Army has established regulations governing the eligibility of individuals for inclusion in the program, but it has not established regulations for selecting occupations to include in its program. As a result, the specialties that the Army selects for bonuses may not be critical. 
According to Army officials, the criteria can fluctuate depending on the Army's current needs. During fiscal year 2002, the Army's criteria for selection of critical specialties included (1) budget constraints, (2) current and projected strengths, (3) retention rates, (4) training constraints, (5) replacement costs, (6) priority military occupational specialties, and (7) shortages within mid-grade levels. The Army uses understaffing as the primary criterion for designating occupations as critical and eligible for bonuses. The services, with the exception of the Marine Corps, have not been applying all of their criteria for selecting specialties to include in their Selective Reenlistment Bonus Programs. This has led to an increase in the number of specialties that the services made eligible for reenlistment bonuses during fiscal years 1997-2001. As the number of specialties eligible for bonuses grew, so did the number of reenlistments receiving bonuses from each service. (See table 2.) In fiscal year 2001, the Navy awarded bonuses to the smallest percentage of specialties among the services, but those awards accounted for the largest number of bonus recipients. The Air Force awarded bonuses in approximately 80 percent of its specialties, and these bonuses were paid to 42 percent of its reenlistees. Between 1997 and 2001, all of the services increased the number of specialties for which they offered reenlistment bonuses. As a result, there was an increase in the total number of reenlistees who received bonuses. For all the services combined, the total number of reenlistees receiving bonuses more than doubled—from approximately 23,000 in fiscal year 1997 to almost 59,000 in fiscal year 2001. Along with this growth in the number of specialties and reenlistees receiving bonuses has been an increase in the average bonus paid—from approximately $5,500 in fiscal year 1997 to over $8,000 in fiscal year 2001. In constant 2001 dollars, the average initial bonus payment has grown from approximately $5,900 in fiscal year 1997 to over $8,000 in fiscal year 2001. The Navy had the greatest increase in average initial payments—from over $7,200 in fiscal year 1997 to almost $11,000 in fiscal year 2001. The Air Force's average initial payment also increased—from approximately $3,900 in fiscal year 1997 to $7,100 in fiscal year 2001. In contrast to the other services, the Army's average bonus fell by $500 between fiscal years 1997 and 2001. From fiscal year 1998 through fiscal year 2001, none of the services' Selective Reenlistment Bonus Programs stayed within their appropriated program budgets. Rather, with the exception of the Marine Corps, the services reprogrammed or realigned funds from other programs within the enlisted personnel budget to make more bonus payments than they were originally funded to pay. The services are able to do this under their budget authority. However, they are restricted from shifting funding amounts of over $10 million from other budget authority lines, such as from officer pay programs, without seeking prior congressional approval for reprogramming of resources. Overall, we found that the Army, Navy, and Air Force did not manage their programs to stay within the budgets appropriated by Congress. Rather, with the exception of the Marine Corps, the services allowed their programs to continue running for the entire fiscal year and exceeded their budget appropriations during each of the past several years. (See app. III for more detail on budget requests and actual initial bonus payments.) 
Even though Congress provided $165 million in additional funding during fiscal years 1997-2001, the three services spent approximately $240 million more on initial bonus payments than Congress had appropriated. During fiscal years 1997-2001, the Navy exceeded its appropriated budget by more than $121 million; the Air Force, by $70 million; and the Army, by about $49 million. However, these services pay 50 percent of their bonuses up front as initial payments and pay the remaining 50 percent in annual installments over the reenlistment period. Consequently, they will have to pay an additional $240 million in anniversary payments in future years. This means that the total cost of the overexpenditures on initial payments made during fiscal years 1997-2001 could be as much as $480 million.

Although the Army, Navy, and Air Force have periodically reviewed their programs during the fiscal year, they have made few adjustments to their programs to stay within their appropriated budgets. With the exception of the Marine Corps, the services either do not establish goals for improvement in critical specialties or do not manage their programs to stay within the goals they have set. For example, while the Navy does establish retention goals for specialties included in the program, it does not prioritize its specialties and modify the bonuses, as needed, to stay within those goals. In fiscal year 2001, for instance, the Navy exceeded its goals in 75 specialties, retaining more than 110 percent of its goal in each. By exceeding its goals in some occupations, the Navy may be neglecting other specialties that could use increased bonuses to improve retention. We found 64 specialties that were below 90 percent of the retention goal for fiscal year 2001. In 50 of these cases, the Navy either reduced the multiples (12 specialties) or made no change (38 specialties) from fiscal year 2000.

During fiscal year 2002, the services experienced a strong recruiting and retention year, which, according to service officials, caused the Army and Navy to scale back or close their programs. The Army expected to exceed its fiscal year 2002 program budget estimates by over $45 million and closed its program 45 days prior to the end of the fiscal year. The Navy lowered the bonus amounts paid during fiscal year 2002 after acknowledging that it would otherwise exceed its fiscal year 2002 appropriation. These actions were taken after both services had already exceeded their budgets for fiscal year 2002. Starting in fiscal year 2001, the Marine Corps instituted a plan to close its program when the budget limit was met. The Marine Corps closed its program in July 2002, when the appropriated budget was met.

According to some service officials, three key factors combined to cause the services to increasingly rely on the Selective Reenlistment Bonus Program to staff critical specialties during fiscal years 1997-2001. These factors were (1) the downsizing of the U.S. military forces during the 1990s, (2) a decline in recruiting in the early to mid-1990s, and (3) fewer reenlistments during the late 1990s. The combination of these factors, according to service officials, has contributed to growth in costs during fiscal years 1997-2002. According to the Congressional Research Service, the shortfalls in recruiting and retention in fiscal years 1998-1999 were the first since fiscal 1979. Regarding downsizing, the U.S. military substantially reduced its number of active-duty military personnel after the end of the Cold War.
During fiscal years 1990-1999, the number of active-duty enlisted personnel declined from 1.7 million to 1.2 million—approximately 34 percent. Part of this reduction in the military force was due to a reduction in the services' recruiting goals. For example, DOD's recruiting goals decreased consistently from 229,172 in fiscal year 1990 to as low as 174,806 in fiscal 1995 before increasing again in the years following. One of the intended purposes of reducing these goals was to arrive at a smaller force by decreasing new enlistments instead of forcing more experienced personnel to leave the military. However, according to DOD officials, fewer new enlistments in the mid-1990s produced too few enlisted personnel to meet the services' needs for mid-level personnel (those with 5-10 years of experience) in the late 1990s.

The services had varying degrees of success in achieving their higher recruiting goals in the late 1990s. For example, in fiscal year 1999, the Army failed to meet its goal—recruiting only 95 percent of its target. The Navy and the Army also failed to meet their recruiting goals in fiscal year 1998, recruiting 88 and 92 percent of their targets, respectively. The services perceived these failures to achieve recruiting goals as a serious problem because of their potential impact on the force structure.

The services also experienced retention problems that coincided with the recruiting shortfalls. While the Army achieved its reenlistment goals for first-term and mid-career enlisted personnel, shortfalls occurred in the career reenlistment term. For example, during fiscal years 1996-1998, the Army's reenlistment rates for the eligible population decreased from 65.7 to 60.1 percent. First-term reenlistment rates for the Navy decreased consistently during 1996-1999, from 32.9 percent of the eligible population to 28.2 percent. Also, during fiscal years 1998-2000, the Air Force did not meet its aggregate retention goal of 55 percent for first-term personnel, achieving 54, 49, and 52 percent, respectively.

DOD canceled the instruction containing criteria for the Selective Reenlistment Bonus Program in 1995 and has not replaced it. According to DOD officials, there were administrative impediments involving the recoupment of reenlistment bonuses from some servicemembers who left the military because of disciplinary actions initiated by DOD. These administrative impediments were resolved in fiscal year 2002 and have cleared the way for issuance of a new DOD instruction. Also, DOD has not provided adequate oversight of the program, nor has it conducted the reviews that its directive, which is still in effect, requires. In the absence of DOD criteria and oversight, the services have not been held accountable for using any criteria to designate critical specialties or for reporting to DOD how they select the specialties. As a result, the services have expanded their programs to include specialties that may not be critical to their missions. In addition, the DOD Comptroller conducts only limited reviews of the budgets the services submit for the program each year. As a result, DOD has had no assurance that the increases in the services' Selective Reenlistment Bonus Program budgets each year were justified.

The Office of the Secretary of Defense has not followed the DOD directive requiring it to establish guidance for the services to use in administering the reenlistment bonus programs.
According to the directive, DOD is responsible for providing the services with guidance to ensure proper program administration through an instruction on (1) establishing criteria for designating the military skills eligible for bonuses, (2) determining individual members' eligibility for awards, and (3) establishing reporting and data requirements for the review and evaluation of annual programs and individual requests for military skill designations. Without this instruction, the services have not had clear direction on how to manage their programs. DOD is currently updating the instruction, and officials stated that they intend to issue it sometime during 2003.

The Office of the Secretary of Defense has provided only limited oversight, which has resulted in little feedback to the services on the administration of their selective reenlistment bonus programs. DOD has conducted only one comprehensive review of the program (in 1991) to determine the best use of resources. Currently, although the Office of the Secretary of Defense is responsible for monitoring and conducting ongoing oversight, it does not conduct detailed annual reviews of the program as required by its directive. Furthermore, although DOD's Comptroller conducts periodic program reviews, these reviews are limited to the services' budget submissions and their justification. In addition, the Comptroller's recently initiated recruiting and retention hearings devote only a small part of each meeting to reviewing the services' Selective Reenlistment Bonus Programs.

The Office of the Secretary of Defense has not complied with its directive requiring it to conduct annual Selective Reenlistment Bonus Program reviews. These reviews are intended to assess the services' programs in conjunction with the annual program budget reviews and to result in recommendations to the Secretary of Defense for measures required to attain the most efficient use of resources. DOD acknowledged that these reviews have not been conducted in recent years, but it is currently taking steps to restart reviews of the services' programs and told us that it plans to complete these reviews by March 2003. In addition, DOD is required by directive to annually review the criteria used to designate eligible military skills and to make any changes needed to attain specific policy objectives. However, DOD has not conducted these annual reviews. Moreover, it has not reviewed the services' processes for establishing their reenlistment bonus programs.

The last comprehensive review of the program was conducted in 1991. However, the services were not required to respond to the findings of that review and consequently took no action on them, and DOD has not conducted any subsequent reviews of this nature. The 1991 review found that the program was generally well managed. However, the review raised concerns about the general nature of the guidance provided to the services and raised questions about 34 percent of the services' specialties eligible for bonuses. In addition, the review noted that a "balanced" application of all the criteria contained in DOD's instruction was needed to ensure that only critical specialties were selected. The report specifically noted that staffing shortfalls alone were not a sufficient basis to qualify an occupation for inclusion in the program.
The report noted that, while chronically undermanned, musicians would not be considered critical for the fulfillment of defense missions and thus would not receive a bonus. The report noted that none of the services, at that time, provided selective reenlistment bonuses for musicians. However, the Army is currently offering bonuses to some musician specialties on the basis of chronic understaffing in those areas.

In a 1996 report, we raised similar concerns regarding DOD's management and oversight of the program, including, among other aspects, how the services determine which skill categories should receive bonuses. In that report, we noted that the Office of the Secretary of Defense had not provided adequate guidance for and oversight of the Selective Reenlistment Bonus Program. Additionally, we noted that its guidance to the services for determining which specialty categories should receive bonuses was too general. As a result, each service used a different procedure for identifying which specialty categories were to receive retention bonuses. With regard to oversight, while DOD guidance required detailed annual reviews of the specialty categories that the services planned to include in their programs, DOD had not conducted these reviews. Our report recommended that DOD (1) provide more explicit guidance regarding the determination of shortage categories and eligibility for bonuses and require the services to establish and document more specific criteria for determining which skills will receive bonuses and (2) monitor the services' adherence to this guidance. DOD took no action on our recommendations, and the program has continued to grow unconstrained since then.

DOD's Comptroller conducts limited annual reviews of the services' program budget submissions. According to the analysts responsible for these reviews, they review high-level program summaries that do not provide insights into the details of how the program is run. They essentially review the services' budget estimates and a small sampling of specialties that the services represent as their top retention-critical specialties. We found that these reviews were limited because of the small number of skills considered, and we questioned some specialties that the services included among their top retention-critical specialties. Some specialties that had appeared on the services' lists for several years were not receiving the highest bonuses. For example, the Navy listed the occupation of Cryptologic Technician (Collection) as being critically undermanned during fiscal years 1997-2002. However, the bonus multiplier for this specialty was never higher than 4.5 during this time frame and was lowered to 4.0 in fiscal year 2001. During that year, the Navy retained only 160 Cryptologic Technicians, or 82 percent of its goal of 194 (out of 431 potential retainees).

Because of the general nature of their reviews, the DOD Comptroller's analysts did not identify inaccuracies in the services' budget estimates. In two instances, the services overestimated the amount of their anniversary payments and used the additional funds to make initial payments: in fiscal year 2001, the Army overestimated its anniversary payments by $9 million, and in fiscal year 2002, the Air Force overestimated its anniversary payments by approximately $17 million.
According to service officials, these estimates are easy to calculate, since they are obligations incurred from the previous years' bonus programs and are known amounts. These unbudgeted initial payments of $26 million also resulted in an additional obligation of $26 million in anniversary payments that must be paid in future years, since these services pay 50 percent of their bonuses up front and must pay the remaining 50 percent over the reenlistment period (a total of $52 million).

The Selective Reenlistment Bonus Program was intended to help the services meet short-term retention problems in selected critical specialties. Instead, the services have expanded the number of specialties included in the program by not using all of their criteria to selectively target critical specialties. As a result, the number of eligible specialties and the corresponding number of enlisted personnel included in the program have increased significantly. The growth in the number of eligible specialties and enlisted personnel who receive reenlistment bonuses may well continue if actions are not taken to constrain this expansion. Actions that may constrain the program's growth include requiring the services to adhere to the criteria they have already established for selecting critical specialties and DOD's issuing an instruction governing the operation of the program. While several factors influenced the program's growth in recent years, it is likely that the impact of these factors could have been mitigated if DOD had replaced the canceled instruction and had exercised appropriate oversight. Without an instruction to guide the services, DOD cannot be sure that the program is being implemented as intended and that adequate internal controls are in place to govern the operations of the services' programs. Without clear DOD guidance and oversight, managing the program and justifying its growth are difficult. In 1996, we raised concerns about the lack of oversight that DOD provided the program, including, among other aspects, in determining which skill categories should receive bonuses. The Office of the Secretary of Defense could have exercised more effective controls over the services' management of their programs had it followed our recommendations to provide appropriate guidance and more active oversight.

In the absence of a DOD instruction governing the services' implementation of the program, we recommend that the Secretary of Defense require the services to do the following:

Apply all the criteria they have established for selecting critical specialties under their Selective Reenlistment Bonus Programs. In the case of the Army, criteria should be established for selecting critical specialties.

Manage their programs to stay within their appropriations or, if circumstances require, provide the Office of the Secretary of Defense with adequate justification for increased expenditures over appropriated amounts.

To improve DOD's oversight and the services' management of the Selective Reenlistment Bonus Program, we recommend that the Secretary of Defense require that the Undersecretary for Personnel and Readiness issue an instruction that provides the services with guidance for administering their programs and for selecting specialties for inclusion in them and conduct annual reviews of the Selective Reenlistment Bonus Program as required by DOD's directive.

DOD provided official comments on a draft of this report.
DOD did not agree with the report's conclusion that the Department cannot be sure that the Selective Reenlistment Bonus Programs are being implemented as intended. While DOD stated that program controls are in place to ensure a reasonable balance between oversight and execution, we found that fundamental management controls were not in place. For example, 7 years ago DOD canceled the key instruction that provided the services with essential guidance for administering their programs, and it has not replaced it. Also, the Comptroller's budget reviews of the services' programs were limited and, in some cases, were not effectively performed. Furthermore, these budget reviews were never intended to provide detailed programmatic oversight. Office of the Secretary of Defense officials told us that the detailed programmatic reviews required by DOD's directive had not been conducted since 1991. Consequently, without these essential management controls, it is not clear how the Department can be sure that the program is being implemented as intended.

DOD concurred or partially concurred with our recommendations. Our evaluation of DOD's comments on these recommendations follows:

DOD concurred with our recommendation to issue an instruction that provides the services with guidance for administering their programs. The Department stated that it has drafted and is staffing a new DOD Instruction 1304.22 that will govern the procedures for administering the program.

DOD partially concurred with our recommendation that the Secretary of Defense require the services to apply all the criteria they have established for selecting specialties under their Selective Reenlistment Bonus Programs. DOD stated that its criteria are sound and that the services' processes are balanced. As we noted in our report, the Department's instruction that was canceled required a balanced application of five criteria. We found that the services do not always apply their criteria in a balanced fashion. DOD also stated that, although the Army does not have a regulation governing its Selective Reenlistment Bonus Program, the criteria it uses are sound. However, we found that the Army's criteria are not clearly defined. Therefore, it is difficult to determine whether the criteria are being applied in a balanced fashion. Not using all the criteria allows the number of specialties included in the program to expand, with a corresponding increase in reenlistments. For example, the number of Army reenlistments with bonuses more than doubled from approximately 7,000 in fiscal year 1997 to over 17,000 in fiscal 2001. DOD also stated that it does not use the program to address aggregate end-strength goals. We did not intend to imply that the program is being used to meet aggregate end-strength goals, and we removed the discussion of the services' use of the program to meet such goals from the report.

DOD partially concurred with our recommendation that the services manage their programs to stay within their appropriations or, if circumstances require, provide the Office of the Secretary of Defense with adequate justification for increased expenditures over appropriated amounts. DOD stated that, in general, it did not concur with an imposition of new centralized control over the services' budget execution. Furthermore, DOD stated its concern that the services need to be able to respond to changes in external factors during the 2 years between budget submission and program execution.
However, as our report points out, the services have repeatedly exceeded their appropriations during the last 5 fiscal years. For example, during fiscal years 1997-2001, the services spent over $240 million above their appropriations. Better program oversight and management by DOD would have required the services to justify exceeding their appropriations during this 5-year period. We do not believe that more accountability for budget execution will diminish the services' ability to respond to changing operational needs.

DOD partially concurred with our recommendation that the Office of the Secretary of Defense conduct annual reviews of the Selective Reenlistment Bonus Program. DOD cited the annual reviews conducted by DOD's Comptroller as an example of routine program reviews. However, as we have previously noted, these reviews are limited to the services' budget submissions and justification, and only a small sample of occupations is included in the budget submission. DOD also stated that part of the budget process includes reviews by the Office of Management and Budget. However, Office of Management and Budget officials told us that their reviews are limited and do not constitute a detailed assessment of the services' programs. DOD's response also stated that its annual defense report will provide a listing of DOD-critical skills and other pertinent information. However, this listing will not represent a detailed programmatic review. DOD's Annual Defense Report is an overarching representation of all DOD programs and does not provide the level of detail required to fully assess the Selective Reenlistment Bonus Program. We continue to believe that our recommendations have merit and have made no adjustments to them.

During the course of our review, the House Appropriations Committee directed the Secretary of Defense to report on several aspects of the Selective Reenlistment Bonus Program and to provide the Committee with that report by March 31, 2003. It also directed us to review and assess that report and report back to the Committee by June 1, 2003. DOD's comments appear in their entirety in appendix IV. DOD also provided technical comments, which we incorporated into the report as appropriate.

We are sending copies of this report to interested congressional committees; the Secretaries of Defense and the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will send copies to other interested parties upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-5559 if you or your staff have any questions regarding this report. Key contributors to this report were Donald Snyder, Kurt Burgeson, James Driggins, Marjorie Pratt, Brian James, Jane Hunt, Earl Williams, and Maria-Alaina Rambus.

To determine the extent to which the services followed their Selective Reenlistment Bonus Program's management criteria, we reviewed their criteria and other documentation of overall program development and execution. We examined the retention rates reported by the services and prepared analyses of program expenditure and growth trends. We then reviewed the factors that the services reported as contributing to program growth: (1) budgetary changes, (2) the effects of downsizing initiated in the early 1990s, and (3) changes in the recruiting and retention climate during the 1990s.
We also attended one of the Department of Defense (DOD) Comptroller's quarterly recruiting and retention briefings by the services, in addition to reviewing materials from previous briefings. To determine how the selective reenlistment bonus program has been used to address retention problems in specialties of most concern to the services, we reviewed the critical specialties they identified in the selective reenlistment bonus sections of their budget justification books. Since the Air Force did not report its top critical retention occupations in its justification books, we examined the critical occupations listed by the Army, Navy, and Marine Corps for fiscal year 1998 (fiscal year 1999 for the Army) to 2003. More specifically, we identified specialties that appeared on the services' lists for 3 or more years. We then reviewed the history of the bonus multiples that the services applied to these occupations to determine how they were used to address the retention problems in those occupations.

To identify trends in the program's budget, we compared the services' budget requests with their actual budget expenditures for fiscal years 1997-2002. As part of this trend analysis, we reviewed both the initial and anniversary payments made during each of the fiscal years and projected them into the future. We also reviewed congressional actions that took place during this time period. We also conducted trend analyses of the number of reenlistees receiving bonuses, changes in the occupations eligible for bonuses, and changes in the average bonus amounts. We were unable to measure the impact of pay increases on the average bonus amounts during this time frame because the multiples used to calculate the bonuses varied from year to year.

To assess whether DOD provided adequate program guidance and oversight, we reviewed legislation and DOD directives and instructions governing the program and evaluated the extent to which the program was meeting its intended purpose as defined by Congress and DOD. We obtained and reviewed the guidance established by the services for implementing their programs, reviewed the criteria contained within that guidance, and assessed the services' adherence to it. We interviewed DOD and Selective Reenlistment Bonus Program officials and reviewed their program oversight and guidance policies and procedures. These interviews were conducted with officials in the Office of the Under Secretary of Defense (Comptroller); Assistant Secretary of Defense (Force Management Policy); Deputy Chief of Staff for Personnel—Army (Professional Development); Deputy Chief of Staff, Personnel—Air Force (Skills Management); Deputy Chief of Naval Operations (Manpower and Personnel); Deputy Chief of Staff for Manpower and Reserve Affairs—Marine Corps; and Deputy Chief of Staff for Programs and Resources—Marine Corps. We also met with officials from the Office of Management and Budget. In addition, we reviewed our own published report and data from the Office of the Secretary of Defense, the Congressional Research Service, and RAND. We also obtained data from DOD and the services on bonus levels, the numbers of personnel reenlisting overall and within the program, critical skills, and retention and recruitment. We reviewed, but did not verify, the accuracy of the data provided by DOD and the services.

The services use DOD's Selective Reenlistment Bonus Program to help meet their staffing requirements.
The selective reenlistment bonus is designed to offer an attractive reenlistment or extension incentive to improve staffing in critical military specialties. The active-duty individuals in the critical military specialties who reenlist or extend their enlistments are to serve for the full period of the reenlistment or extension contract. Under the Selective Reenlistment Bonus Program, there are two methods of bonus payment: initial and anniversary. The initial payment is the first installment paid to the individual when the individual reenlists or begins serving the extension. The initial payment is either 50 percent of the total bonus or 100 percent of the total bonus, called the "lump-sum payment." Any remaining bonus is paid in equal annual installments on the anniversary date for the remainder of the reenlistment contract period. The Office of the Secretary of Defense has established three eligibility zones for the payment of selective reenlistment bonuses. These zones are defined in terms of years of active-duty service: zone A includes reenlistments occurring between 17 months and 6 years of active duty; zone B, from 6 to 10 years; and zone C, from 10 to 14 years. The selective reenlistment bonus multiples are calculated for each of these three zones. (See table 3.) Service members may receive only one selective reenlistment bonus within any one zone and must reenlist or extend their reenlistments for at least 3 years if they accept a bonus.
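These payment and eligibility rules are simple enough to express in a few lines of code. The sketch below is our illustration of the rules just described, not DOD software; the function names and the treatment of the exact zone boundary values are assumptions made for the example.

```python
def payment_schedule(total_bonus, term_years, lump_sum=False):
    """Split a selective reenlistment bonus into its payments.

    Under the 50-percent method, half of the bonus is paid up front and the
    rest is paid in equal annual anniversary installments over the term of
    the reenlistment. Under the lump-sum method (used by the Marine Corps
    beginning in fiscal year 2001), the entire bonus is paid at the start.
    """
    if lump_sum:
        return total_bonus, []
    initial = total_bonus / 2
    anniversary = initial / term_years
    return initial, [anniversary] * term_years


def eligibility_zone(months_of_service):
    """Map months of active-duty service to an eligibility zone.

    Zone A runs from 17 months to 6 years of active duty, zone B from 6 to
    10 years, and zone C from 10 to 14 years; how the exact boundary values
    fall is our assumption.
    """
    if 17 <= months_of_service < 72:
        return "A"
    if 72 <= months_of_service < 120:
        return "B"
    if 120 <= months_of_service <= 168:
        return "C"
    return None  # outside all three zones, so not bonus-eligible


# The example cited earlier in this report: a $32,000 bonus on a 4-year
# reenlistment yields a $16,000 initial payment and four $4,000
# anniversary payments.
initial, anniversaries = payment_schedule(32_000, 4)
assert initial == 16_000 and anniversaries == [4_000] * 4
```

The same function also makes plain why the lump-sum method raises a program's cost in the year of transition: payments that would otherwise be deferred to future anniversary dates are all charged to the current year, consistent with the jump in the Marine Corps' fiscal year 2001 figures described below.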
This appendix describes the growth of the program in constant dollars and the growth in the services' initial payments during fiscal years 1997-2002. The services' budgets for the selective reenlistment bonus program fluctuated during fiscal years 1990-2002: during the military drawdown in the early to mid-1990s, the cost of the program declined, but from fiscal year 1996 through 2002, the budgets of the services' programs grew from $243 million to an estimated $790 million in constant (inflation-adjusted) dollars. Figure 2 displays the cost of the retention bonus program in constant 2002 dollars during fiscal years 1990-2002.

The Army's Selective Reenlistment Bonus Program budget for initial payments grew from $30 million to $72 million during fiscal years 1997-2001. (See fig. 3.) During fiscal years 1997-2001, the Army exceeded its appropriated budget by approximately $49 million after taking into account an additional $64 million that Congress added to the Army's initial payments budget over this period.

The Navy, which recently has had the largest Selective Reenlistment Bonus Program, also experienced budget growth in its initial payments, from $78 million to $234 million during fiscal years 1997-2001. (See fig. 4.) During fiscal years 1997-2001, the Navy exceeded its appropriated budget by more than $121 million after taking into account an additional $44 million that Congress added to the Navy's initial payments budget over this period.

The Marine Corps, from fiscal year 1997 through 2002, had the smallest Selective Reenlistment Bonus Program. The Marine Corps' program is also unique because in fiscal year 2001 it began making lump-sum bonus payments, which resulted in a significant increase in the program's cost for that year. During fiscal years 1997-2000, the Marine Corps' program budget for initial payments grew annually from $8 million to $25 million. However, the transition to lump-sum payments in fiscal year 2001 caused the Marine Corps' budget for new bonus payments to exceed $46 million. (See fig. 5.)

During fiscal years 1997-2001, the Air Force's reenlistment bonus budget for initial payments grew from $13 million to $123 million, an 846-percent increase. (See fig. 6.) During fiscal years 1997-2001, the Air Force exceeded its appropriated budget by more than $70 million after taking into account an additional $57 million that Congress added to the Air Force's initial payments budget over this period.

Because of the recent growth in the Department of Defense's (DOD) Selective Reenlistment Bonus Program, the House Appropriations Committee asked GAO to determine (1) the extent to which the services have followed their criteria for managing their programs and (2) whether DOD has provided adequate guidance for and oversight of the program. The Navy and Air Force have not used all of the criteria they have established for selecting critical military specialties eligible for bonuses under their Selective Reenlistment Bonus Programs, and the Army's guidance does not include specific criteria for selecting critical specialties. Since these services have not used all of their criteria, the number of eligible specialties and the number of enlisted personnel who receive bonuses have expanded. Moreover, the services did not manage their programs to stay within the budgets appropriated by Congress. DOD's budget for the Selective Reenlistment Bonus Program has more than tripled in recent years, from $235 million in fiscal year 1997 to an estimated $789 million in fiscal year 2002. DOD has not provided adequate guidance for and oversight of its Selective Reenlistment Bonus Program. DOD canceled an instruction that established criteria for selecting specialties for the program; without this instruction, DOD cannot be sure that the program is being implemented as intended. Also, DOD has not reviewed the services' processes for selecting critical specialties or for establishing their corresponding bonus levels, despite requirements to do so annually. Thus, DOD has not ensured that the services are implementing their programs appropriately to help improve short-term retention in critical military specialties.
Federal regulations and EEOC policy require federal agencies to report certain EEO complaint-related data annually to EEOC. Agencies report these data on EEOC form 462, Annual Federal Equal Employment Opportunity Statistical Report of Discrimination Complaints. EEOC compiles the data from the agencies for publication in the annual Federal Sector Report on EEO Complaints Processing and Appeals. According to EEOC Management Directive 110, agencies should make every effort to ensure accurate recordkeeping and reporting of these data. In our recent report, we said that reliable data are important to program managers, decisionmakers, and EEOC in identifying the nature and extent of workplace conflicts. We analyzed the data contained in EEOC’s annual federal sector reports to prepare our reports dealing with employment discrimination complaint trends. Because the Postal Service accounts for a large share of complaints filed by federal employees with their agencies, we analyzed forms 462 submitted by the Service for fiscal year 1991 through fiscal year 1998, as well as other complaint data provided at our request. Because our studies generally focused on trends in the number and age of unresolved complaints in inventory, the number of complaints filed, the bases and issues cited in complaints, and complaint processing times, we did not examine the full scope of data reported on form 462. Although we did not examine the Service’s controls for ensuring accurate recordkeeping and reporting or validate the data the Service reported, we examined the data for obvious inconsistencies or irregularities. We requested comments on a draft of this report from the Postmaster General. The Postal Service’s oral comments are discussed near the end of this letter. We performed our work in July and August 1999 in accordance with generally accepted government auditing standards. The most significant error that we identified in Postal Service data involved the number of race-based complaints filed by white postal workers. EEOC requires agencies to report the bases (e.g., race, sex, disability) for complaints that employees file. For fiscal year 1996, the Postal Service had reported that 9,044 (about 68 percent) of the 13,252 complaints filed contained allegations by white postal workers of race discrimination. For fiscal year 1997, the Service had reported that 10,040 (70 percent) of the 14,326 complaints filed contained such allegations. These figures represented significant increases over the figures reported for previous fiscal years. For example, in fiscal year 1995, the Service reported to EEOC that 1,534 of the complaints filed contained allegations by white postal workers of race discrimination. In fiscal year 1994, the figure reported was 2,688. We questioned Postal Service officials about the sudden increase in the number of complaints containing allegations by white postal workers of race discrimination. The officials said that they also had been concerned about these data, and had discussed the data with EEOC officials. After we raised this issue, the officials intensified their efforts to identify the true magnitude and source of the increase and subsequently found that a computer programming error had resulted in a significant overcounting of these complaints. They said that the corrected figures were 1,505 for fiscal year 1996 (or 11.4 percent of the 13,252 complaints filed) and 1,654 for fiscal year 1997 (or 11.5 percent of the 14,326 complaints filed). They also provided these figures to EEOC. 
In explaining how the error occurred, the officials said that each automated case record in the complaint information system contains a data field for race, which is to be filled in with a code for the applicable racial category when an employee alleges racial discrimination. If an employee alleges discrimination on a basis or bases other than race, this data field is to remain blank. According to the officials, the faulty computer program counted each blank racial data field as indicating an allegation by a white employee of racial discrimination. These results were then tallied with complaints in which the data field was properly coded as an allegation by a white employee of racial discrimination. The officials advised us that the programming error had been corrected. Although we did not examine the computer program, our review of the data reported on the Postal Service's form 462 for fiscal year 1998 appeared to confirm that the correction had been made.
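The logic of the error the officials described can be illustrated in a few lines. The sketch below is our reconstruction for illustration only, not the Service's actual program; the record layout and field names are hypothetical.

```python
# Hypothetical case records: race_code holds a racial-category code only when
# the complaint alleges race discrimination; otherwise the field is blank.
complaints = [
    {"case": 1, "race_code": "WHITE"},  # race discrimination alleged
    {"case": 2, "race_code": None},     # other basis alleged; field left blank
    {"case": 3, "race_code": None},     # other basis alleged; field left blank
]

# Faulty tally: a blank field is treated the same as the "WHITE" code, so
# every complaint with no race allegation is counted as a race-based
# complaint filed by a white employee.
faulty = sum(1 for c in complaints if c["race_code"] in ("WHITE", None))

# Corrected tally: count only records explicitly coded "WHITE".
corrected = sum(1 for c in complaints if c["race_code"] == "WHITE")

print(faulty, corrected)  # 3 versus 1
```

An inflation of this kind grows with the share of complaints citing non-race bases, which is consistent with the corrected figures falling from roughly 68-70 percent of complaints filed to about 11 percent.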
Other errors that we found in data that the Service reported on form 462 related to the age of cases in the inventory of unresolved complaints. EEOC requires agencies to report statistics on the length of time that cases have been in the agencies' inventories of unresolved complaints, from the date of complaint filing. These data are broken out by each stage of the complaint process—acceptance/dismissal, investigation, hearing, and final decision. We questioned figures for fiscal year 1997 about the age of (1) cases pending acceptance/dismissal, because the reported total number of days such cases had been in inventory seemed unusually high, and (2) cases pending a hearing before an EEOC administrative judge, because the reported average age of such cases seemed unusually low. After we brought the questionable figures to the attention of the Postal Service EEO Compliance and Appeals Manager, he provided corrected figures and said that the errors, like the problem with the reporting of complaint bases described previously, were due to a computer programming error. He said that the faulty computer program had been corrected. In addition, the Service provided the corrected figures to EEOC.

We also found that the Postal Service has not been reporting all issues—the specific conditions or events that are the subjects of complaints—as EEOC requires. Because some complaints involve more than one basis or more than one issue, EEOC's instructions for completing part IV of form 462 require agencies to include all bases and issues raised in complaints. While the Postal Service's complaint information system allows more than one complaint basis (like racial and sexual discrimination) to be recorded, the system's data field allows only one "primary" issue (like an adverse personnel action) to be recorded for each complaint, regardless of the number of issues that a complainant raises. Although this practice results in underreporting complainants' issues to EEOC, the EEO Compliance and Appeals Manager said that the Postal Service adopted this approach to give the data more focus by identifying the primary issues driving postal workers' complaints. This matter has not been resolved. In order to report more than one issue for each complaint, the Service would have to modify the automated complaint information system to allow for the recording of more than one issue for a complaint.
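The effect of this single-issue design on the reported statistics can be sketched briefly. The records and field names below are hypothetical, ours rather than the Service's; the sketch simply contrasts the counting rules described above.

```python
# Hypothetical records: several bases can be recorded per complaint, but only
# one "primary" issue, no matter how many issues the complainant raised.
complaints = [
    {"bases": ["race", "sex"], "primary_issue": "adverse personnel action",
     "issues_raised": ["adverse personnel action", "leave"]},
    {"bases": ["disability"], "primary_issue": "other pay",
     "issues_raised": ["other pay"]},
]

# Counting every basis cited can exceed the number of complaints filed
# (3 bases from 2 complaints), while counting only the primary issue
# understates the issues complainants actually raised (2 reported of 3).
bases_counted = sum(len(c["bases"]) for c in complaints)
issues_counted = len(complaints)  # one primary issue per complaint
issues_raised = sum(len(c["issues_raised"]) for c in complaints)
print(bases_counted, issues_counted, issues_raised)  # 3 2 3
```

Reporting all issues would require a one-to-many record structure, which is the system modification discussed above.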
However, we have reported that part IV of form 462 for reporting statistics on bases and issues is methodologically flawed and results in an overcounting of bases and issues. We have made a recommendation to EEOC that it correct this problem, and the agency said that it would address our concerns. Therefore, we believe that it would be prudent for the Postal Service to wait for EEOC to resolve this issue before modifying its data recording and reporting practices.

In addition to the discrepancies already noted, we found that the Postal Service's statistical reports to EEOC for fiscal years 1996 and 1997 did not include data for complaints involving certain categories of primary issues. The form 462, which EEOC requires agencies to complete, contains a list of issues. For its own management needs, the Service supplemented EEOC's list with three additional categories of specific issues: (1) denial of workers' compensation, (2) leave, and (3) other pay. However, we found that in completing part IV of EEOC form 462 for fiscal years 1996 and 1997, the Service omitted the data about complaints in which these additional issues were cited. After we brought our observations to the attention of Service officials, they provided the omitted data to EEOC. The officials explained that, for fiscal year 1998, in lieu of including data about complaints involving the three additional issues on part IV of form 462, they provided these data separately to EEOC. The EEO Compliance and Appeals Manager explained that he did not want to "force fit" the data about the three issues into one of the categories listed on the form 462, such as "other," because the issues thereby would lose their identity and significance. He added that part IV of form 462 needs to be revised because the categories of issues listed are too broad and do not recognize emerging issues.

Further, we found certain instances of underreporting of the bases and issues cited in complaints for fiscal year 1995. After we brought the underreporting to the attention of Postal Service officials, they provided corrected data to EEOC and us. Service officials attributed this underreporting to difficulties associated with implementing a new complaint information system in fiscal year 1995.

Both Postal Service management and EEOC need complete, accurate, and reliable information to deal with EEO-related workplace conflicts. Discrepancies that we found in our limited review of the Postal Service's EEO complaint data raised questions about the completeness, accuracy, and reliability of the reported data, particularly data generated through the automated complaint information system. All but one of the reporting problems we found and their underlying causes appear to have been corrected. However, because we examined only a limited portion of the reported data for obvious discrepancies and because the errors we identified were related to data generated by an automated complaint information system put in place in 1995, we have concerns about the completeness, accuracy, and reliability of the data that we did not examine.

To help ensure that the EEO complaint data submitted to EEOC are complete, accurate, and reliable, we recommend that you review the Postal Service's controls over the recording and reporting of these data, including evaluating the computer programs that generate data to prepare the EEOC form 462, Annual Federal Equal Employment Opportunity Statistical Report of Discrimination Complaints. We recognize that recording and reporting issues raised in complaints are matters that cannot be completely addressed until EEOC resolves the methodological flaws in part IV of form 462.

In oral comments on a draft of this report made on August 20, 1999, the Postal Service Manager, EEO Compliance and Appeals, generally concurred with our observations and offered comments of a clarifying nature. In response to our recommendation that the Service's controls over the recording and reporting of EEO complaint data to EEOC be reviewed, this official said that the Postal Service plans to adopt more comprehensive management controls to ensure that the data submitted are complete, accurate, and reliable. The official further said that these controls would involve (1) an analysis of trend data to identify anomalies and (2) an examination of data categories in which discrepancies have previously been found. He also said that complaint information system controls would be examined to determine whether they ensure that data recorded and reported are complete, accurate, and reliable. He said, however, that because the complaint information system has been certified for year 2000 compatibility and because the Service has decided not to modify any computer systems until March 2000, any modifications to improve the complaint system will not be made until then. We believe that the actions the Postal Service proposes, if carried out, will address the substance of our recommendation.

We are sending copies of this report to Senators Daniel K. Akaka, Thad Cochran, Joseph I. Lieberman, and Fred Thompson and Representatives Robert E. Andrews, John A. Boehner, Dan Burton, William L. Clay, Elijah E. Cummings, Chaka Fattah, William F. Goodling, Steny H. Hoyer, Jim Kolbe, John M. McHugh, David Obey, Harold Rogers, Joe Scarborough, Jose E. Serrano, Henry A. Waxman, and C. W. Bill Young in their capacities as Chair or Ranking Minority Member of Senate and House Committees and Subcommittees. In addition, we will send a copy to Representative Albert R. Wynn. We will also send copies to the Honorable Ida L. Castro, Chairwoman, EEOC; the Honorable Janice R. Lachance, Director, Office of Personnel Management; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will make copies of this report available to others on request.

Because this report contains a recommendation to you, you are required by 31 U.S.C. 720 to submit a written statement on actions taken on this recommendation to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this report and to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. If you or your staff have any questions concerning this report, please contact me or Stephen Altman on (202) 512-8676. Other major contributors to this report were Anthony P. Lofaro, Gary V. Lawson, and Sharon T. Hogan.

Michael Brostek
Associate Director, Federal Management and Workforce Issues
GAO reviewed certain discrepancies in the complaint data that the Postal Service reported to the Equal Employment Opportunity Commission (EEOC) and the need for the Service to take additional steps to ensure that such data are complete, accurate, and reliable. GAO noted that: (1) in GAO's limited analyses of the data the Service reported to EEOC, GAO found errors in statistics on the underlying bases for EEO complaints and on the length of time complaints had been in inventory; (2) GAO also found that required data on the issues raised in complaints were not fully reported because the Service's complaint information system records only one primary issue per complaint; (3) these discrepancies were generally linked to statistical reports generated by the Service's automated complaint information system; (4) after GAO brought these discrepancies to the attention of Postal Service staff, they promptly corrected them and appeared to correct the underlying causes for the errors, with one exception; (5) that situation need not be resolved until EEOC revises its reporting form; and (6) because GAO examined only a portion of the reported data for obvious discrepancies and because the errors GAO identified were related to data generated by an automated complaint information system put in place in 1995, GAO has concerns about the completeness, accuracy, and reliability of the data that GAO did not examine.
The No Child Left Behind Act of 2001 (NCLBA), which reauthorized the Elementary and Secondary Education Act (ESEA), is designed to improve the education of all students and the quality of teachers. NCLBA requires that all teachers of "core academic subjects"—defined to mean English, reading or language arts, mathematics, science, foreign languages, civics and government, economics, arts, history, and geography—be "highly qualified." To be highly qualified, teachers must (1) have at least a bachelor's degree, (2) be certified to teach by their state, and (3) demonstrate subject matter competency in each core academic subject that they teach. A teacher's options for demonstrating subject matter competency vary according to whether the teacher is new and the grade level being taught. New elementary school teachers must demonstrate subject matter competency by passing a rigorous state exam in the basic elementary school curriculum; new middle or high school teachers may establish that they are highly qualified either by taking a rigorous state exam or by successfully completing a degree (or equivalent credentialing) in each core academic subject taught. In addition, NCLBA allows current teachers to demonstrate subject matter competency based on a "high objective uniform state standard of evaluation." For example, under these uniform state standards, a combination of experience, expertise, and professional training could be used to meet the NCLBA subject matter competency requirements.

Education has issued guidance to states on how to apply NCLBA requirements to all teachers, including special education teachers. According to Education's January 2004 guidance, special education teachers who provide instruction in core academic subjects, such as teachers in self-contained classrooms, are required to comply with the NCLBA subject matter competency requirements. In contrast, those special educators who do not provide instruction in core academic subjects, such as those who provide consultative services to highly qualified general educators, do not have to comply with the NCLBA teacher requirements. In addition, Education's March 2004 guidance provided additional flexibility on the implementation deadline and competency requirements for some special education teachers. Specifically, the guidance stated that educators in eligible rural areas who are highly qualified in at least one core academic subject they teach would have 3 additional years to demonstrate subject matter competency in other academic areas. The guidance also states that teachers who provide instruction in multiple core academic subjects will be able to demonstrate their subject matter competency through one process under their states' uniform standards, such as taking a single test that covers multiple core academic subjects.

IDEA is the primary federal law that addresses the unique needs of children with disabilities, including, among others, children with specific learning disabilities, speech and language impairments, mental retardation, and serious emotional disturbance. The law mandates that a free appropriate public education be made available for all eligible children with disabilities, ensures due process rights, requires an individualized education program (IEP) for each student, requires the inclusion of students with disabilities in state and districtwide assessment programs, and requires the placement of students in the least restrictive environment.
Under IDEA, states are required to establish special education teacher requirements that are based on the highest requirements in the state for personnel serving children and youth with disabilities. Congress is considering including new special education teacher qualifications in the reauthorized IDEA. Under H.R. 1350, a new definition of "highly qualified," as it refers to teachers, would be added with the same meaning as in NCLBA. In contrast, S. 1248 would add an extensive definition of "highly qualified" with respect to the qualification of educational personnel, while taking into account differences between special education and general education teachers. For example, under S. 1248, special education teachers who consult with secondary school core academic subject teachers for children with disabilities would need to be fully certified in special education and demonstrate the knowledge and skills necessary to teach students with disabilities in order to be highly qualified. In addition, S. 1248 proposes to extend the deadline for meeting the highly qualified teacher requirements by 1 year—to school year 2006-2007.

Two offices within the Department of Education are responsible for addressing special education teacher qualifications: the Office of Elementary and Secondary Education (OESE) and the Office of Special Education Programs (OSEP). The enactment of NCLBA significantly changed the expectations for all teachers, including those instructing students with disabilities. For example, states are now required to report on the qualifications of their teachers and the progress of their students. OESE has assumed responsibility for developing policies for improving the achievement of all students and the qualifications of teachers. In addition, the office provides technical and financial assistance to states and localities, in part so they can help teachers meet the new qualification requirements. For example, in fiscal year 2003, OESE provided funding to state and local education agencies through its Improving Teacher Quality state grant program.

OSEP is responsible for providing leadership and financial resources to help states and localities implement IDEA for students with disabilities and their teachers. These responsibilities include awarding discretionary grants and contracts for projects designed to improve service provision to children with disabilities. In 2003, OSEP provided funding to 30 states through the State Improvement Grants program. OSEP also supports research on special education through centers such as the Center on Personnel Studies in Special Education.

In the 2002-2003 school year, all states required that special education teachers have a bachelor's degree and be certified to teach—two of the three NCLBA teacher qualification requirements—and half required special education teachers to demonstrate competency in core academic subjects, which is the third requirement. In the 26 states that did not require teachers to demonstrate subject matter competency, state-certified special education teachers who were assigned to instruct core academic subjects might not be positioned to meet the NCLBA requirements. In 31 states that offered alternative routes to teacher certification, certification requirements for alternative route and traditional teacher preparation program graduates followed a similar pattern, with half meeting two of the three NCLBA teacher requirements.
Every state required special education teachers to hold at least a bachelor's degree and to be certified by their states before teaching, according to our survey results and reviews of Education documents and state Web sites. States varied in whether they offered one or more types of teaching certificates for special educators. Specifically, 30 states established a single certification for special education teachers that covered kindergarten through 12th grade, according to survey respondents. The remaining 22 states offered two or more certifications. For example, some states offered different certifications for teachers of elementary, middle school, and high school students. In addition, some states certified special education teachers to serve students with specific disability categories, such as hearing impaired and emotionally disturbed, and/or with broader disability categories, such as mild, moderate, and severe special needs. Finally, several states certified their special education teachers for specific instructional roles, such as general special education teacher, resource room teacher, or collaborative teacher.

During the 2002-2003 school year, 24 states, the District of Columbia, and Puerto Rico required special education teachers, at the time of their initial certification, to demonstrate some level of competency in the core academic subjects that they wished to teach by having a degree or passing tests in those subjects. Teachers in these states are better positioned to meet NCLBA's teacher requirements. However, the level of competency required varied by state and in some cases may not meet NCLBA competency level requirements. The rest of the states did not have any such requirements. (See fig. 1.) In states that did not have these requirements, certified special education teachers who were assigned to instruct core academic subjects might not be positioned to meet the NCLBA requirements. To meet NCLBA teacher requirements, these teachers would need to demonstrate subject matter competency by the end of the 2005-2006 school year.

The extent to which special education teachers were required to meet NCLBA subject matter competency requirements depended upon their instructional roles, which could sometimes be difficult for prospective teachers to determine. Special education teachers often attained their certification prior to being hired by local school districts for specific grade levels, subjects, or instructional roles. Therefore, these individuals might not be positioned to meet NCLBA teacher requirements for their future instructional roles. Furthermore, any special education teacher who was assigned to teach a different subject from one year to the next might meet subject matter competency requirements one year but not the next. According to Education officials, these challenges are not specific to special education teachers and will require school districts to be more mindful of teacher qualifications, including subject matter mastery, when assigning teachers to various teaching roles.

According to survey respondents, 31 states provided alternative routes to certification for prospective special education teachers. States have developed such routes to meet specific teacher shortages as well as to allow professionals in related fields to become teachers. The alternative route to certification programs that we reviewed were generally administered by the state education agencies, often through institutions of higher education.
However, this was not always the case: in Maryland, for example, one county contracted with Sylvan Learning Center and the New Teacher Project to provide its alternative route to certification program. Most of the states that provided alternative routes to certification required that graduates of such programs fulfill the same certification requirements as graduates of traditional special education teacher preparation programs, such as having a bachelor’s degree and passing teacher licensing examinations. The primary difference between alternative route programs and traditional teacher preparation programs was the extent to which teaching candidates received practical teaching experience prior to attaining full state certification. In general, prospective teachers in alternative route to certification programs were required to receive more practical teaching experience before being certified than were teachers in traditional programs. For example, candidates in an alternative route to certification program in Illinois were required to complete a 1-year mentored teaching internship, while most traditional certification programs for special education teachers required teaching candidates to complete a 9- to 18-week supervised student teaching assignment. This additional teaching experience has been required because individuals in some alternative programs have not received courses in pedagogy and instructional techniques. (See app. I for state special education alternative route to certification program contact information.) State officials indicated that implementing the core academic subject competency requirements of NCLBA would be difficult and cited factors that have facilitated or impeded application of this requirement to special education teachers. State officials identified several key facilitators, including having funds available to dedicate to special educators’ professional development and having preexisting or ongoing efforts to develop subject matter competency standards for special educators. State officials and national education organizations’ representatives also cited several factors that impeded meeting the subject competency requirements, including uncertainty about how to apply the law to special education teachers in some circumstances and the need for additional assistance from Education in identifying implementation strategies. Survey respondents, as well as state officials and national education organizations’ representatives we interviewed, reported that the availability of professional development funding and the flexibility to use funds were essential in helping teachers meet the NCLBA subject matter competency requirement. For example, officials in 19 states reported helping special education teachers by allocating some of the states’ professional development money to financial aid for those seeking to enhance their knowledge in a core academic subject, such as by pursuing a degree. In addition, states can use their professional development funds to create alternative routes to certification. This could result in developing a cadre of special educators who would already have expertise in a core academic subject area. Survey respondents described several state assistance initiatives that were designed to help special education teachers meet the subject matter competency requirements.
For example, 17 survey respondents reported holding workshops for special education teachers on specific academic subjects, and a few states held review sessions to prepare teachers for states’ academic content exams. In addition, respondents from 7 states reported providing sample test questions to help teachers prepare for subject matter competency tests. Nineteen survey respondents reported that their states had established partnerships with institutions of higher education to develop and implement strategies to assist special education teachers. For example, Arkansas collaborated with state colleges and universities to develop dual-certification programs for special educators. Officials we interviewed from 2 of the 6 states said that they expected their uniform state standards of evaluation would make it easier for their experienced teachers to meet NCLBA subject matter requirements. Specifically, they asserted that these competency standards would allow states and territories to design alternative methods for evaluating teachers’ knowledge of the subject matter they teach, other than having a degree or passing subject matter tests in a core academic subject. According to officials in 2 of the 6 states we interviewed, their alternative methods of evaluating teachers’ subject matter competency would take into account both a teacher’s years of experience and factors such as participation in professional development courses. A few state officials and national education organizations’ representatives we spoke to commented that the flexibility to design alternative methods for evaluating teachers’ subject matter knowledge provided more options for making subject matter competency assessments of experienced special education teachers. State officials we interviewed and surveyed reported concern about how difficult it might be for special educators to meet the subject matter competency requirements, given that their roles may require them to teach multiple grade levels or multiple subjects. State officials told us that because of special educator shortages, special education teachers’ instructional roles might vary. For example, some special educators might not have to meet subject matter competency requirements when they were hired, but subsequently might have to meet subject matter competency requirements for one or more core academic subjects, depending upon their instructional roles. Education has issued guidance that says that teachers instructing core academic subjects must demonstrate subject matter competency. This guidance applies to all teachers, including special education teachers. However, Education officials told us that the assessment level of the student being taught was a consideration in determining the application of the NCLBA subject matter competency requirement. The inclusion of assessment levels in determining how to apply the NCLBA requirements may explain some of state officials’ uncertainty regarding the application of the requirement to special education teachers. About half of the state officials and national education organizations’ representatives we interviewed reported that states needed more assistance on how to implement NCLBA teacher requirements for their special education teachers. For example, some state officials from Oklahoma and South Dakota reported being uncertain how to apply the requirements to the unique situations in which special education teachers provide instruction.
Officials in these states reported that they were unclear whether a teacher providing instruction in core academic subjects to high school age students who are performing at the elementary level would need to meet elementary or high school level subject competency requirements. (See table 1 for examples of the application of NCLBA requirements to special educators’ instructional roles.) Officials from half the states we surveyed indicated that they did not believe the law provided enough flexibility for teachers to meet the subject competency requirements. A few state officials we interviewed, particularly those with a large percentage of rural districts, such as those in South Dakota and Arkansas, mentioned this perceived lack of flexibility as a key concern. In particular, these officials indicated that because their special education teachers often teach multiple subjects, they would have to attain multiple degrees or pass several subject matter tests to meet the subject matter competency requirement. Recent Education guidance, issued after our survey was concluded, gives teachers in small, rural school districts, including special education teachers who teach core academic subjects, more time to meet the requirements. Under this new guidance, teachers in eligible rural school districts who are highly qualified in at least one subject will have 3 years to become highly qualified in the additional subjects they teach. State officials reported concerns about their states’ ability to meet the federal timelines for implementing the NCLBA teacher requirements for special education teachers. Officials from 32 states reported that the time frames were not feasible for implementing the requirements. This included 15 states that had established subject matter competency requirements for their special education certification. However, depending on the specific state certification requirements, teachers in these states may still be required to do additional work to meet the subject matter competency requirements of NCLBA. In addition, some state officials reported that their states were not positioned to meet federal deadlines because some institutions of higher education had not aligned their programs with NCLBA requirements. For example, officials in 31 states reported that current special education teacher preparation programs hindered implementation of NCLBA requirements, primarily because these programs did not emphasize majors or concentrations in core academic subjects. Given these conditions, state officials in 3 of the 6 states we visited reported the need for additional assistance in identifying strategies to meet the timelines for meeting requirements. Education also noted that the challenge facing states is developing new mechanisms to make sure that all teachers of core academic subjects are able to demonstrate appropriate subject matter mastery. Some state officials and national education organizations’ leaders also cited concerns that special education teachers currently teaching might leave the field rather than take exams or return to school to take the courses needed to demonstrate subject matter competency. Thirty-two survey respondents expressed concern that the potential flight of special education teachers would hinder efforts to implement the requirements. Finally, state education officials reported uncertainty over how to reconcile requirements of the two laws that appear to be inconsistent and thus could impede implementation of NCLBA.
These officials reported that they were unsure as to which act—IDEA or NCLBA—should take precedence in establishing personnel requirements for special education teachers. For example, under IDEA, a student’s IEP could require that he be taught mathematics at a functional level 3 years below his chronological age, and under IDEA a certified special education teacher would be qualified to provide this instruction. However, under NCLBA, a teacher might not be qualified to instruct this student without first demonstrating subject matter competency in mathematics. According to Education officials, the requirements would depend in part on the assessment level of the students being taught. At the same time, Education officials noted that NCLBA teacher requirements apply to all teachers, including special education teachers. As a result of this uncertainty, some of the state special education officials we interviewed and surveyed said that they had decided to wait for further guidance or assistance before beginning to implement any NCLBA requirements for special education teachers. Education officials reported that they were aware that some states had expressed uncertainty about how to implement NCLBA’s teacher requirements. Moreover, Education officials noted that states that wait for further guidance could hinder their special education teachers’ ability to meet the subject matter competency requirements by the end of the 2005-2006 school year. Education has provided a range of assistance, such as site visits, Web-based guidance, and financial assistance, to help states implement the highly qualified teacher requirements. However, department coordination related to the implementation of NCLBA’s teacher requirements for special education teachers has been limited. OESE has taken the lead in providing this guidance, with support from offices such as the Office of General Counsel and the Office of the Secretary. OSEP played a limited role in these efforts. Further, departmental coordination among Education’s offices was limited with respect to OSEP’s involvement in other key teacher quality initiatives. Because of this, Education may not have been in a position to be fully apprised of how special education concerns could affect implementation of the NCLBA teacher requirements. However, Education officials told us that they included OSEP in these efforts by contacting OSEP staff to clarify IDEA substantive issues. Further, Education officials told us they have recently added OSEP to the department’s teacher quality policy team. However, Education currently does not have plans to develop written policies and procedures for coordination among its offices. According to Education officials, OESE took the lead in providing assistance to states concerning the NCLBA teacher requirements, with some support provided by offices including OSEP, the Office of the Secretary, the Office of the Undersecretary, the Assistant Secretary of Elementary and Secondary Education, and the Office of General Counsel. One of OESE’s key efforts to provide technical assistance to states was the Teacher Assistance Corps initiative, which sent teams of experts to states to provide clarification and guidance on implementing NCLBA teacher requirements. According to Education, these teams have been responsible for sharing promising strategies, providing advice on compliance issues, and assisting state officials in setting and meeting teacher quality goals.
The teams have also gathered feedback from states on their concerns about implementing the teacher requirements. Team members have included lead officials from OESE and general counsel, individuals with expertise on issues of concern to particular states, higher education representatives, and education officials from that state. Education officials told us that OSEP staff did not participate in these visits, but two state officials with expertise in special education participated in some visits. OESE also offered states other types of assistance. OESE created a teacher quality newsletter, and the Office of the Under Secretary created and then updated the No Child Left Behind Toolkit for Teachers booklet to help teachers understand the law in general and the highly qualified teacher requirements, and to explain which teachers need to meet the NCLBA requirements. However, while the toolkit provided detailed information pertaining to general education teachers, it provided limited information for special education teachers. According to OESE officials, the office had also been developing a Web site on promising practices for implementing the NCLBA teacher quality requirements and had plans to feature special education on the site. However, at the time of our interviews, OESE did not have a timeline for when this Web site would be available. Finally, OESE also provided financial assistance to states through Improving Teacher Quality state grants; states could use this financial assistance to help special education teachers meet NCLBA teacher requirements. The enactment of NCLBA significantly changed the expectations for all students and their teachers in the nation’s schools and increased the need for OESE and OSEP to coordinate their efforts. NCLBA covers, to a greater extent than previous educational legislation did, the groups that have historically been the primary responsibility of OSEP—students with disabilities and their teachers. Moreover, NCLBA established qualifications for all teachers, including special education teachers, who provide instruction in core academic subjects such as English, language arts, mathematics, and science. As state education officials began implementing NCLBA subject matter competency requirements, they sought guidance from OSEP, their primary source of information on special education issues. However, OSEP officials told us that they had generally referred these officials to OESE or to the NCLBA Web site. OSEP officials told us that they were waiting until IDEA was reauthorized to develop their own guidance on special education teacher quality requirements. However, during this time NCLBA requirements applied to special educators teaching core academic subjects, and several state officials told us they needed clarification of the guidance on these requirements. Coordination between OSEP and OESE has generally been limited. For example, OSEP commented on the teacher quality policies and initiatives that OESE developed, but generally was not involved in the initial development of these policies. Education officials told us that OSEP was included in the implementation of the teacher requirements, noting that they contacted this office to clarify IDEA substantive issues and that OSEP officials reviewed NCLBA guidance. OSEP did not participate in OESE’s Teacher Assistance Corps visits to states and generally was not involved in the analysis of the information that was collected from these visits.
OESE officials told us that they did not believe that states would benefit from OSEP’s participation in these visits, because the focus of the visits was on meeting the NCLBA requirements, not IDEA requirements. In addition, Education told us that there were no written policies or procedures to assist OESE and OSEP in coordinating the development and implementation of the department’s teacher quality policies for special education teachers. Finally, these officials did not indicate that Education was planning to develop such policies. In March 2003, Education formed a teacher quality policy team under the auspices of the Office of the Under Secretary that included other key offices in Education, such as the Office of the Secretary, the Office of General Counsel, and OESE. This team, run by OESE, has focused on NCLBA implementation related to teacher qualifications, and special education teacher issues have been among the topics most frequently discussed. OSEP was not a member of this team until April 2004, when Education officials told us that OSEP had become a part of the team. NCLBA is a complex law with new requirements that hold states, districts, and schools accountable for ensuring that their teachers meet specific qualifications. Further, the law applies to all teachers, including special education teachers, resulting in states and districts having to reassess how they certify and assign special education teachers, as well as provide professional development geared toward helping teachers meet requirements. State officials reported the need for assistance on how to meet NCLBA requirements, with Education also noting the need for states to have more information on strategies to meet requirements. Because half of the states do not have subject matter competency requirements as part of special education certification, these states in particular are challenged with developing strategies to help their teachers meet NCLBA requirements. Without additional assistance on such strategies, special education teachers may not be positioned to meet requirements by the end of the 2005-2006 school year. In addition, several state education officials cited the need for additional clarification on the application of the NCLBA subject matter competency requirement to special education teachers in special circumstances, for example, those providing instruction to high school age students who are performing at the elementary level. Without additional assistance from Education to resolve state concerns related to special education teacher qualification issues, some states might not be able to determine how to focus their resources to ensure that their teachers meet the act’s requirements. NCLBA covers, to a greater extent than previous elementary and secondary education acts did, the groups that have historically been the primary responsibility of OSEP—students with disabilities and their teachers. OESE has assumed primary responsibility for implementing NCLBA, including provisions applying to special education teachers. OESE has generally not relied on OSEP staff or information produced by OSEP to develop policy or guidance. Consequently, OESE may not have fully benefited from OSEP’s expertise to inform its NCLBA discussions on policies and guidance related to special education teacher issues and requirements.
Although Education has recently added OSEP to its NCLBA teacher quality policy team, overall NCLBA coordination efforts among Education offices have not been formalized in writing to ensure appropriate and continuing involvement of these offices. As a result, the department may not fully address states’ needs for information and assistance on the implementation of NCLBA requirements for special education teachers. To better address states’ concerns about their special education teachers being positioned to meet NCLBA teacher requirements, we recommend that the Secretary of Education provide additional assistance to states on strategies to meet the requirements and clarification of subject matter competency requirements for special education teachers. To continue to improve policy development and technical assistance that Education’s offices provide to states on NCLBA requirements, we recommend that Education formalize in writing coordination efforts between OESE and OSEP. For example, such efforts could include defining how OSEP’s expertise and staff would be involved in developing NCLBA policies and guidance related to special education teachers and in providing technical assistance to states. We provided a draft of this report to Education for review and comment. In their comments, Education officials noted that they believed their guidance was clear but recognized that states were still struggling to identify strategies to meet requirements. Education officials provided new information in their comments on the draft that indicated improved coordination among those Education offices that are involved in NCLBA policy development and guidance. Consequently, we modified the report on both these topics to reflect Education’s comments. Education officials also provided technical comments that we incorporated into the report where appropriate. Education’s comments are reproduced in appendix II. Given the difficulties states are experiencing in implementing the law and the level of uncertainty reported by state officials, we believe that Education needs to provide additional assistance to help states implement the requirements. In Education’s comments, the department noted that states were having difficulty implementing NCLBA teacher requirements. Education officials highlighted assistance they provided and their willingness to provide additional technical assistance, depending on what states need. We believe Education could help states by identifying strategies to meet requirements, especially for those states without subject matter competency requirements for their special education teachers. In addition, Education noted in its comments that guidance on how to apply the NCLBA subject matter competency requirement for special education teachers instructing high school age students functioning at elementary school levels was not different from guidance for all teachers. However, Education officials have also said that the assessment level of a student could be considered in determining how to apply the NCLBA teacher requirements. We encourage Education to provide assistance to explain the requirements, particularly as they relate to unusual circumstances involving varying student assessment levels. We have modified the report to reflect Education’s comments. We continue to believe that improved coordination is needed. However, we modified the report to reflect Education’s recent addition of OSEP to its teacher quality policy team.
We acknowledge Education’s effort in this regard and encourage the department to formalize its coordination policies by putting them in writing. We believe that formalizing coordination efforts will ensure that the different offices continue to be involved in developing NCLBA policies and guidance related to special education teachers. Copies of this report are being sent to the Secretary of Education, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me on (202) 512-7215 if you or your staff have any questions about this report. Other contacts and major contributors are listed in appendix III. In addition to those named above, Emily Leventhal, Benjamin Howe, Ron La Due Lake, Luann Moy, Jean McSween, Bob DeRoy, Bryon Gordon, Behn Kelly, and Amy Buck made key contributions to the report.
Federal agencies’ management responsibilities for their financial statements include, among other things, preparing the financial statements in conformity with GAAP and establishing and maintaining internal controls over financial reporting. Auditors of these financial statements are required to plan and perform their audits to obtain reasonable assurance about whether the financial statements are free of material misstatement. While restatements to previously issued financial statements can happen and may not be surprising given weaknesses in the financial reporting environment at many federal agencies, restatements inherently raise questions about the reliability of the other information in previously issued financial statements. In addition, frequent restatements to correct misstatements can undermine public trust and confidence in both the entity and all responsible parties. Adequate transparency and timely notification of restatements are essential to help preclude users of agencies’ financial statements and the related audit reports from inadvertently relying on inaccurate information and to allow them to make more informed and relevant decisions. According to SFFAC No. 1, the primary intended users of federal agencies’ financial reports are citizens, the Congress, federal executives, and federal program managers. Each of these groups may use federal agencies’ financial statements to satisfy their specific needs. Citizens are interested in many aspects of the federal government, especially those federal programs that affect their well-being. The Congress uses the agencies’ financial statements to monitor and evaluate the efficiency and effectiveness of federal programs. Federal executives, such as central agency officials at OMB and the Department of the Treasury (Treasury), use the federal agencies’ financial statements to oversee government spending. Specifically, OMB assists the President in overseeing the preparation of the federal budget by formulating the President’s spending plans, evaluating the effectiveness of agency programs, assessing competing funding demands among agencies, and setting funding priorities. Treasury assists the President in managing the finances of the federal government and prepares the CFS, which is based on audited financial statements prepared by federal agencies. GAO uses the agencies’ financial statements and the work of their respective auditors during its annual audit of the CFS. Federal program managers also use agencies’ financial statements as a tool for managing their respective agencies’ operations within the limits of the spending authority granted by the Congress. The objectives of our review were to determine the transparency and timeliness of the restatement disclosures by the nine CFO Act agencies’ management and their respective auditors. For the nine agencies we reviewed, we interviewed the preparers and auditors of the agencies’ fiscal year 2003 financial statements, including staff from the agencies’ Offices of Inspector General (OIG), and we obtained and reviewed relevant audit documentation. Because the OIGs typically contracted with various independent public accountants (IPA) to audit the agencies’ financial statements, we expanded our contacts to include such IPAs. Our work was not designed to test, and we did not test, the accuracy or appropriateness of the restatements.
In addition, our review did not include restatements reported in fiscal year 2005 financial statements since such financial statements were issued during November 2005, one month after the completion of our fieldwork. With respect to the two key areas, we reviewed the nine agencies’ fiscal years 2004 and 2003 comparative financial statements and the related audit reports to determine, among other things, whether
- the appropriate columns of the agencies’ restated financial statements were labeled “Restated”;
- the fiscal year 2003 ending balance agreed with the fiscal year 2004 beginning balance on the agencies’ Statement of Changes in Net Position, if restated;
- the agencies’ restatement footnotes were properly labeled;
- the agencies asserted in their MD&A that they had received a consecutive number of clean audit opinions, and if so, whether they disclosed that certain of their previously issued financial statements were subsequently restated to correct for a material misstatement;
- the audit reports referred the reader to the agencies’ restatement footnote;
- the agencies timely notified their auditors and users of their financial statements of the material misstatement and plans for correcting the misstatement in the financial statements; and
- the auditors were aware of a material misstatement to previously issued financial statements prior to the beginning of the fourth quarter of the following fiscal year and whether the amount and effect were known, and if so, whether the auditors advised the agencies’ management to reissue the financial statements.
For this capping report, which is based on our review of the nine federal agencies that reported restatements in fiscal year 2004 financial statements, we considered certain accounting and auditing standards that were applicable to fiscal year 2004 federal financial reporting as well as accounting standards that were issued subsequent to fiscal year 2004. These standards consist of the Federal Accounting Standards Advisory Board’s (FASAB) Statement of Federal Financial Accounting Standards (SFFAS) No. 15, Management’s Discussion and Analysis; SFFAS No. 21, Reporting Corrections of Errors and Changes in Accounting Principles; FAS No. 16, Prior Period Adjustments; FAS No. 154, Accounting Changes and Error Corrections; and the AICPA’s Codification of Auditing Standards, AU section 110, Responsibilities and Functions of the Independent Auditor; AU section 420, Consistency of Application of Generally Accepted Accounting Principles; AU section 508, Reports on Audited Financial Statements; and AU section 561, Subsequent Discovery of Facts Existing at the Date of the Auditor’s Report. We also considered the following OMB guidance: OMB Bulletins No. 06-03 and No. 01-02, Audit Requirements for Federal Financial Statements; OMB Bulletin No. 01-09, Form and Content of Agency Financial Statements; and OMB Circular No. A-136, Financial Reporting Requirements. We performed our detailed review and analysis of the fiscal year 2003 restatements reported in agencies’ fiscal year 2004 financial statements from December 2004 to October 2005. Between September 2005 and January 2006, we issued reports to five of the nine CFO Act agencies that had received unqualified audit opinions on, but subsequently restated in fiscal year 2004, their originally issued fiscal year 2003 financial statements.
In conjunction with our fiscal year 2005 CFS audit, we identified continued restatements of previously issued agency financial statements and the need for additional guidance to agencies and their auditors governmentwide. Our work was performed in accordance with GAGAS. We requested comments on a draft of this report from the Director of OMB or his designee. OMB provided oral comments, which are discussed in the Agency Comments and Our Evaluation section of this report. During our review of the nine CFO Act agencies’ restatements reported in fiscal year 2004, we identified issues with the disclosures made by those agencies and their respective auditors regarding the restatements. The primary contributing factor for these disclosure issues was insufficient guidance available at the time to both the agencies’ management and their auditors for disclosing the restatements. Although the available guidance did not provide explicit details for disclosing restatements, we believe that information regarding restatements should be disclosed in a transparent and timely manner consistent with the qualitative characteristics of information in financial reports described in SFFAC No. 1. In our view, more detailed accounting and auditing guidance on how to satisfy the financial reporting characteristics in SFFAC No. 1 as it relates to the disclosure of restatements would have been helpful. Regardless, as discussed later in the report, several agencies included information in their restatement disclosures that improved the transparency of the restatement. Given the issues we identified in our review of restatements reported in fiscal year 2004 financial statements, we believe it would be appropriate to offer more explicit or detailed guidance for how agency management and their respective auditors should disclose restatements. Specifically, although SFFAS No. 21 required that the nature of an error in previously issued financial statements and the effect of its correction on relevant balances be disclosed, the standard did not provide a detailed explanation of the type of information that should be disclosed or what the nature of an error means. OMB Bulletin No. 01-09, which specifies the form and content for federal financial statements, also did not provide specific guidance on how an agency’s management should disclose restatement information in its financial statements. As for the auditor’s disclosure of the agency’s restatements in its audit report, AU section 561 only stated that the audit report usually should refer to the note to the financial statements that describes the restatement. Thus, if for no other reason than to avoid interpretation issues as to how much disclosure is appropriate and in what form, we believe that guidance to agency auditors should be enhanced to attain a more uniform treatment of restatement disclosures. We identified the following four issues related to the agencies’ reporting of the restatements. While guidance available during fiscal year 2004 did not expressly require agencies to label the columns of restated financial statements as “Restated,” seven of the nine agencies labeled their financial statements as such. Such labeling is a common practice in reporting restated financial statements. Two of the nine agencies did not label their financial statements as “Restated,” and as a result, users of such statements may be unaware that a restatement occurred. OMB Circular No.
A-136 was revised during fiscal year 2005 to provide additional guidance for disclosing restatements; however, it does not require agencies to label their financial statements as “Restated.” In our view, revising OMB Circular No. A-136 to require agencies to label the columns of the restated financial statements as “Restated” would make the existence of restated financial statements more evident to the readers of the financial statements. We also found issues regarding certain agencies’ restated Statement of Changes in Net Position. Of the six agencies that restated their originally issued fiscal year 2003 Statements of Changes in Net Position to correct for material misstatements, two presented the restatements in a way that could be misinterpreted because the fiscal year 2004 beginning balances did not agree with the restated fiscal year 2003 ending balances. Instead of carrying forward the restated fiscal year 2003 ending balance to the fiscal year 2004 beginning balance, these two agencies made prior period adjustments to the fiscal year 2004 beginning balances to reflect the restated fiscal year 2003 ending balances. We believe that a clearer presentation on the agencies’ fiscal years 2004 and 2003 comparative Statement of Changes in Net Position would have been to carry forward the restated fiscal year 2003 ending balances and present them as the fiscal year 2004 beginning balances instead of presenting prior period adjustments in the fiscal year 2004 column. Although authoritative guidance available during fiscal year 2004 did not expressly prohibit agencies from reflecting prior year restatements as adjustments to the current year’s beginning balances on the Statement of Changes in Net Position, we found that the other four agencies’ restated fiscal year 2003 ending balances agreed with the fiscal year 2004 beginning balances on their Statement of Changes in Net Position. The current version of OMB Circular No. A-136 includes guidance from SFFAS No. 21, which states that the adjustment should be made to the beginning balance of cumulative results of operations, in the Statement of Changes in Net Position for the earliest period presented. In our view, OMB Circular No. A-136 would be enhanced if it explicitly stated that the current year unadjusted beginning balances on the Statement of Changes in Net Position are to agree with the restated ending balances on the prior year’s statement (i.e., that adjustments are to be made only to the prior year and carried forward as restated). To illustrate with hypothetical figures of our own, if an agency restated its fiscal year 2003 ending net position from $100 million to $90 million, the $90 million restated balance would be carried forward and presented as the fiscal year 2004 beginning balance, rather than presenting a $10 million prior period adjustment in the fiscal year 2004 column. In our view, all nine of the agencies’ restatement footnotes lacked sufficient clarity or sufficient detail regarding the restatements in at least one of the following two areas: (1) the title of the footnote or (2) the content of the footnote. For five agencies, the title of the restatement footnote did not reflect the existence of a restatement. Specifically, three agencies titled their restatement footnotes as either “Prior Period Adjustments” or “Prior Period Reclassification,” which could be misinterpreted since the changes to the financial statements represented restatements because of material misstatements rather than prior period adjustments or prior period reclassifications. The other two agencies did not include separate footnotes disclosing the restatement information. Instead, one agency provided the restatement information under its “Significant Accounting Policies” note and the other included it under its “Statement of Changes in Net Position” and its “Statement of Budgetary Resources” notes.
The remaining four agencies appropriately titled their restatement disclosures, entitling the footnote “Restatement.” With respect to restatement footnote content, five agencies clearly explained the misstatement and the reason for the restatement, while the other four agencies did not. Accordingly, it was not clear whether these four agencies’ misstatements were attributed to errors in recognition, measurement, presentation, or disclosure in financial statements resulting from mathematical mistakes, mistakes in the application of GAAP, or oversight or misuse of facts that existed at the time the financial statements were prepared. In addition, one of the nine agencies did not disclose the specific year(s) being restated, while two other agencies did not disclose all of the financial statements impacted by the restatements. Further, we believe some additional information should be included in restatement footnotes. Specifically, in our view, a sufficient restatement footnote would also include (1) the specific amount(s) of the material misstatement(s) and the related effect(s) on the previously issued financial statement(s) (e.g., the year(s) being restated and the specific financial statement(s) affected and line items restated); (2) the overall impact the restatement has on the current year financial statements (e.g., the change in overall net position, change in the audit opinion); and (3) a discussion of the corrective actions taken by the agency’s management. Although six agencies appropriately disclosed the amounts being restated, the remaining three did not disclose the specific line items restated and the related amounts. In addition, five agencies did not disclose the effect of the restatement on the financial statements as a whole. Further, none of the nine agencies’ restatement footnotes discussed the actions taken by the agency’s management after discovering the misstatement, such as measures taken to better prevent similar misstatements from occurring in the future (e.g., improvements in internal controls). Authoritative guidance available during fiscal year 2004 did not provide explicit guidance to the agencies as to what information should be included in the agencies’ footnotes or how the restatement note should be titled. Revisions made to OMB Circular No. A-136 address a number of these areas. Specifically, OMB Circular No. A-136 now requires agencies to provide restatement information in a separate note entitled “Restatements.” In addition, regarding content of the note, the revised circular calls for the following information to be included in the note: the nature of the error and the reason for the restatement, the year(s) being restated, which financial statements are impacted, the amounts being restated, and the effect of the restatement on the financial statements as a whole (i.e., change in overall net position, change in audit opinion, etc.). Further, per the revised OMB Circular No. A-136, agencies should discuss the actions management took after discovering the error. The additional requirements in OMB Circular No. A-136 address many of our concerns with the transparency of the restatement footnote. We did, though, identify three areas where OMB Circular No. A-136 could further enhance transparency. The first is to clarify that when agencies disclose the amounts being restated, it is important that they also disclose the specific line items restated and the related amounts.
In our view, this additional information will allow the readers of the restated financial statements to more clearly see how the restatement affected such statements. The second is to define the meaning of the “nature” of an error. The third is to explicitly state what type of information should be provided when discussing the actions management took after discovering the error. We also found that certain agencies’ presentation of restatements in their MD&A could be misleading. Seven of the nine agencies we reviewed stated in their fiscal year 2004 MD&A that they had achieved a consecutive number of unqualified opinions on their respective financial statements. However, six did not acknowledge that one or more of these financial statements had been restated in the intervening years to correct for material misstatements. We believe stating that there have been consecutive years of unqualified audit opinions without the appropriate context could be misleading to the reader of the financial statements. It erroneously conveys an impression of consistent, accurate financial reporting over a period of time, when in fact this was not the case, because the financial statements and the related opinions were subsequently found to be incorrect. According to SFFAS No. 15, Management’s Discussion and Analysis, management should have great discretion regarding what to say in its MD&A. At the same time, the standard also states that the pervasive requirement is that the MD&A not be misleading. In our view, it is misleading for an agency to state in its MD&A that it has received a consecutive number of unqualified opinions on its financial statements when one or more of its financial statements within that time frame were subsequently restated. In our view, agencies having restated their financial statements should either refrain from such claims or clearly disclose in their MD&A which of the agency’s prior year financial statements, as originally issued, were materially misstated and subsequently restated. Although standards do not specifically state that agencies shall disclose restatement information in their MD&A, we found that one of the seven agencies did state that it had received a clean audit opinion for 7 consecutive years but appropriately disclosed that its fiscal year 2003 financial statements were restated to correct misstatements. During our review, we found issues regarding how agency auditors disclosed the agencies’ restatements in their audit reports. According to AU section 561, the restatement footnote in the agency’s financial statements “usually should” be referred to in the audit report, but given that latitude, such disclosure is not an across-the-board requirement. In any report on financial statements, the auditor has the discretion to add a separate paragraph to the audit report to emphasize a matter regarding the financial statements. In our view, such matters include the effect of the material misstatements on previously audited financial statements and the accompanying audit report. Also, we believe that if the agency’s restatement footnote does not provide a clear and adequate description of the restatement, then the auditor should go beyond merely referencing the restatement footnote and add a separate paragraph to the audit report that provides additional details regarding the effects of the restatement, and should consider whether it is necessary to modify the audit opinion.
In our view, none of the nine agencies’ audit reports we reviewed sufficiently disclosed all the essential information that would clearly explain the restatement. Specifically, we found that
- seven of the nine audit reports did not provide a statement that the previously issued audit report was withdrawn and replaced by the opinion on the restated financial statements;
- three of the nine audit reports either did not disclose the restatement or did not include a reference to the agency’s restatement footnote in the financial statements; and
- none of the nine agencies provided a sufficient description of the restatement (i.e., the nature and cause of the misstatement, the year(s) being restated, the financial statements and line items impacted, the specific amount(s) of the material misstatement(s) and the related effects on the previously issued financial statements, and the actions management took after discovering the misstatement) in the notes to their financial statements, and none of these agencies’ auditors compensated for this omission by providing such information in their audit reports.
In our view, the auditor plays an important role in ensuring proper disclosure of restatements. Accordingly, if any of the prior year financial statements are restated and management did not already provide a sufficient description of the restatement in the note(s) to the financial statements, the audit report should include such information. In addition, although none of the nine agencies’ auditors disclosed misstatements of unknown amounts, we believe that if, at the time of issuance of the audit report, a material misstatement or potential material misstatement has been identified in any of the prior years’ financial statements but the specific amount of the misstatement and its related effects are not yet known, it is important for the auditor to disclose the situation in its audit report and modify its opinion or disclaim an opinion on the prior year financial statements as appropriate. We also identified issues with the timeliness of management and auditor communication regarding material misstatements affecting certain agencies’ previously issued financial statements. We attributed these issues to a combination of the lack of specific guidance at that time for both agencies’ management and their respective auditors and the lack of compliance with the related accounting and auditing standards that were in effect during fiscal year 2004. During fiscal year 2004, neither OMB Bulletin No. 01-09 nor SFFAS No. 21, which apply primarily to agency management, provided specific guidance on the timely investigation and reporting of a material misstatement or potential material misstatement in a previously issued financial statement following its discovery. The guidance available to auditors at that time was AU section 561 and OMB Bulletin No. 01-02, which provided guidance to the agencies’ auditors regarding the timely communication of restatements for corrections of misstatements. While this audit guidance conveyed the intent of timely communication, it did not provide much guidance for how the agencies’ management or their respective auditors should timely communicate such restatements. OMB Bulletin No. 01-02 stated that there shall be open and timely communication throughout the audit process between the agencies’ management and their auditors, which includes potential audit findings, materially misstated or unsupported amounts in the financial statements, and material weaknesses in internal control.
The bulletin did not provide guidance on what the auditor should communicate to management about when and how an actual or potential restatement should be disclosed to the users of the agency’s financial statements. With respect to the auditor’s responsibilities for timely communication, AU section 561 states that consideration should be given to, among other things, the “time elapsed” since the erroneous financial statements were issued. According to AU section 561, when the auditor has concluded that action should be taken to prevent future reliance on the audit report, the auditor should advise the auditee to make appropriate disclosure of the newly discovered facts and their impact on the financial statements to persons who are known to be currently relying or who are likely to rely on the financial statements and the related auditor’s report. AU section 561 also states that if an auditor determines that issuance of financial statements accompanied by the audit report for a subsequent period is “imminent,” appropriate disclosures can be made in such statements rather than by separately issuing the restated financial statements. However, the guidance available during fiscal year 2004, AU section 561 and OMB Bulletin No. 01-02, did not define “time elapsed” or “imminent.” In addition, existing standards and guidance do not provide sufficiently explicit or detailed guidance to management and auditors for ensuring the timely disclosure of material misstatements affecting previously issued financial statements. In our view, none of the agencies timely communicated to either their auditors or the users of their financial statements that a potential material misstatement had been identified. Agency management is responsible for reporting key information in a timely manner, including timely notification of known material or potential material misstatements in previously issued financial statements. During fiscal year 2004, OMB Bulletin No. 01-02 called for communication between the agencies’ management and their auditors, but it did not provide details for disclosing restatements to users of agency financial statements. In addition, as noted above, neither OMB Bulletin No. 01-09 nor SFFAS No. 21 provided specific guidance on the timely investigation and reporting of a material misstatement or potential material misstatement in a previously issued financial statement following its discovery. Our review of the nine CFO Act agencies that restated certain of their fiscal year 2003 financial statements found that three of these agencies identified potential material misstatements prior to the beginning of the fourth quarter of fiscal year 2004 and, in our view, did not timely communicate that a potential misstatement had been identified either to their auditors or to the users of their financial statements. The remaining six agencies identified potential material misstatements after the third quarter of fiscal year 2004 but before that year’s comparative financial statements were issued. These six agencies, in our view, also did not timely communicate the potential material misstatements to the users of their financial statements since they did not notify the users prior to the issuance of the restated fiscal year 2003 financial statements during fiscal year 2004. We believe that the current version of OMB Circular No. A-136, if properly implemented, should address many of our concerns regarding agencies’ timely communication of restatements.
For example, according to this version of OMB Circular No. A-136, “management shall assume responsibility for any false or misleading information in the financial statements, or omissions that render information made in the financial statements misleading. As such, as soon as possible after errors are detected, management shall notify their auditors and inform their primary users of their financial statements of the error and plans for correcting it in the financial statements … it is imperative that management work with their auditor as soon as the error is detected to assist the auditor in any actions that need to be taken.” These are important advances. We do, though, have some remaining concerns regarding the adequacy of guidance to agencies relating to the timely communication of material misstatements in previously issued financial statements. Specifically, we believe it is important that subsequently discovered material misstatements and potential material misstatements be disclosed to OMB in the agencies’ quarterly financial statements for the reporting period in which the misstatements were discovered. In our view, agencies need to report on the misstatement within a reasonable time period following the discovery of the misstatement. In particular, if the specific amount of a material misstatement and the related effect of such on the previously issued financial statements are known and the issuance of the subsequent period audited financial statements is not imminent, then we believe it is important that the agencies promptly (1) reissue the most recently issued fiscal year financial statements before issuing the current year’s financial statements and (2) communicate the reissuance (a) in writing to the Congress, OMB, Treasury, and GAO; and (b) to the public on the Internet pages where the agencies’ audited financial statements that were affected by the material misstatements were published. If a material misstatement is identified when issuance of the subsequent period audited financial statements is imminent, we believe it is important that the agencies (1) issue restated financial statements as part of the current year’s comparative financial statements and (2) communicate the restatement (a) in writing to the Congress, OMB, Treasury, and GAO; and (b) to the public on the same Internet page where the agencies’ audited financial statements that were affected by the material misstatements were published. Further, in our view, if at any time an agency identifies a material misstatement or potential material misstatement and the effects of it on the financial statements are not known or cannot be determined without a prolonged investigation, the agency should timely notify persons who are known to be relying, or who are likely to rely, on the previously issued financial statements and the related audit report that (1) the previously issued financial statements will or may be restated and that, therefore, (2) the related audit report is no longer reliable. It is important that the agencies include the Congress, OMB, Treasury, and GAO in any such notification and notify the public by posting such notification on the same Internet pages where the agencies’ previously issued financial statements that were affected by the material or potential material misstatement were published. An example of how to appropriately convey this type of information is State’s communication and disclosure of a potential material misstatement in its fiscal year 2004 financial statements.
Specifically, State identified a potential material misstatement during the fourth quarter of fiscal year 2005 and, in our view, went beyond the then existing guidance and appropriately disclosed the problem. Guidance available to agency management at that time, SFFAS No. 21 and OMB Bulletin No. 01-09, did not provide explicit guidance for timely communication of a restatement to users of financial statements. State also complied with the revised OMB Circular No. A-136, issued August 23, 2005, even though it was not effective until the end of fiscal year 2005. Specifically, during September 2005, State's CFO timely notified external parties, including GAO, to which State's fiscal year 2004 comparative financial statements were directly distributed, not to rely on its fiscal year 2004 comparative financial statements and the related audit report. The notification also stated that State was committed to resolving this issue as quickly as possible. State also notified the public of this issue in the form of a cautionary note on the same Internet pages where the agency's audited financial statements that were affected by the material misstatement were published. State's cautionary note included a statement that such actions were necessary because State recently became aware of a potential material misstatement affecting the previously issued financial statements and the related audit report. State noted that due to the need for a complete and thorough analysis, the complexity of the matters involved, and the accelerated financial reporting requirements, State was unable to satisfy the independent auditors by November 15 as to the amount of the potential material misstatement. As a result, the independent auditors issued a qualified opinion on the fiscal year 2005 and 2004 financial statements. State's CFO also sent a subsequent letter to GAO on December 23, 2005, to inform us that the independent auditors had satisfied themselves about the amounts presented and that the auditors had updated their opinion on the fiscal year 2005 and 2004 financial statements from a qualified to an unqualified opinion. State was fully transparent in all respects in reporting the restatement and demonstrated how such disclosures should be made. As such, in our view, State's actions serve as a model for full and timely notification of a potential material misstatement found in a previously issued financial statement when identified after the third quarter of the current fiscal year and before the current year's comparative financial statements are issued. Auditors play a critical role in helping to ensure that users of financial statements are timely notified of material misstatements affecting previously issued financial statements and of any resulting restatements. AU section 561, guidance available during fiscal year 2004, requires that auditors consider the "time elapsed" since the financial statements were issued when a material misstatement is discovered. According to AU section 561, if an auditor determines that issuance of financial statements accompanied by the audit report for a subsequent period is "imminent," appropriate disclosures can be made in such statements rather than in reissued statements. However, neither "time elapsed" nor "imminent" is defined in AU section 561.
We found that at least one of the auditors of the three agencies that had discovered misstatements prior to the fourth quarter of the current fiscal year did not advise its respective agency to make such a disclosure because (1) in May 2004 the auditor did not think that there were any users who would still be relying on the fiscal year 2003 financial statements and the related audit report and (2) the auditor considered issuance of the fiscal years 2004 and 2003 comparative financial statements to be imminent. We have concerns that, without notification, anyone who may have been relying on the fiscal year 2003 financial statements would not have known for more than 5 months that the agency's originally issued financial statements, which received an unqualified opinion, were materially misstated and should not be relied on. In our view, existing auditing standards and guidance, including OMB Bulletin No. 06-03, while conveying the need for appropriate notifications, do not provide sufficiently explicit or detailed guidance to auditors for ensuring the timely disclosure of material misstatements affecting previously issued financial statements. Our position is that when a material misstatement or potential material misstatement affecting previously issued financial statements and the related audit report is identified, the auditor has a responsibility to advise the agency's management to timely notify users such as the Congress, OMB, Treasury, and GAO in writing, as well as the public, and to clearly disclose the situation to them. If the agency's management does not timely provide adequate disclosure to the relevant users, the auditor has the responsibility to do so. We believe that it is also important that the auditor advise the agency's management of its responsibility to determine the specific amount of the material misstatement or potential material misstatement and the related effects of such on the previously issued financial statements as soon as reasonably possible. In those cases where the specific amount of the material misstatement and the related effects of such on a previously issued financial statement are known and issuance of the subsequent period audited financial statements is not imminent, we believe that the auditor would also need to advise the agency's management to promptly reissue the most recently issued fiscal year financial statements before issuing the current fiscal year financial statements and to communicate the reissuance to relevant users in writing, as well as the public, to clearly disclose the situation to them. If the agency's management does not reissue the financial statements or communicate the reissuance as required, our position is that the auditor has the responsibility to notify the Congress, OMB, Treasury, and GAO in writing as well as any other users known to be relying on the previously issued financial statements. If the specific amount of the material misstatement and the related effects of such on a previously issued financial statement are known and issuance of the subsequent period audited financial statements is imminent, it is important that the auditor advise the agency's management to issue restated financial statements as part of the current year comparative financial statements and disclose restatements in the audit report.
If the specific amount of the misstatement and the related effect of such on a previously issued financial statement remain unknown when the current year financial statements are issued, it is necessary that the auditor disclose such information when issuing the audit report and modify or disclaim the opinion on the previously issued financial statements as appropriate. The issues we identified regarding the transparency and timeliness of restatement disclosures primarily resulted from insufficient guidance available during fiscal year 2004 to both the agencies' management and their respective auditors for disclosure of the restatements and the timeliness of such disclosures. It will be important that those agencies needing, in the future, to restate their prior year financial statements ensure the adequacy of the disclosure and presentation of such restatements as well as timely notify users known to be relying on the previously issued financial statements. It will also be important that the agencies' financial statements and the related audit reports provide sufficient detail so that the reader will be able to gain at least a basic understanding of why the agencies needed to restate their previously issued financial statements and the effects of such restatements. The revision of OMB Circular No. A-136 during fiscal year 2005 addressed many of our concerns regarding the agencies' disclosure of restatements; however, additional guidance is still needed. In this regard, we are making recommendations for further revisions to OMB Circular No. A-136 as well as OMB Bulletin No. 06-03. We have provided our views, as outlined in appendix I, on how OMB guidance could be further enhanced to ensure that future restatement disclosures are uniform and more transparent. We recommend that the Director of OMB direct the Controller of OMB's Office of Federal Financial Management to incorporate the restatement guidance and requirements, as detailed in appendix I, into Circular No. A-136 to assist OMB in addressing the issues we found with the agencies' restatement disclosures and the timeliness of such disclosures. Appendix I incorporates seven recommendations as specific changes to Circular No. A-136 that focus on the timely disclosure by agency management of material misstatement(s) or potential material misstatement(s) and the related effect(s) of such in the previously issued financial statements and presentation and disclosure of restatements in the agencies' MD&A and financial statements and related footnotes. We also recommend that the Director of OMB direct the Controller of OMB's Office of Federal Financial Management to incorporate the restatement guidance and requirements, as detailed in appendix I, into Bulletin No. 06-03 to assist OMB in addressing the issues we found with auditors' restatement disclosures and the timeliness of such disclosures. Appendix I incorporates four recommendations as specific changes to Bulletin No. 06-03 that focus on the auditor's timely disclosure of material misstatement(s) or potential material misstatement(s) and the related effect(s) of such in the previously issued financial statements and presentation and disclosure of restatements in the audit report. In oral comments on a draft of this report, OMB stated that it would take our recommendations under advisement, but that there were no current plans to update guidance that has been recently issued.
OMB also noted that any future plans to update guidance would carefully consider issues already being addressed by the AICPA's Codification of Auditing Standards. In addition, OMB provided some technical comments, which we have incorporated as appropriate. As noted in this report, we found inconsistent communications and insufficient disclosures of financial statement restatements by agency management and their auditors. As such, we reiterate our concern that it is critical for OMB to timely offer separate, though complementary, guidance to agency management and to agency auditors that provides more explicit and detailed guidance concerning their respective roles and responsibilities when an actual or potential material misstatement is identified in previously issued financial statements. Separate guidance is important because agency management and agency auditors have different roles and responsibilities. For example, management is responsible for preparing the financial statements and adjusting them to correct any material misstatements. The auditor is responsible for expressing or disclaiming an opinion on the financial statements prepared by management. The auditor has certain additional responsibilities should management not properly respond to actual or potential material misstatements. This report contains recommendations to the Director of OMB. The head of a federal agency is required by 31 U.S.C. § 720 to submit a written statement on actions taken on these recommendations. You should submit your statement to the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Government Reform within 60 days of the date of this report. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Homeland Security and Governmental Affairs; the Subcommittee on Federal Financial Management, Government Information, and International Security, Senate Committee on Homeland Security and Governmental Affairs; the House Committee on Government Reform; and the Subcommittee on Government Management, Finance, and Accountability, House Committee on Government Reform. In addition, we are sending copies to the Secretary of the Treasury and the Fiscal Assistant Secretary of the Treasury. Copies will be made available to others upon request. This report is also available at no charge on GAO's Web site at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by OMB and the nine CFO Act agencies' management and their respective auditors throughout our work. We look forward to continuing to work with your office to help improve financial management in the federal government. If you or your staff have any questions about the contents of this report, please contact Gary T. Engel, Director, Financial Management and Assurance, at (202) 512-3406 or by e-mail at [email protected]. Staff acknowledgments are provided in appendix II. As noted throughout this report, we believe that additional restatement guidance is needed for both the agencies' management and their respective auditors. To facilitate this process, we are providing the following 11 requirements that, in our view, should be incorporated into Office of Management and Budget (OMB) Circular No. A-136 and OMB Bulletin No. 06-03.
GAO recommends that the following requirement be added to sections II.4.3.1, II.4.4.1, II.4.5.1, II.4.6.1, and II.4.7.1 of OMB Circular No. A-136, Financial Reporting Requirements: Agencies shall label restated financial statements as “Restated.” GAO recommends that the “Management Actions Related to Corrections of Errors” subsection of section II.4.5.5 of OMB Circular No. A-136, Financial Reporting Requirements, be modified to read as follows: If the agency’s management becomes aware of a material misstatement(s) or potential material misstatement(s) affecting a previously issued financial statement(s), then the agency’s management, in coordination with their respective auditor, shall do the following: 1. Communicate the following information to those charged with governance, oversight bodies, funding agencies, and others who are relying or are likely to rely on the financial statement(s). This includes communication (1) in writing to the Congress, OMB, Treasury, and GAO; (2) to the public on the Internet pages where the agency’s previously issued financial statements that were affected by the material misstatement(s) or potential material misstatement(s) are published; and (3) to OMB in the agency’s next quarterly financial statements and in subsequent quarterly financial statements until the specific amount(s) of the material misstatement(s) and the related effect(s) of such on the previously issued financial statement(s) are known and reported: (a) the nature and cause(s) of the known or likely material misstatement(s), (b) the amount(s) of known or likely material misstatement(s) and the related effect(s) on the previously issued financial statement(s) (e.g., disclosure of the specific financial statement(s) and line item(s) affected). If this information is not known, then the disclosure includes information that is known and a statement that management cannot determine the amount(s) and the related effect(s) on the previously issued financial statement(s) without further investigation, and (c) a notice that (1) a previously issued financial statement(s) will or may be restated and, therefore, (2) the related auditor’s report is no longer reliable. 2. Promptly determine the financial statement effects of the known or potential material misstatement(s) on the previously issued financial statement(s). (a) If the specific amount(s) of the material misstatement(s) and the related effect(s) of such on a previously issued financial statement(s) are known and issuance of the subsequent period audited financial statements is not imminent, then the agency’s management shall promptly: i. reissue the most recently issued fiscal year financial statements before issuing the current fiscal year’s financial statements; ii. communicate the reissuance to those charged with governance, oversight bodies, funding agencies, and others who are relying or are likely to rely on the financial statement(s). This includes communication (a) in writing to the Congress, OMB, Treasury, and GAO and (b) to the public on the Internet pages where the agency’s previously issued financial statements that were affected by the material misstatement(s) are published; and iii. disclose the following information, at a minimum, in the agency’s restatement footnotes: 1. the nature and cause(s) of the misstatement(s) that led to the need for restatement, and 2. 
the specific amount(s) of the material misstatement(s) and the related effect(s) on the previously issued financial statement(s) (e.g., year(s) being restated, specific financial statement(s) affected and line items restated, actions the agency's management took after discovering the misstatement), and the impact on the financial statements as a whole (e.g., change in overall net position, change in the audit opinion). (b) If the specific amount(s) of the material misstatement(s) and the related effect(s) of such on a previously issued financial statement(s) are known and issuance of the subsequent period audited financial statements is imminent, then the agency's management shall: i. issue restated financial statement(s) as part of the current year's comparative financial statements; ii. communicate the restatement to those charged with governance, oversight bodies, funding agencies, and others who are relying or are likely to rely on the financial statement(s). This includes communication (a) in writing to the Congress, OMB, Treasury, and GAO and (b) to the public on the Internet pages where the agency's previously issued financial statements that were affected by the material misstatement(s) are published; and iii. disclose the following information, at a minimum, in the agency's restatement footnote: 1. the nature and cause(s) of the misstatement(s) that led to the need for restatement, and 2. the specific amount(s) of the material misstatement(s) and the related effect(s) on the previously issued financial statement(s) (e.g., year(s) being restated, specific financial statement(s) affected and line items restated, actions the agency's management took after discovering the misstatement), and the impact on the financial statements as a whole (e.g., change in overall net position, change in the audit opinion). (c) If the specific amount(s) of the misstatement(s) and the related effect(s) of such on a previously issued financial statement(s) remain unknown when the current year's financial statements are issued, then the agency's management shall follow section II.4.5.5 (1) above and include the following, at a minimum, in its restatement footnote: i. a statement disclosing that a material misstatement(s) or potential material misstatement(s) affects a previously issued financial statement(s), but the specific amount(s) of the misstatement(s) and the related effect(s) of such are not known, ii. the nature and cause(s) of the misstatement(s) or potential misstatement(s), iii. an estimate of the magnitude of the misstatement(s) or potential misstatement(s) and the related effect(s) of such on a previously issued financial statement(s) (e.g., disclosure of the specific financial statement(s) and line items affected) that are known and a statement that the specific amount(s) and the related effect(s) of such cannot be determined without further investigation, and iv. a statement disclosing that a restatement(s) to a previously issued financial statement(s) will or may occur. GAO also recommends that the following requirement be added to the "Corrections of Errors" subsection of section II.4.5.5 of OMB Circular No. A-136, Financial Reporting Requirements: The Statement of Changes in Net Position's current year's unadjusted beginning balances shall agree with the restated ending balances on the agency's prior year's Statement of Changes in Net Position. GAO recommends that section II.4.10.43 of OMB Circular No.
A-136, Financial Reporting Requirements, be revised to: clarify the definition of the "nature" of an error, include an explanation that the disclosure of the "amounts being restated" specifically refers to the disclosure of the specific line items restated and the related amounts, and clarify how an agency should specifically further discuss the actions management took after discovering the error. GAO recommends that the following requirement be added to section II.2.7 of OMB Circular No. A-136, Financial Reporting Requirements, which discusses guidance for information included in the Management Discussion and Analysis (MD&A): Agencies' management shall disclose the existence of restatements in the MD&A if the agency asserts in its MD&A that it received an unqualified opinion on any previously issued financial statement and that respective financial statement was subsequently restated. GAO recommends that section 5.2 of OMB Bulletin No. 06-03, Audit Requirements for Federal Financial Statements, be modified to read as follows: 5.2 The nature or amount of known or likely misstatement(s) in previously issued audited financial statement(s) may lead the auditor to believe that the auditor's report would or could reasonably have been affected if the auditor had known of the misstatement(s) when the auditor issued the auditor's report. When this condition exists, the auditor shall advise management to communicate the following information to those charged with governance, oversight bodies, funding agencies, and others who are relying or are likely to rely on the financial statement(s): the nature and cause(s) of the known or likely material misstatement(s), the amount(s) of known or likely material misstatement(s) and the related effect(s) on the previously issued financial statement(s) (e.g., disclosure of the specific financial statement(s) and line item(s) affected). If this information is not known, then the disclosure includes information that is known and a statement that management cannot determine the amount(s) and the related effect(s) on the previously issued financial statement(s) without further investigation, and a notice that (1) a previously issued financial statement(s) will or may be restated and, therefore, (2) the related auditor's report is no longer reliable. This includes communication (1) in writing to the Congress, OMB, Treasury, and GAO; (2) to the public on the Internet pages where the agency's previously issued financial statements that were affected by the material misstatement(s) or potential material misstatement(s) are published; and (3) to OMB in the agency's next quarterly financial statements and in subsequent quarterly financial statements until the specific amount(s) of the material misstatement(s) and the related effect(s) of such on the previously issued financial statement(s) are known and reported. GAO also recommends that the following requirements be added to section 5 of OMB Bulletin No. 06-03, Audit Requirements for Federal Financial Statements, as follows: 5.3 The auditor shall review the adequacy of management's communication of information about the known or potential material misstatement(s) to report users, including those charged with governance, oversight bodies, and funding agencies.
When performing this review, the auditor shall consider whether (1) management acted timely to determine the financial statement effects of the potential material misstatement(s), (2) management acted timely to communicate with appropriate parties, and (3) management disclosed the nature and extent of the known or likely material misstatement(s) on Internet pages where the agency's previously issued financial statements are published. 5.4 The auditor shall notify those charged with governance if the auditor believes that management is unduly delaying its determination of the effect(s) of the misstatement(s) on a previously issued financial statement(s). 5.5 The auditor shall evaluate the timeliness and appropriateness of management's decision whether to issue restated financial statement(s). Management may separately issue the restated financial statement(s) or may present the restated financial statement(s) on a comparative basis with those of a subsequent period. Ordinarily, the auditor would expect management to issue restated financial statement(s) as soon as practicable. However, it may not be necessary for management to separately issue the restated financial statement(s) and the auditor's report when issuance of the subsequent period audited financial statements is imminent. 5.6 If the auditor becomes aware of a material misstatement(s) or potential misstatement(s) affecting a previously issued financial statement(s), then the auditor shall advise the agency's management to determine the specific amount(s) of the material misstatement(s) or potential material misstatement(s) and the related effect(s) of such on the previously issued financial statement(s) as soon as reasonably possible. 5.7 If the specific amount(s) of the material misstatement(s) and the related effect(s) of such on a previously issued financial statement(s) are known and the issuance of the subsequent period audited financial statements is not imminent, then the auditor shall advise the agency's management to promptly: reissue the most recently issued fiscal year financial statements before issuing the current fiscal year's financial statements; communicate the reissuance to those charged with governance, oversight bodies, funding agencies, and others who are relying or are likely to rely on the financial statement(s). This includes communication (1) in writing to the Congress, OMB, Treasury, and GAO and (2) to the public on the Internet pages where the agency's previously issued financial statements that were affected by the material misstatement(s) are published; and disclose the following information, at a minimum, in the agency's restatement footnotes: (1) the nature and cause(s) of the misstatement(s) that led to the need for restatement, and (2) the specific amount(s) of the material misstatement(s) and the related effect(s) on the previously issued financial statement(s) (e.g., year(s) being restated, specific financial statement(s) affected and line items restated, actions the agency's management took after discovering the misstatement), and the impact on the financial statements as a whole (e.g., change in overall net position, change in the audit opinion).
5.8 If the specific amount(s) of the material misstatement(s) and the related effect(s) of such on a previously issued financial statement(s) are known and issuance of the subsequent period audited financial statements is imminent, then the auditor shall disclose restatements in the auditor's report as listed in 7.7 and advise the agency's management to: issue restated financial statement(s) as part of the current year's comparative financial statements; communicate the restatement to those charged with governance, oversight bodies, funding agencies, and others who are relying or are likely to rely on the financial statement(s). This includes communication (a) in writing to the Congress, OMB, Treasury, and GAO and (b) to the public on the Internet pages where the agency's previously issued financial statements that were affected by the material misstatement(s) are published; and disclose the following information, at a minimum, in the agency's restatement footnote: (1) the nature and cause(s) of the misstatement(s) that led to the need for restatement and (2) the specific amount(s) of the material misstatement(s) and the related effect(s) on the previously issued financial statement(s) (e.g., year(s) being restated, specific financial statement(s) affected and line items restated, actions the agency's management took after discovering the misstatement), and the impact on the financial statements as a whole (e.g., change in overall net position, change in the audit opinion). 5.9 If the specific amount(s) of the misstatement(s) and the related effect(s) of such on a previously issued financial statement(s) remain unknown when the current year's financial statements are issued, then the auditor shall follow 7.8 when issuing the auditor's report and advise the agency's management as required in 5.2. 5.10 The auditor shall notify those charged with governance, oversight bodies, and funding agencies when management (1) does not take the necessary steps to promptly inform report users of the situation or (2) does not restate with appropriate timeliness the financial statements in circumstances when the auditor believes they need to be restated. The auditor shall inform these parties that the auditor will take steps to prevent future reliance on the auditor's report. The steps taken will depend on the facts and circumstances, including legal considerations. This includes communication in writing to the Congress, OMB, Treasury, and GAO as well as any other users known to be relying on the previously issued financial statement(s). GAO recommends that section 7.7 of OMB Bulletin No. 06-03, Audit Requirements for Federal Financial Statements, be modified to read as follows: 7.7 When management restates a previously issued financial statement(s), the auditor shall perform audit procedures sufficient to reissue or update the auditor's report on the restated financial statement(s). The auditor shall fulfill these responsibilities whether the restated financial statement(s) are separately issued or presented on a comparative basis with those of a subsequent period.
The auditor shall include the following information in an explanatory paragraph in the reissued or updated auditor's report on the restated financial statement(s): a statement disclosing that a previously issued financial statement(s) has been restated, a statement that the previously issued financial statement(s) was materially misstated and that the previously issued auditor's report (including report date) is withdrawn and replaced by the auditor's report on the restated financial statement(s), a reference to the note(s) to the restated financial statement(s) that discusses the restatement, a description of the following if not already provided in the note(s) to the financial statement(s): (1) the nature and cause(s) of the misstatement(s) that led to the need for restatement and (2) the specific amount(s) of the material misstatement(s) and the related effect(s) on the previously issued financial statement(s) (e.g., year(s) being restated and the specific financial statement(s) affected and line items restated) and the impact on the financial statements as a whole (e.g., change in overall net position, change in the audit opinion), and a discussion of any significant internal control deficiency that failed to prevent or detect the misstatement and what action management has taken about the deficiency. GAO also recommends that the following requirements be added to section 7 of OMB Bulletin No. 06-03, Audit Requirements for Federal Financial Statements, as follows: 7.8 If at the time of issuance of the auditor's report a material misstatement(s) or potential material misstatement(s) has been identified in any of the previously issued financial statements and the specific amount(s) of the misstatement(s) and the related effect(s) of such are not known, then the auditor shall update the auditor's report on the previously issued financial statement(s) as appropriate. Furthermore, the auditor's report shall disclose, at a minimum, the following: a statement disclosing that a material misstatement(s) or potential material misstatement(s) affects a previously issued financial statement(s) but the specific amount(s) of the misstatement(s) and the related effect(s) of such are not known; a reference to note(s) to the financial statements that discusses the restatement or potential restatement; a description of the following, if not already provided in the agency's note(s) to the financial statements: (1) the nature and cause(s) of the misstatement(s) or potential misstatement(s), and (2) an estimate of the magnitude of the misstatement(s) or potential misstatement(s) and the related effect(s) of such on a previously issued financial statement(s) (e.g., disclosure of the specific financial statement(s) and line items affected) that are known and a statement that the specific amount(s) and the related effect(s) of such cannot be determined without further investigation; and a statement disclosing that a restatement(s) to a previously issued financial statement(s) will or may occur. Gary T. Engel, (202) 512-3406. Arthur W. Brouk, Alberto Garza, Michael D. Hansen, Malissa Livingston, and Michelle Philpott made key contributions to this report. Fiscal Year 2004 U.S. Government Financial Statements: Sustained Improvement in Federal Financial Management Is Crucial to Addressing Our Nation's Future Fiscal Challenges. GAO-05-284T. Washington, D.C.: February 9, 2005. Financial Audit: Restatements to the Department of State's Fiscal Year 2003 Financial Statements. GAO-05-814R. Washington, D.C.: September 20, 2005.
Financial Audit: Restatements to the Nuclear Regulatory Commission's Fiscal Year 2003 Financial Statements. GAO-06-30R. Washington, D.C.: October 27, 2005. Financial Audit: Restatement to the General Services Administration's Fiscal Year 2003 Financial Statements. GAO-06-70R. Washington, D.C.: December 6, 2005. Financial Audit: Restatements to the National Science Foundation's Fiscal Year 2003 Financial Statements. GAO-06-229R. Washington, D.C.: December 22, 2005. Financial Audit: Restatements to the Department of Agriculture's Fiscal Year 2003 Consolidated Financial Statements. GAO-06-254R. Washington, D.C.: January 26, 2006. Fiscal Year 2005 U.S. Government Financial Statements: Sustained Improvement in Federal Financial Management Is Crucial to Addressing Our Nation's Financial Condition and Long-term Fiscal Imbalance. GAO-06-406T. Washington, D.C.: March 1, 2006. | GAO continues to have concerns about restatements to federal agencies' previously issued financial statements. During fiscal year 2005, at least 7 of the 24 Chief Financial Officers (CFO) Act agencies restated certain of their fiscal year 2004 financial statements to correct misstatements. To study this trend, GAO reviewed the nature and causes of the restatements made by certain CFO Act agencies in fiscal year 2004 to their fiscal year 2003 financial statements. Eleven CFO Act agencies had restatements for fiscal year 2003. Nine of those 11 received unqualified opinions on their originally issued fiscal year 2003 financial statements. GAO's view is that users of federal agencies' financial statements and the related audit reports need to be provided at least a basic understanding of why a restatement was necessary and its effect on the agencies' previously issued financial statements and related audit reports. This report communicates GAO's observations on the transparency and timeliness of the 9 federal agencies' and their auditors' restatement disclosures. The nine agencies GAO reviewed did not consistently communicate financial statement restatements. GAO found that all nine agencies could have greatly enhanced the adequacy, effectiveness, and timeliness of their restatement disclosures to users. Similar transparency issues existed with the associated audit reports regarding disclosure of all the essential information that would clearly explain the restatements. GAO highlighted the following as among the more prevalent issues to be addressed: 1) columns of the agencies' restated financial statements were not labeled as "Restated"; 2) agencies' restatement footnote disclosures lacked clarity or sufficient detail regarding the nature of the restatements and the effect on balances reported in previously issued financial statements; 3) restatement information was not sufficiently disclosed in the agencies' Management Discussion and Analysis; 4) audit reports did not disclose that the respective agencies had restated certain of their fiscal year 2003 financial statements; 5) audit reports did not provide a statement that the previously issued audit report was withdrawn and replaced by the opinion on the restated financial statements; and 6) material misstatements and potential material misstatements were not timely communicated by agencies to either their auditors or to the users of the financial statements.
The primary contributing factor for the restatement disclosure issues that GAO identified was insufficient guidance available at the time to both the agencies' management and their respective auditors for disclosure of the restatements and the timeliness of such disclosures. GAO believes that information regarding restatements should be disclosed in a transparent and timely manner consistent with the qualitative characteristics of information in financial reports described in Statement of Federal Financial Accounting Concepts (SFFAC) No. 1. In GAO's view, more detailed accounting and auditing guidance on how to satisfy the financial reporting characteristics outlined in SFFAC No. 1 as they relate to the disclosure of restatements would have been helpful. OMB revised Circular No. A-136, Financial Reporting Requirements, which provides additional guidance to federal agencies' management regarding disclosure of restatements to previously issued financial statements. Revisions made to OMB Circular No. A-136 address many of GAO's concerns regarding the agencies' disclosure of restatements. In addition, the proposed 2006 revision of generally accepted government auditing standards now includes a section on reporting on restatement of previously issued financial statements. Further, on August 23, 2006, OMB issued Bulletin No. 06-03, which also provides some information regarding reporting on restatements. However, GAO believes that OMB needs to timely provide separate, though complementary, restatement guidance to both the agencies' management and their respective auditors.
Pursuant to Homeland Security Presidential Directive 6, the Attorney General established TSC in September 2003 to consolidate the government's approach to terrorism screening and provide for the appropriate and lawful use of terrorist information in screening processes. TSC's consolidated watch list is the U.S. government's master repository for all records of known or appropriately suspected international and domestic terrorists used for watch list-related screening. When an individual makes an airline reservation, arrives at a U.S. port of entry, applies for a U.S. visa, or is stopped by state or local police within the United States, the frontline screening agency or airline conducts a name-based search of the individual against applicable terrorist watch list records. In general, when the computerized name-matching system of an airline or screening agency generates a "hit" (a potential name match) against a watch list record, the airline or agency is to review each potential match. Any obvious mismatches (negative matches) are to be resolved by the airline or agency, if possible, as discussed in our September 2006 report on terrorist watch list screening. However, clearly positive or exact matches and matches that are inconclusive (difficult to verify) generally are to be referred to TSC to confirm whether the individual is a match to the watch list record. TSC is to refer positive and inconclusive matches to the FBI to provide an opportunity for a counterterrorism response. Deciding what action to take, if any, can involve collaboration among the frontline screening agency, the National Counterterrorism Center or other intelligence community members, and the FBI or other investigative agencies. If necessary, a member of an FBI Joint Terrorism Task Force can respond in person to interview and obtain additional information about the person encountered. In other cases, the FBI will rely on the screening agency and other law enforcement agencies—such as U.S. Immigration and Customs Enforcement—to respond and collect information. Figure 1 presents a general overview of the process used to resolve encounters with individuals on the terrorist watch list. To build upon and provide additional guidance related to Homeland Security Presidential Directive 6, in August 2004, the President signed Homeland Security Presidential Directive 11. Among other things, this directive required the Secretary of Homeland Security—in coordination with the heads of appropriate federal departments and agencies—to submit two reports to the President (through the Assistant to the President for Homeland Security) related to the government's approach to terrorist-related screening. The first report was to outline a strategy to enhance the effectiveness of terrorist-related screening activities by developing comprehensive and coordinated procedures and capabilities. The second report was to provide a prioritized investment and implementation plan for detecting and interdicting suspected terrorists and terrorist activities. Specifically, the plan was to describe the "scope, governance, principles, outcomes, milestones, training objectives, metrics, costs, and schedule of activities" to implement the U.S. government's terrorism-related screening policies.
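The encounter-referral flow summarized in figure 1 can be sketched compactly. The following Python sketch is purely illustrative: the function names, inputs, and return strings are invented for clarity and do not depict TSC's or any screening agency's actual systems; it simply mirrors the two-step division of labor described above, in which frontline screeners clear obvious mismatches and TSC performs the authoritative identity confirmation before an FBI referral.

```python
# Purely illustrative sketch of the encounter-referral flow described
# above. Function names and return strings are invented; this is not
# TSC's or any screening agency's actual software.

def screener_review(obvious_mismatch: bool) -> str:
    """Frontline agency or airline review of a name-match 'hit'."""
    if obvious_mismatch:
        return "negative match: resolved by the airline or screening agency"
    # Clearly positive and inconclusive matches are referred onward.
    return "referred to TSC to confirm the match"

def tsc_review(is_positive_or_inconclusive: bool) -> str:
    """TSC confirmation of a referred hit."""
    if is_positive_or_inconclusive:
        # Positive and inconclusive matches go to the FBI so a
        # counterterrorism response can be considered.
        return "referred to the FBI"
    return "mismatch: individual cleared"

if __name__ == "__main__":
    print(screener_review(obvious_mismatch=False))
    print(tsc_review(is_positive_or_inconclusive=True))
```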
The National Counterterrorism Center and the FBI rely upon standards of reasonableness in determining which individuals are appropriate for inclusion on TSC's consolidated watch list. In accordance with Homeland Security Presidential Directive 6, TSC's watch list is to contain information about individuals "known or appropriately suspected to be or have been engaged in conduct constituting, in preparation for, in aid of, or related to terrorism." In implementing this directive, the National Counterterrorism Center and the FBI strive to ensure that individuals who are reasonably suspected of having possible links to terrorism—in addition to individuals with known links—are nominated for inclusion on the watch list. To determine if the suspicions are reasonable, the National Counterterrorism Center and the FBI are to assess all available information on the individual. According to the National Counterterrorism Center, determining whether to nominate an individual can involve some level of subjectivity. Nonetheless, any individual reasonably suspected of having links to terrorist activities is to be nominated to the list and remain on it until the FBI or the agency that supplied the information supporting the nomination, such as one of the intelligence agencies, determines the person is not a threat and should be removed from the list. Moreover, according to the FBI, individuals who are subjects of ongoing FBI counterterrorism investigations are generally nominated to TSC for inclusion on the watch list, including persons who are being preliminarily investigated to determine if they have links to terrorism. In determining whether to open an investigation, the FBI uses guidelines established by the Attorney General. These guidelines contain specific standards for opening investigations, including formal review and approval processes. According to FBI officials, there must be a "reasonable indication" of involvement in terrorism before opening an investigation. The FBI noted, for example, that it is not sufficient to open an investigation based solely on a neighbor's complaint or an anonymous tip or phone call. If an investigation does not establish a terrorism link, the FBI generally is to close the investigation and request that TSC remove the person from the watch list. Based on these standards, the number of records in TSC's consolidated watch list has increased from about 158,000 records in June 2004 to about 755,000 records as of May 2007 (see fig. 2). It is important to note that the total number of records in TSC's watch list does not represent the total number of individuals on the watch list. Rather, if an individual has one or more known aliases, the watch list will contain multiple records for the same individual, as the brief example below illustrates. TSC's watch list database is updated daily with new nominations, modifications to existing records, and deletions. Because individuals can be added to the list based on reasonable suspicion, inclusion on the list does not automatically prohibit an individual from, for example, obtaining a visa or entering the United States when the person is identified by a screening agency. Rather, when an individual on the list is encountered, agency officials are to assess the threat the person poses to determine what action to take, if any.
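To make the record-versus-individual distinction concrete, the toy example below assumes a hypothetical record layout (a person identifier plus a name); the schema and data are invented for illustration and do not reflect TSC's actual database design.

```python
# Hypothetical watch list records: one individual may appear under
# several alias records. Schema and data are invented for illustration
# only and do not reflect TSC's actual database.
records = [
    {"person_id": "P-001", "name": "John Doe"},
    {"person_id": "P-001", "name": "Jon Dough"},   # alias of P-001
    {"person_id": "P-002", "name": "Jane Roe"},
]

total_records = len(records)                                   # 3
distinct_individuals = len({r["person_id"] for r in records})  # 2
print(f"{total_records} records, {distinct_individuals} individuals")
```

Counting distinct person identifiers rather than records yields the number of individuals; the 755,000 figure cited above counts records.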
From December 2003 (when TSC began operations) through May 2007, screening and law enforcement agencies encountered individuals who were positively matched to watch list records approximately 53,000 times, according to TSC data. A breakdown of these encounters shows that the number of matches has increased each year—from 4,876 during the first 10-month period of TSC's operations to 14,938 during fiscal year 2005 and to 19,887 during fiscal year 2006. This increase can be attributed partly to the growth in the number of records in the consolidated terrorist watch list and partly to the increase in the number of agencies that use the list for screening purposes. Our analysis of TSC data also indicates that many individuals were encountered multiple times. For example, a truck driver who regularly crossed the U.S.-Canada border or an individual who frequently took international flights could each account for multiple encounters. Further, TSC data show that the highest percentage of encounters involved screening within the United States by a state or local law enforcement agency, U.S. government investigative agency, or other governmental entity. The next highest percentage involved border-related encounters, such as passengers on airline flights inbound from outside the United States or individuals screened at land ports of entry. The lowest percentage of encounters occurred outside of the United States. The watch list has enhanced the U.S. government's counterterrorism efforts by allowing federal, state, and local screening and law enforcement officials to obtain information to help them make better-informed decisions during encounters regarding the level of threat a person poses and the appropriate response to take, if any. The specific outcomes of encounters with individuals on the watch list are based on the government's overall assessment of the intelligence and investigative information that supports the watch list record and any additional information that may be obtained during the encounter. Our analysis of data on the outcomes of encounters revealed that agencies took a range of actions, such as arresting individuals, denying others entry into the United States, and most commonly, releasing the individuals following questioning and information gathering. TSC data show that agencies reported arresting many subjects of watch list records for various reasons, such as the individual having an outstanding arrest warrant or the individual's behavior or actions during the encounter. TSC data also indicated that some of the arrests were based on terrorism grounds. TSC data show that when visa applicants were positively matched to terrorist watch list records, the outcomes included visas denied, visas issued (because the consular officer did not find any statutory basis for inadmissibility), and visa ineligibility waived. Transportation Security Administration data show that when airline passengers were positively matched to the No Fly or Selectee lists, the vast majority of matches were to the Selectee list. Other outcomes included individuals matched to the No Fly list and denied boarding (did not fly) and individuals matched to the No Fly list after the aircraft was in flight. Additional information on individuals on the watch list passing undetected through agency screening is presented later in this statement. U.S. Customs and Border Protection data show that a number of nonimmigrant aliens encountered at U.S. ports of entry were positively matched to terrorist watch list records. For many of the encounters, the agency determined there was sufficient information related to watch list records to preclude admission under terrorism grounds.
However, for most of the encounters, the agency determined that there was not sufficient information related to the records to preclude admission. TSC data show that state or local law enforcement officials have encountered individuals who were positively matched to terrorist watch list records thousands of times. Although data on the actual outcomes of these encounters were not available, the vast majority involved watch list records indicating that the individuals were to be released unless there were reasons other than terrorism-related grounds for arresting or detaining them, such as an outstanding arrest warrant. Also, according to federal officials, encounters with individuals who were positively matched to the watch list assisted government efforts in tracking the respective person's movements or activities and provided the opportunity to collect additional information about the individual. The information collected was shared with agents conducting counterterrorism investigations and with the intelligence community for use in analyzing threats. Such coordinated collection of information for use in investigations and threat analyses is one of the stated policy objectives for the watch list. The principal screening agencies whose missions most frequently and directly involve interactions with travelers do not check against all records in TSC's consolidated watch list because screening against certain records (1) may not be needed to support the respective agency's mission, (2) may not be possible due to the requirements of computer programs used to check individuals against watch list records, or (3) may not be operationally feasible. Rather, each day, TSC exports applicable records from the consolidated watch list to federal government databases that agencies use to screen individuals for mission-related concerns. For example, the database that U.S. Customs and Border Protection uses to check incoming travelers for immigration violations, criminal histories, and other matters contained the highest percentage of watch list records as of May 2007. This is because its mission is to screen all travelers, including U.S. citizens, entering the United States at ports of entry. The database that the Department of State uses to screen applicants for visas contained the second highest percentage of all watch list records. This database does not include records on U.S. citizens and lawful permanent residents because these individuals would not apply for U.S. visas. The FBI database that state and local law enforcement agencies use for screening contained the third highest percentage of watch list records. According to the FBI, the remaining records were not included in this database primarily because they did not contain sufficient identifying information on the individual, which is required to minimize instances of individuals being misidentified as being subjects of watch list records. Further, the No Fly and Selectee lists disseminated by the Transportation Security Administration to airlines for use in prescreening passengers contained the lowest percentage of watch list records. The lists did not contain the remaining records either because they (1) did not meet the nomination criteria for the No Fly or Selectee list or (2) did not contain sufficient identifying information on the individual.
According to the Department of Homeland Security, increasing the number of records used to prescreen passengers would expand the number of misidentifications to unjustifiable proportions without a measurable increase in security. While we understand the FBI's and the Department of Homeland Security's concerns about misidentifications, we still believe it is important that federal officials assess the extent to which security risks arise from not screening against certain watch list records and what actions, if any, should be taken in response. Also, Department of Homeland Security component agencies are taking steps to address instances of individuals on the watch list passing undetected through agency screening. For example, U.S. Customs and Border Protection has encountered situations where it identified the subject of a watch list record after the individual had been processed at a port of entry and admitted into the United States. U.S. Customs and Border Protection has created a working group within the agency to study the causes of this vulnerability and has begun to implement corrective actions. U.S. Citizenship and Immigration Services—the agency responsible for screening persons who apply for U.S. citizenship or immigration benefits—has also acknowledged areas that need improvement in the processes used to detect subjects of watch list records. According to agency representatives, each instance of an individual on the watch list getting through agency screening is reviewed to determine the cause, with appropriate follow-up and corrective action taken, if needed. The agency is also working with TSC to enhance screening effectiveness. Further, Transportation Security Administration data show that in the past, a number of individuals who were on the government's No Fly list passed undetected through airlines' prescreening of passengers and flew on international flights bound to or from the United States. The individuals were subsequently identified in-flight by U.S. Customs and Border Protection, which checks passenger names against watch list records to help the agency prepare for the passengers' arrival in the United States. However, the potential onboard security threats posed by the undetected individuals required an immediate counterterrorism response, which in some instances resulted in diverting the aircraft to a new location. According to the Transportation Security Administration, such incidents were subsequently investigated and, if needed, corrective action was taken with the respective air carrier. In addition, U.S. Customs and Border Protection has issued a final rule that should better position the government to identify individuals on the No Fly list before an international flight is airborne. For domestic flights within the United States, there is no second screening opportunity—like the one U.S. Customs and Border Protection conducts for international flights. The government plans to take over from air carriers the function of prescreening passengers against watch list records prior to departure for both international and domestic flights. Also, TSC has ongoing initiatives to help reduce instances of individuals on the watch list passing undetected through agency screening, including efforts to improve computerized name-matching programs.
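Our report does not describe the internal workings of these name-matching programs. The sketch below is a generic illustration of how threshold-based name matching trades missed matches against misidentifications; the list entries, threshold value, and use of Python's standard difflib are assumptions for illustration only, not the actual screening algorithm.

```python
from difflib import SequenceMatcher

# Hypothetical watch list entries and threshold; illustrative only,
# not an operational list or an actual cutoff value.
WATCH_LIST = ["mohammed al-example", "john q. sample"]
THRESHOLD = 0.85  # assumed cutoff for flagging a potential match

def potential_hits(passenger_name: str) -> list:
    """Return watch list entries similar enough to the passenger's
    name to count as a 'hit' requiring manual review."""
    name = passenger_name.lower().strip()
    hits = []
    for entry in WATCH_LIST:
        score = SequenceMatcher(None, name, entry).ratio()
        if score >= THRESHOLD:
            hits.append((entry, round(score, 2)))
    return hits

print(potential_hits("Mohamed Al-Example"))  # spelling variant -> hit
print(potential_hits("Alice Unrelated"))     # -> []
```

Lowering the assumed threshold would catch more spelling variants, so fewer individuals pass undetected, at the cost of more misidentifications, the same tension the Department of Homeland Security and the FBI describe above.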
Although the federal government has made progress in using the consolidated watch list for screening purposes, additional opportunities exist for using the list. Internationally, the Department of State has made progress in making bilateral arrangements to share terrorist screening information with certain foreign governments. The department had two such arrangements in place before September 11, 2001. More recently, the department has made four new arrangements and is in negotiations with several other countries. Also, the Department of Homeland Security has made progress in using watch list records to screen employees in some critical infrastructure components of the private sector, including certain individuals who have access to vital areas of nuclear power plants, work in airports, or transport hazardous materials. However, many critical infrastructure components are not using watch list records. The Department of Homeland Security has not, consistent with Homeland Security Presidential Directive 6, finalized guidelines to support private sector screening processes that have a substantial bearing on homeland security. Finalizing such guidelines would help both the private sector and the Department of Homeland Security ensure that private sector entities are using watch list records consistently, appropriately, and effectively to protect their workers, visitors, and key critical assets. Further, federal departments and agencies have not identified all appropriate opportunities for which terrorist-related screening will be applied, in accordance with presidential directives. A primary reason why screening opportunities remain untapped is that the government lacks an up-to-date strategy and implementation plan—supported by a clearly defined leadership or governance structure—for enhancing the effectiveness of terrorist-related screening, consistent with presidential directives. Without an up-to-date strategy and plan, agencies and organizations that conduct terrorist-related screening activities do not have a foundation for a coordinated approach that is driven by an articulated set of core principles. Furthermore, lacking clearly articulated principles, milestones, and outcome measures, the federal government is not easily able to provide accountability and a basis for monitoring to ensure that (1) the intended goals for, and expected results of, terrorist screening are being achieved and (2) use of the list is consistent with privacy and civil liberties. These plan elements, which were prescribed by presidential directives, are crucial for coordinated and comprehensive use of terrorist-related screening data, as they provide a platform to establish governmentwide priorities for screening, assess progress toward policy goals and intended outcomes, ensure that any needed changes are implemented, and respond to issues that hinder effectiveness. Although all elements of a strategy and implementation plan cited in presidential directives are important to guide realization of the most effective use of watch list data, addressing governance is particularly vital, as achievement of a coordinated and comprehensive approach to terrorist-related screening involves numerous entities within and outside the federal government. However, no clear lines of responsibility and authority have been established to monitor governmentwide screening activities for shared problems and solutions or best practices.
Nor does any existing entity clearly have the requisite authority to address various governmentwide issues—such as assessing common gaps or vulnerabilities in screening processes and identifying, prioritizing, and implementing new screening opportunities. Thus, it is important that the Assistant to the President for Homeland Security and Counterterrorism address these deficiencies by ensuring that an appropriate governance structure has clear and adequate responsibility and authority to (a) provide monitoring and analysis of watch list screening efforts governmentwide, (b) respond to issues that hinder effectiveness, and (c) assess progress toward intended outcomes.

Managed by TSC, the consolidated terrorist watch list represents a major step forward from the pre-September 11 environment of multiple, disconnected, and incomplete watch lists throughout the government. Today, the watch list is an integral component of the U.S. government’s counterterrorism efforts. However, our work indicates that there are additional opportunities for reducing potential screening vulnerabilities, expanding use of the watch list, and enhancing management oversight. Thus, we have made several recommendations to the heads of relevant departments and agencies. Our recommendations are intended to help (1) mitigate security vulnerabilities in terrorist watch list screening processes that arise when screening agencies do not use certain watch list records and (2) optimize the use and effectiveness of the watch list as a counterterrorism tool. Such optimization should include development of guidelines to support private sector screening processes that have a substantial bearing on homeland security, as well as development of an up-to-date strategy and implementation plan for using terrorist-related information. Further, to help ensure that governmentwide terrorist-related screening efforts are effectively coordinated, we have also recommended that the Assistant to the President for Homeland Security and Counterterrorism ensure that an appropriate leadership or governance structure has clear lines of responsibility and authority.

In commenting on a draft of our report, which provides the basis for my statement at today’s hearing, the Department of Homeland Security noted that it agreed with and supported our work and stated that it had already begun to address issues identified in our report’s findings. The FBI noted that the database state and local law enforcement agencies use for screening does not contain certain watch list records primarily to minimize instances of individuals being misidentified as subjects of watch list records. Because of this operational concern, the FBI asserted that the assessment called for in our recommendation has already been completed and that the vulnerability has been determined to be low or nonexistent. In our view, however, recognizing operational concerns does not constitute assessing vulnerabilities. Thus, while we understand the FBI’s operational concerns, we maintain that it is still important for the FBI to assess the extent to which security risks arise from not screening against certain watch list records and what actions, if any, should be taken in response. Also, the FBI noted that TSC’s governance board is the appropriate forum for obtaining a commitment from all of the entities involved in the watch-listing process.
However, as discussed in our report, TSC’s governance board is responsible for providing guidance concerning issues within TSC’s mission and authority and would need additional authority to provide effective coordination of terrorist-related screening activities and interagency issues governmentwide. The Homeland Security Council was provided a draft of the report but did not provide comments. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members have at this time. For questions regarding this testimony, please contact me at (202) 512- 8777 or [email protected]. Other key contributors to this statement were Danny R. Burton, Virginia A. Chanley, R. Eric Erdman, Michele C. Fejfar, Jonathon C. Fremont, Kathryn E. Godfrey, Richard B. Hung, Thomas F. Lombardi, Donna L. Miller, and Ronald J. Salo. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Federal Bureau of Investigation's (FBI) Terrorist Screening Center (TSC) maintains a consolidated watch list of known or appropriately suspected terrorists and sends records from the list to agencies to support terrorism-related screening. This testimony discusses (1) standards for including individuals on the list, (2) the outcomes of encounters with individuals on the list, (3) potential vulnerabilities in screening processes and efforts to address them, and (4) actions taken to promote effective terrorism-related screening. This statement is based on GAO's report (GAO-08-110). To accomplish the objectives, GAO reviewed documentation obtained from and interviewed officials at TSC, the FBI, the National Counterterrorism Center, the Department of Homeland Security, and other agencies that perform terrorism-related screening. The FBI and the intelligence community use standards of reasonableness to evaluate individuals for nomination to the consolidated terrorist watch list. In general, individuals who are reasonably suspected of having possible links to terrorism--in addition to individuals with known links--are to be nominated. As such, being on the list does not automatically prohibit, for example, the issuance of a visa or entry into the United States. Rather, when an individual on the list is encountered, agency officials are to assess the threat the person poses to determine what action to take, if any. As of May 2007, the consolidated watch list contained approximately 755,000 records. From December 2003 through May 2007, screening and law enforcement agencies encountered individuals who were positively matched to watch list records approximately 53,000 times. Many individuals were matched multiple times. The outcomes of these encounters reflect an array of actions, such as arrests; denials of entry into the United States; and, most often, questioning and release. 
Within the federal community, there is general agreement that the watch list has helped to combat terrorism by (1) providing screening and law enforcement agencies with information to help them respond appropriately during encounters and (2) helping law enforcement and intelligence agencies track individuals on the watch list and collect information about them for use in conducting investigations and in assessing threats.

Regarding potential vulnerabilities, TSC sends records daily from the watch list to screening agencies. However, some records are not sent, partly because screening against them may not be needed to support the respective agency’s mission or may not be possible due to the requirements of computer programs used to check individuals against watch list records. Also, some subjects of watch list records have passed undetected through agency screening processes and were not identified, for example, until after they had boarded and flown on an aircraft or had been processed at a port of entry and admitted into the United States. TSC and other federal agencies have ongoing initiatives to help reduce these potential vulnerabilities, including efforts to improve computerized name-matching programs and the quality of watch list data.

Although the federal government has made progress in promoting effective terrorism-related screening, additional screening opportunities remain untapped—within the federal sector, as well as within critical infrastructure components of the private sector. This situation exists partly because the government lacks an up-to-date strategy and implementation plan for optimizing use of the terrorist watch list. Also lacking are clear lines of authority and responsibility. An up-to-date strategy and implementation plan, supported by a clearly defined leadership or governance structure, would provide a platform to establish governmentwide screening priorities, assess progress toward policy goals and intended outcomes, consider factors related to privacy and civil liberties, ensure that any needed changes are implemented, and respond to issues that hinder effectiveness.
The Social Security Administration (SSA) operates the Disability Insurance (DI) and Supplemental Security Income (SSI) programs—the nation’s two largest federal programs providing cash benefits to people with disabilities. From 1985 through 1994, the number of working-age DI and SSI beneficiaries (aged 18 to 64) increased 59 percent, from 4.0 million to 6.3 million, and cash benefits (adjusted for inflation) increased 66 percent. This magnitude of growth has caused concerns that are compounded by the fact that less than half of 1 percent of DI beneficiaries ever leave the rolls by returning to work. In our recent study of SSA’s disability programs, we reported that despite the magnitude of program growth, SSA has not increased its emphasis on returning disability beneficiaries to the workplace. By contrast, the private sector, in response to growth in disability, has begun developing and implementing strategies to improve return-to-work programs for disabled workers. Moreover, the emphasis on return to work is not limited to the private sector in the United States—disability programs financed by social insurance systems in other countries also focus on return to work and have implemented practices similar to those in the U.S. private sector. This report focuses on identifying return-to-work practices in the private sector and other countries that may hold lessons for improving SSA’s return-to-work efforts.

Improving SSA’s return-to-work efforts has important implications not only for the individuals who can return to productive activity in the workplace, but also for controlling the costs of federal disability programs. SSA estimates that lifetime cash benefit payments are reduced by about $60,000 when a DI beneficiary leaves the rolls by returning to work and by about $30,000 when an SSI disability beneficiary does so. In comparison with the workers served by private sector programs, many people with disabilities served by SSA have little or no work history or current job skills. SSA also serves a population with a wide range of disabilities that often may be more severe than the disabilities of the average person served by private sector programs. For example, many workers served by private sector programs have short-term disabilities, which SSA’s programs do not cover. SSA serves people with long-term disabilities, many of whom have not been successful in returning to work through private sector programs. Thus, SSA may face greater difficulty in returning some of its clients to the workplace. However, the experiences of Germany and Sweden show that return-to-work strategies are applicable to a population with a wide range of work histories, job skills, and disabilities. Moreover, even relatively small gains in return-to-work successes offer the potential for significant savings in program outlays. For example, if an additional 1 percent of the 6.3 million beneficiaries were to leave SSA’s disability rolls by returning to work, lifetime cash benefits would be reduced by an estimated $2.9 billion (a rough arithmetic check appears below).

The magnitude of disability costs has caused growing concern in the private sector. Some disability-related costs borne by the private sector are more obvious than others. The most apparent costs include insurance premiums, cash benefits, rehabilitation benefits, and medical benefits paid through workers’ compensation and employer-sponsored disability insurance programs.
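As a rough check of the $2.9 billion estimate above (our back-of-the-envelope reading of the figures already cited, not SSA’s published derivation):

$$0.01 \times 6{,}300{,}000 = 63{,}000 \text{ beneficiaries}, \qquad \frac{\$2.9 \text{ billion}}{63{,}000} \approx \$46{,}000 \text{ per beneficiary}.$$

The implied average lifetime saving of roughly $46,000 per beneficiary falls between the $30,000 SSI and $60,000 DI figures, consistent with a caseload mix weighted toward DI.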
Workers’ compensation laws require employers to bear the cost of disabilities caused by an individual’s job, and some employers offer short-term insurance, long-term insurance, or both for disabilities not caused by the individual’s job. However, in addition to the costs of such programs, there may be other, less obvious costs, such as payments to employees who must work overtime, the added expense of training and using temporary workers, and retraining disabled employees when they return to work. Taking such costs into account, studies have estimated that the employer’s full cost of disability ranges from 6 to 12 percent of payroll.

At one time, the common business practice was to encourage someone with a disability to leave the workforce. In recent years, however, concern has grown about the effect of disability on costs, productivity, competitiveness, and employee and customer relations. As a result, the private sector has begun to develop and implement strategies for helping disabled workers return to work as quickly as possible. These efforts include intervening as soon as possible after a disabling event occurs, helping the worker set return-to-work goals, providing the services the worker needs to return to work, and offering incentives that encourage return to work. Similar approaches have also been implemented in the social insurance disability programs of other countries.

To develop information on private sector return-to-work practices for this report, we surveyed 21 people from the private sector recognized for their involvement in developing disability management programs that focus on return to work. As well as working to develop return-to-work programs within their own companies, all 21 have been actively involved in efforts by the Washington Business Group on Health or the Health Insurance Association of America to develop and promote such programs. As a group, these 21 individuals represented extensive experience in managing disability under workers’ compensation and disability insurance programs. We conducted in-depth interviews with five respondents to supplement the survey responses. (See app. I for a list of individuals contacted during our review.)

Technological, medical, and societal changes have increased the potential for more people with disabilities to work, and some SSA data indicate that as many as 3 out of 10 persons on the disability rolls may be good candidates for return to work. However, few beneficiaries ever leave the rolls by returning to work. For example, less than half of 1 percent of the beneficiaries have left the DI program annually during the last several years because they returned to work, according to SSA data. As we recently reported, SSA focuses little attention on returning beneficiaries to the workplace. SSA’s capacity to identify and to help expand beneficiaries’ productive capacities has been limited by weaknesses in the design and implementation of the DI and SSI programs. SSA does not have a system for functionally evaluating each individual’s return-to-work potential and identifying the return-to-work services needed by those who have the potential to return to the workplace. Instead, SSA’s primary focus is on processing disability applications to determine whether applicants meet disability criteria and then paying benefits to those found eligible. The DI and SSI programs pay disability benefits to people who have long-term disabilities.
To be eligible for benefits, an adult must have a medically determinable physical or mental impairment that (1) is expected to last at least 1 year or result in death and (2) prevents the individual from engaging in substantial gainful activity. Regulations currently define substantial gainful activity as work that produces countable earnings of more than $500 a month for disabled individuals and $960 a month for individuals who are blind. Furthermore, to qualify, an individual not only must be unable to do his or her previous work but—considering age, education, and work experience—must also be unable to do any other kind of substantial work that exists in the national economy.

Although both programs use the same definition of disability, they differ in important ways. Established under title II of the Social Security Act, DI is an insurance program funded by payroll taxes paid by workers and their employers into a Social Security trust fund. Similar to private long-term disability insurance programs, the DI program is for workers who have lost their source of income because of long-term disability. To be insured under DI, an individual must have worked for certain minimum periods with a specified minimum level of earnings in jobs covered by Social Security. Reflecting the program’s long-term disability character, DI benefits generally cannot begin until 5 months after the onset of disability. Medicare coverage is provided to beneficiaries 24 months after entitlement to DI cash benefits commences.

By contrast, the SSI program, established under title XVI of the Social Security Act, is not an insurance program and has no prior work requirements. Financed from general tax revenues, SSI is a means-tested income assistance program for disabled, blind, or aged individuals who have low income and limited resources, regardless of work history. Unlike the DI program, in which benefits generally cannot begin until 5 months after disability onset, SSI benefits begin immediately upon entitlement. In most cases, SSI entitlement makes an individual eligible for Medicaid benefits.

Because the SSI program is a means-tested income assistance program with no work history requirements, many of the beneficiaries it serves may have different characteristics than those served by private sector programs. By definition, individuals qualify for employer-sponsored disability benefits because they were employed at the time they became disabled. They therefore have recent work histories and current job skills when they apply for benefits. In contrast, many SSI applicants have little or no recent work history or current job skills. An SSA study in 1994 found that 42 percent of SSI applicants reported leaving their last job more than 12 months before applying for benefits, and another 27 percent said they did not know when they left their last job.

When individuals apply for DI or SSI disability benefits, SSA relies on state Disability Determination Services, agencies that are funded by SSA, to determine the medical eligibility of applicants. If found disabled, the beneficiary receives benefits until he or she dies, converts to Social Security retirement benefits at age 65, or is determined by SSA to be no longer eligible because of earnings or medical improvement. The law requires SSA to conduct a continuing disability review (CDR) at least once every 3 years to redetermine the eligibility of DI beneficiaries if medical improvement is possible or expected.
Otherwise, SSA is required to schedule a CDR at least once every 7 years.

SSA’s process for determining disability generally does not directly assess each applicant’s functional capacity to work. The Social Security Act defines disability in terms of the existence of physical or mental impairments that are demonstrable by medically acceptable clinical and laboratory diagnostic techniques. In implementing the act through its regulations, SSA has developed a Listing of Impairments (generally referred to as “the listings”) identifying some medical conditions that are presumed to be sufficient in themselves to preclude individuals from engaging in substantial gainful employment. The presumed link between inability to work and presence of such medical conditions establishes the basis for SSA’s award of disability benefits. According to SSA, the medical conditions identified in the listings serve as proxies for functional evaluations because such impairments are presumed to be severe enough to impose functional restrictions sufficient to preclude any substantial gainful activity. According to SSA data, about 70 percent of new awardees are found to be eligible because their conditions meet or equal listed impairments that serve as proxies for functional assessments of ability to work. Only the remaining 30 percent of new awardees are eligible because they have been further evaluated on the basis of separately developed nonmedical factors, including residual functional capacity, age, education, and vocational skills.

Relevant studies, however, indicate that the scientific link between medical condition and work incapacity is weak. While it is reasonable to expect that some medical impairments will completely prevent individuals from engaging in any minimal work activity (for example, those who are quadriplegic with profound mental retardation), it is less clear that some other impairments that qualify individuals for disability benefits completely prevent individuals from engaging in any substantial gainful activity (for example, those who are missing both feet). Moreover, while most medical impairments may have some influence over the extent to which an individual is capable of engaging in gainful activity, other factors—vocational, psychological, economic, environmental, and motivational—are often considered to be more important determinants of work capacity.

Beyond the issue of whether SSA’s eligibility determination process adequately assesses work capacity, the process itself diverts the applicant’s attention from the possibility of returning to work. Instead, the process focuses the applicant’s attention on proving that he or she is unable to work. From the moment an individual applies for disability benefits, SSA’s eligibility determination process (which can take from a minimum of several months to 18 months or longer for individuals who initially are denied and appeal) focuses on proving or disproving that the individual meets SSA’s disability definition, not on assessing how the individual could be helped to return to work. The eligibility determination process itself may erode motivation to work. By the time applicants are approved to receive benefits, they have been through a lengthy process that requires them to prove an inability to work; they have testified about their disabilities before program officials and the health care community; family and friends may have helped to demonstrate their work incapacity; and being out of the workforce may have eroded their marketability.
These factors are believed to reduce receptivity to any efforts aimed at returning to work.

The Social Security Act states that people applying for disability benefits should be promptly referred to state vocational rehabilitation agencies for services to maximize the number of such individuals who could return to productive activity. The Rehabilitation Act of 1973, as amended, authorizes the Department of Education’s vocational rehabilitation program, which provides federal funds to a network of state vocational rehabilitation agencies, to operate the country’s public vocational rehabilitation program. The federal share of funding for these services is about 80 percent; the states pay the balance. Under current procedures, the Disability Determination Service in each state decides whether to refer DI and SSI applicants to state vocational rehabilitation agencies, which in turn decide whether to offer them services such as guidance, counseling, and job placement, as well as therapy and training. In practice, the Disability Determination Services refer, on average, only about 8 percent of DI and SSI beneficiaries to state vocational rehabilitation agencies, and we have estimated that less than 10 percent of those referred actually were accepted as clients. In total, these state agencies have little impact on DI and SSI, successfully rehabilitating only about 1 out of every 1,000 beneficiaries, on average, each year.

State vocational rehabilitation agencies may be cautious about accepting DI beneficiaries because SSA does not contribute to the cost of services these agencies provide unless a beneficiary successfully returns to work. For payment purposes, SSA defines success as returning to work for 9 continuous months with earnings at the substantial gainful activity level, whereas state vocational rehabilitation agencies, on the basis of Rehabilitation Services Administration regulations, define success for all other clients as placing the individual in suitable employment, paid or unpaid, for 60 days. In early 1996, SSA began collecting information on the number of referrals from Disability Determination Services that the state vocational rehabilitation agencies accept. This step is the starting point of SSA’s implementation of new regulations allowing it to use vocational rehabilitation service providers other than state agencies.

Whether beneficiaries receive vocational rehabilitation services when such services would be most effective is also an issue. SSA does not have access to disabled workers until they come to SSA to apply for benefits. SSA survey results indicate that nearly half of DI and SSI applicants with work histories had not worked for more than 6 months immediately before applying to SSA for disability benefits. But even after they apply, vocational rehabilitation services can be delayed for long periods because, generally, SSA does not refer anyone for those services until he or she has been approved as a beneficiary—a process that can take several months and may take 18 months or longer.

DI and SSI disability beneficiaries may not view returning to work as an attractive option because, by doing so, they risk losing the security of a guaranteed monthly income and medical coverage. To reduce this risk, the Congress has established incentive provisions to safeguard cash and medical benefits while a beneficiary tries to return to work.
However, because of weaknesses in design and implementation, these incentives have not encouraged many beneficiaries to attempt to return to work. The work incentives do not appear sufficient to overcome the prospect of a drop in income for many who face low-wage employment or to allay the fear of losing medical coverage and possibly other federal and state assistance.

Private sector businesses underwrite all or part of two primary disability benefit programs for disabled workers: workers’ compensation programs and employer-sponsored disability insurance plans. Growing concerns about the magnitude of disability costs have prompted many in the private sector to turn their attention to developing approaches to manage disability. Advocates of disability management stress the need to develop an integrated approach to manage all types of disability cases, including workers’ compensation and employer-sponsored disability insurance.

Workers’ compensation programs are designed to provide medical care and cash benefits to replace lost earnings when workers are injured or become ill in connection with their jobs. Each state has enacted its own workers’ compensation requirements for people employed in that state. As of 1992, workers’ compensation laws covered about 88 percent of the nation’s wage and salary workers. Only in New Hampshire does the state law cover all jobs. Workers’ compensation programs are financed almost exclusively by employers and are based on the principle that the cost of work-related accidents is a business expense. Most states permit employers to carry insurance against work accidents with commercial insurance companies or to qualify as self-insurers by giving proof of financial ability to carry their own risk. States also may impose requirements that affect how employers and insurers manage workers’ compensation cases. For example, some states require that employers and insurers offer specified rehabilitation services, leaving disability managers with no discretion in deciding whether the services are needed.

A large majority of compensation cases involve temporary total disability, which means the worker is unable to work while recovering from an injury but is expected to recover fully. When it is determined that the worker is permanently and totally disabled for any type of gainful employment, permanent total disability benefits are payable. Both temporary and permanent total disability are usually compensated at the same rate, typically calculated as a percentage of weekly earnings—most commonly two-thirds of earnings. All programs, however, place dollar maximums on weekly benefits payable. When people receiving workers’ compensation benefits also qualify for DI benefits, SSA generally reduces their DI benefits by the amount of cash benefits they receive under workers’ compensation. But the number of people with reduced DI benefits is relatively small—in 1992, about 103,000 of about 3.2 million DI beneficiaries had their DI benefits reduced by the amount of their workers’ compensation benefits, according to the National Academy of Social Insurance.

While workers’ compensation replaces income lost because of work-related injuries and illnesses, some employers sponsor disability insurance plans that replace income lost because of other injuries and illnesses. These plans can provide short-term coverage, long-term coverage, or both. Employers who sponsor disability insurance plans either self-insure or use commercial insurers to provide coverage.
About 44 percent of all private employees have some type of short-term disability insurance that is provided and paid for, at least in part, by employers, according to National Academy of Social Insurance estimates based on Department of Labor data. Five states—California, Hawaii, New Jersey, New York, and Rhode Island—have mandatory temporary disability insurance programs that are financed by employers, employees, or both. These programs typically pay 50 percent of prior pay for 26 to 52 weeks when workers cannot perform regular or customary work because of a physical or mental condition. Employers may purchase sickness and accident insurance from commercial insurers or they may self-insure. Under short-term disability insurance, disability generally is defined as the inability to perform one’s own occupation, and benefit payments generally begin only a few days after the disability begins. Benefits usually last for up to 6 months and typically replace about 50 percent of the worker’s prior earnings.

About 25 percent of all private employees have some type of private long-term disability insurance that is paid for, at least in part, by employers, according to National Academy of Social Insurance estimates based on Department of Labor data. Private long-term disability benefits usually do not begin until about 3 to 6 months after the onset of disability, or after short-term disability benefits are exhausted. The benefits usually are designed to replace a specified percentage of predisability earnings—most commonly 60 percent. Although long-term plans may initially pay benefits based on the recipient’s inability to perform his or her own occupation, after 2 years they generally pay benefits only if the individual is unable to perform any occupation.

Private employees who have no employer-sponsored long-term disability insurance generally must look to SSA’s DI program as their primary source of disability assistance. Although some individuals may purchase their own individual disability insurance coverage, most individuals rely on the DI program for long-term disability benefits and medical coverage. The DI program is the safety net for people who are unable to work and have no other source of benefits or assistance in returning to work. Almost all private long-term disability insurance benefits are coordinated with DI benefits; that is, private benefits are reduced dollar for dollar by the amount of DI benefits (a worked illustration follows below). The rationale for reducing private benefits is to provide an incentive to return to work by paying only the targeted partial replacement of earnings. Also, reducing private benefits dollar for dollar against DI benefits can lower disability insurance premiums. As a result, it is common for private plans to require claimants to apply for DI benefits.

The disability programs financed by the social insurance systems in Germany and Sweden employ policies and practices that have been identified by the U.S. private sector and other experts as being key to disability management. Programs in both Germany and Sweden offer an array of services, assistance, and incentives to help people with disabilities remain at or return to work. Germany has a long-standing tradition of emphasizing rehabilitation over granting permanent disability benefits (more commonly referred to as pensions), and Sweden has only recently adopted an emphasis on returning people with disabilities to work.
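To make the dollar-for-dollar coordination described above concrete, consider a hypothetical worker (the $3,000 monthly predisability earnings and $800 DI benefit below are our illustrative assumptions, not figures from the report) under a plan that targets the common 60 percent replacement rate:

$$\text{target benefit} = 0.60 \times \$3{,}000 = \$1{,}800; \qquad \text{private plan payment} = \$1{,}800 - \$800_{\text{DI}} = \$1{,}000.$$

The worker still receives the targeted $1,800 a month in total, but $800 of it is shifted to DI, which is why private plans commonly require claimants to apply for DI benefits.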
German laws and policies stress the goal of “rehabilitation over pension.” This means that cash benefits are awarded only after it is determined that a person’s earning capacity cannot be restored by rehabilitation or other interventions. Under German social law, rehabilitation is an entitlement for people with physical or mental disabilities and for those threatened by such disabilities.

In Germany, disability pensions, rehabilitation, and other forms of return-to-work assistance are provided by a complex system of pension, employment, accident, and health (often referred to as sickness) insurance funds. For people with disabilities that resulted from work-related accidents or occupational diseases, accident insurance finances disability pensions as well as medical and vocational rehabilitation. Although most non-work-related disability pensions are paid by the pension insurance funds, most of the return-to-work assistance provided to people with disabilities is financed by employment insurance. However, to reduce the number of people requiring permanent disability benefits, the pension insurance funds pay for medical and vocational rehabilitation for individuals meeting certain work requirements. For those who have not worked, employment assistance is available from public social assistance and the employment office.

All disability pension applicants are considered for rehabilitation and for return to work. Those who are able to work in their former or similar occupations and earn at least half of the average income in that profession are not eligible for any pension, regardless of the disabling condition. If successful rehabilitation seems unlikely, or fails, the pension insurance funds may grant a full or partial pension on either a permanent or temporary basis to a person with reduced earnings capacity caused by a disability. Most disability pensions awarded in Germany are full and permanent. Full or “total disability” pensions are granted to people who can no longer engage in gainful employment. Partial or “occupational disability” pensions may be awarded to people who, for health reasons, can only earn less than half of the amount earned by a healthy person in the same or comparable occupation. A temporary “fixed-term” pension—either full or partial—may be awarded if there are reasonable grounds to believe that the reduced earnings capacity can be remedied within a foreseeable period.

The goal of Swedish disability policy is to provide people with disabilities the same opportunity as others for earning a living and participating in community life. Programs for assisting people with disabilities operate within the broader structure of the country’s universal social insurance system—providing protection against sickness, work injury, disability, old age, and unemployment—and its health and employment programs. Social insurance offices in Sweden are responsible for awarding disability benefits (or pensions) and, since 1992, for leading rehabilitation efforts. To facilitate rehabilitation, the social insurance offices have been allocated special funds for purchasing return-to-work services and assistance from either public or private sources.

Decision-making in Sweden’s social insurance system starts with the identification of individuals who may need rehabilitation or other forms of employment assistance to return to work. If, however, an individual is deemed unlikely to return to work, or if rehabilitation is unsuccessful, then a disability pension may be granted.
Disability pensions are based on reduced work capacity, not the presence of a particular illness or injury. Under Swedish law, permanent or temporary disability pensions can be awarded to individuals between the ages of 16 and 64 who, because of illness or other reductions in physical or mental performance, cannot support themselves through employment. If work capacity is permanently reduced by at least 25 percent, Swedish nationals may receive a basic disability pension, regardless of work history. Full, three-quarters, half, or one-quarter basic pensions may be granted to individuals with disabilities, depending upon the extent to which work capacity is reduced. In addition to a basic pension, an individual with a work history may also receive a supplementary pension based on employment time and earnings. Sweden also grants temporary disability pensions if the reduction in work capacity is not considered permanent.

A variety of other cash benefits may also be awarded in Sweden. Sickness benefits may be paid indefinitely to individuals with reduced work capacity. Pension supplements are available to those receiving only the basic pension or who have a low supplementary pension. Disability allowances provide compensation for extra costs that people incur from their disabilities. And rehabilitation allowances cover loss of earnings and certain kinds of expenditures for people participating in vocational rehabilitation.

The Chairman of the Senate Special Committee on Aging asked us to report on ways to improve SSA’s return-to-work efforts. To develop this information, we (1) identified key practices used by U.S. private sector companies to return disabled workers to the workplace and (2) obtained examples of how other countries’ social insurance programs approach returning people with disabilities to work (discussed in chs. 2, 3, and 4).

To develop the information on the private sector in this report, we interviewed officials of selected employers, insurers, and other organizations known for their leadership in disability management (see app. I). We reviewed documents they provided, and we also performed an extensive review of literature on disability management. In addition, through a mail survey, we obtained the views of 21 disability managers from companies or other organizations that are leaders in developing disability management programs. As a group, these 21 individuals represented extensive experience in managing disability under workers’ compensation and disability insurance programs. Of the 21 individuals, 8 had managed only disability insurance cases; 4 had managed only workers’ compensation cases; and 9 had managed both. We did not verify that the information reported in the responses to our survey was factually accurate, but we conducted extensive interviews with five respondents to supplement the survey responses.

Our survey instrument presented the respondents with a list of disability management practices and asked whether their current programs incorporated each practice. We then instructed them to assume they were designing a model disability management program and asked them to assess how important they believed each practice would be in their model program. We asked them to assess the importance of each practice on a scale of 1 to 5, with 1 equaling “not important” and 5 equaling “very important,” regardless of whether their current programs incorporated that practice.
Appendix II presents the survey instrument as well as data on how many respondents said their companies had incorporated each disability management practice in their programs. It also shows the mean rating of the importance that the respondents placed on including each practice in a model disability management program. The results of our survey represent the views of the disability managers who responded and should not be considered necessarily representative of the views of other disability managers. However, as we intended, the results illustrate what “leading edge” companies believe is important.

In addition, we obtained comments from disability managers of 15 companies on a summary of our analysis of private sector return-to-work practices. We asked them to assess the accuracy, completeness, objectivity, and soundness of our analysis. In general, they agreed with all aspects of our analysis, and we made only minor technical changes to this information based on their comments. A bibliography of the literature we used in our analysis of private sector disability management and a list of related GAO products are at the end of this report. While many in the private sector believe that their proactive return-to-work efforts have resulted in net dollar savings, there have been no rigorous studies that present conclusive data on the cost-effectiveness of disability management, particularly with respect to the extent to which specific components of return-to-work programs may be responsible for cost savings.

To obtain examples of how other countries’ social insurance programs approach returning people with disabilities to work, we did an extensive review of the literature on disability programs in other countries. To develop further information on return-to-work approaches in other countries, we interviewed a number of program officials and other experts on disability programs in Germany and Sweden, and reviewed the documents they provided. For each country, we obtained information on (1) program goals, benefits, and incentives; (2) early intervention efforts; (3) the type of return-to-work measures and services offered as well as how the assistance is provided and funded; (4) the eligibility decision-making process; and (5) how cases are managed when return-to-work services are provided. Appendix III lists the people we interviewed in Germany and Sweden.

We selected disability programs in Germany and Sweden for review because (1) both countries have political structures and standards of living, including the use of technology, similar to those in the United States, and (2) their disability programs have policies and practices that have been identified by the U.S. private sector and other experts as being key to disability management: early intervention and an emphasis on return to work through the provision and management of services, incentives, and rehabilitation. As with disability management programs in the U.S. private sector, social insurance programs in Germany and Sweden spend money on return-to-work efforts to reduce disability costs. However, in general, rigorous studies demonstrating the cost-effectiveness of programs in Germany and Sweden do not exist. Where appropriate, we discuss the few studies that have examined outcomes of certain practices. We did not independently verify the accuracy of the data used in this report. Except for this, our work was performed in accordance with generally accepted government auditing standards between February 1995 and March 1996.
Respondents to our private sector survey generally indicated they believe that early intervention is of paramount importance in returning disabled workers to the workplace. Early intervention involves the initiation of stay-at-work or return-to-work efforts as soon as possible after a disabling, or potentially disabling, event occurs. The respondents to our survey stressed the importance of several early intervention practices in their return-to-work programs (see table 2.1).

Disability management literature supports the respondents’ focus on early intervention, emphasizing that the longer an individual remains away from work because of a disabling condition, the less likely it is that the individual will ever return to work. One study emphasized that the timing of intervention is not a question of months, but of days or even hours after a disabling event occurs. The literature emphasizes that disability cannot be explained solely by a person’s medical condition and that the decision to return to work depends greatly on the disabled worker’s personal motivation. In this view, long absences from the workplace because of disability can lead to a disability mind-set—a condition of discouragement in which disabled workers, believing they will not be able to return to work, lose the motivation to try. Studies have shown that only one in two newly disabled workers who remain out on disability 5 months or more will ever return to work. According to one study, a key to disability management success is the immediate creation, or maintenance, of the expectation that an individual has the potential to work and will return to work.

Of the 21 respondents to our private sector survey, 18 stated they address return-to-work goals from the beginning of an emerging disability. When we asked the respondents to rate the importance of including this practice in a model disability management program, they gave goal-setting a high mean rating of 4.7 (on a scale of 1 to 5, with 1 equaling “not important” and 5 equaling “very important”). By contrast, return-to-work goals for SSA’s disability beneficiaries are not addressed, if they are addressed at all, until the eligibility determination process is completed, which takes a minimum of several months and can take 18 months or longer for individuals who are initially denied benefits and appeal.

Addressing return-to-work goals early requires that injuries and illnesses be reported quickly to disability managers. One workers’ compensation program manager, for example, told us that her company encourages reporting of injuries and illnesses within 24 hours. To encourage such prompt reporting, one of the company’s divisions has a policy of not charging any disability expenses to the manager’s profit and loss center if the injury or illness is reported within 24 hours. Another company instructs employees to report claims for all absences of more than 7 days to the company’s disability management team. We were told that a team then begins the process of developing a return-to-work plan in consultation with the employee and his or her treating physician rather than waiting until the employee is regarded as disabled.

Some respondents said they use disability duration guidelines as a tool for evaluating the expected length of an individual’s absence from work because of illness or injury. Such guidelines commonly are commercially produced compilations of medical data on the characteristic duration of different types of disabilities according to diagnoses, symptoms, and occupational factors.
For employers or insurers with large databases, duration guidelines can reflect actual experience in combination with medical and vocational research. The employer or insurer can use this information to work with the disabled individual and his or her physician to set a target date for return to work.

In Germany and Sweden, laws and policies require that an individual’s return-to-work potential be assessed soon after the onset of a disabling condition. Consequently, people with disabilities are generally considered for rehabilitation and return to work at relatively early stages in their contacts with the social insurance offices. In Germany, the health insurance funds generally inquire about the appropriateness of rehabilitation for individuals drawing sickness benefits for more than 10 weeks. In addition, vocational counselors often discuss rehabilitation and return-to-work plans with work accident or occupational illness victims while they are still in the hospital. And everyone applying for a disability pension in Germany is considered by the pension insurance funds for rehabilitation and return to work before being determined eligible for permanent benefits.

Under Swedish laws and policies, both the private and public sectors are responsible for the early identification of candidates for rehabilitation and return to work. Since 1992, employers have been responsible for investigating whether employees who receive sickness benefits for 4 weeks or who are absent from work frequently because of illness need some type of rehabilitation. Employers are also responsible for arranging for a rehabilitation examination and reporting this to the social insurance office. When employers disregard their responsibilities, Sweden’s social insurance offices arrange for the examination and start planning rehabilitation for the disabled workers. Because the social insurance offices monitor sickness benefits, they are able to identify who may need rehabilitation or other forms of employment assistance. After someone has received sickness benefits for about 4 weeks (28 to 30 days), a social insurance office begins the process of assessing whether the person will need vocational rehabilitation to return to work.

Consistent with the early intervention emphasis, most respondents to our survey stated they believe it is important to provide rehabilitation services from the onset of disability. Such services, which are intended to restore an individual’s health, functional capacities, or ability to engage in useful and constructive activity, fall into two basic categories: medical and vocational. Medical rehabilitation involves physical and mental care services, while vocational rehabilitation includes services such as vocational assessment, labor market surveys, developing alternative work plans, retraining, and assistance with job-seeking skills. Vocational rehabilitation focuses primarily on helping individuals with disabilities enter a different job or career. The respondents to our survey tended to view medical rehabilitation as having higher priority than vocational rehabilitation during the early stages of a disability. Of the 21 respondents, 18 said they provide medical rehabilitation services from the onset of disabilities, but only 12 said they provide vocational services from onset.
Similarly, in rating the importance of rehabilitation services in a model disability management program, the respondents’ mean rating for providing medical services from onset was 4.3, compared with a mean rating of 3.7 for providing vocational services from onset. The respondents’ preference for medical before vocational rehabilitation services in the early stages of disability is not surprising. All 21 respondents to our survey said that their initial goal is to return the worker to the same job he or she was doing before the disabling event. During follow-up interviews, several respondents stated that workers who have the potential to return to their old jobs generally need only medical services to go back to work, but it is important that these medical services be provided as early as possible. When it appears the worker will be unable to return to the same job, disability managers turn to vocational services, which focus more on assisting the disabled employee to enter a different job or career.

Most individuals who apply to SSA for disability benefits are not working, but SSA’s focus is not on returning them to work. The agency’s efforts instead focus on determining their eligibility for cash benefits. Assessment for vocational rehabilitation services to enable return to work occurs, if at all, after the eligibility determination process is completed, which, as mentioned before, sometimes takes 18 months or longer.

In Germany and Sweden, laws and policies emphasize providing return-to-work services and assistance at the earliest appropriate time. Similar to the private sector in this country, a guiding principle of Germany’s social insurance system is that intervention should occur at the earliest possible stage of disability to minimize the degree and effects of the disability. Intervention often begins when the treating physician, one of the insurance agencies, or the employer urges a person receiving sickness benefits to apply for medical rehabilitation. Ability and capacity to work are assessed at this time. Following medical rehabilitation, where warranted, the person is referred to vocational rehabilitation or other types of return-to-work services and assistance. Disability pensions are not awarded until it has been determined that the person’s earning capacity cannot be restored through rehabilitation.

In Sweden, as mentioned before, employers are responsible for the early identification of workers who need rehabilitation and for taking early intervention steps. Employers often fail to do this, however, and the social insurance offices, which closely monitor the use of sickness benefits, intervene. After someone has received sickness benefits for about 4 weeks, the social insurance office collects information from the person’s doctor or employer to determine whether vocational rehabilitation will be needed for return to work. The goal of the social insurance office is to make this decision within the next 2 weeks. If such assistance is warranted, the social insurance office may purchase vocational rehabilitation and related employment services. If, after receiving such services, the person does not return to work and still has the disabling condition, he or she can continue to receive sickness benefits. After 12 to 13 months of receiving these benefits, a decision is made to grant the person either a permanent disability pension or a temporary pension and possibly more vocational services.
An official at the National Social Insurance Board in Sweden has concluded that early intervention pays for itself. His study found that early screening and contact with clients and employers, greater attention to diagnoses, and close cooperation among the social insurance offices and the medical and vocational rehabilitation providers reduced social insurance costs by returning people to the workplace sooner. The study noted that the reduction in sick leave and the probable accompanying increase in days worked was more than sufficient to pay for the increased administrative costs. This same official told us that simply intervening with a phone call on the 14th day of sickness benefit receipt saves the social insurance offices money.

To help maintain motivation to return to work, respondents to our survey indicated they believe it is important to establish early contact and to stay in touch with disabled workers. Of the 21 respondents to our survey, 19 stated they maintained communication with workers who are hospitalized or recovering at home. When asked to rate the importance of including this type of communication in a model disability management program, the respondents gave it a mean rating of 4.7. Contacting a worker soon after an injury or illness and then continuing to communicate with the worker is important because the worker needs to be reassured that there is a job to return to and that the employer is concerned about his or her recovery. Such reassurances can help maintain motivation to return to work.

One disability manager stated that her company contacts workers within 24 hours of a reported illness or injury and recontacts them every 2 weeks by telephone. Another stated her company’s case managers are required to contact workers at least once a week. The person responsible for maintaining communication varied from company to company. One respondent said that in her company a registered nurse case manager contacts hospitalized workers before they return home, and the case manager maintains contact until the disabled worker returns to full duty. She said the first week after an injury is a window of opportunity that is critical to minimizing a worker’s time lost from work. In other instances, one company uses a disability management vendor to maintain contact, and another stresses that the worker’s supervisor maintain contact. Depending on whether a company is self-insured or insured by a commercial carrier, contacts with disabled workers may also be maintained by insurance company personnel.

By contrast, SSA’s contacts with disability applicants are limited to efforts to obtain the evidence needed to determine eligibility for cash benefits. Rather than encourage the applicant to return to work, these contacts probably serve only to strengthen the applicant’s resolve to prove he or she is disabled.

In both Germany and Sweden, insurance offices contact individuals receiving sickness benefits to determine whether they will be able to return to work without intervention or whether they will need some type of assistance to do so. As mentioned, workers in Germany who draw sickness benefits for longer than 10 weeks are generally contacted by the health insurance funds or their employer to inquire about the appropriateness of rehabilitation measures. In Sweden, social insurance offices telephone workers after they have received sickness benefits for 14 days to determine what, if anything, needs to be done to get them back to work.
Not only must rehabilitation services be provided at the earliest appropriate time, but disability managers must also ensure that the services are appropriate for each individual. The respondents to our survey generally told us they attempt to provide return-to-work assistance that is tailored to the individual and that they manage disability cases with a view toward achieving return-to-work goals. This approach seeks to avoid unnecessary expenditures while investing in cost-effective techniques for achieving return-to-work goals for disabled workers. Respondents to our survey told us they employ several key practices in identifying and providing appropriate services and managing their return-to-work programs (see table 3.1).

Of the 21 respondents to our survey, 20 stated that they assess return-to-work potential early in the process. As some respondents emphasized, return-to-work potential is not determined merely by a medical diagnosis showing the presence of an impairment but, rather, by functionally evaluating each individual's capacity to work after his or her medical condition has stabilized. When we asked the respondents to rate the importance of including early assessment of return-to-work potential in a model disability management program, they gave it a mean rating of 4.8 on a scale of 1 to 5.

By contrast, SSA's process for determining disability generally does not directly assess each applicant's functional capacity to work. Instead, as mentioned before, SSA's evaluation process presumes that certain medical conditions are in themselves sufficient to preclude work. SSA enumerates such medical conditions in its Listing of Impairments. These listings serve as proxies for functional evaluations, identifying impairments that are presumed to impose functional restrictions sufficient to preclude any gainful activity. About 70 percent of new awardees are eligible because their conditions meet or equal listed impairments that are presumed to be disabling. The remaining 30 percent of new awardees are eligible because they have been further evaluated on the basis of separately developed nonmedical factors, including residual functional capacity, age, education, and vocational skills.

Fifteen of the 21 respondents to our survey also stated their return-to-work programs attempt to provide services at the earliest appropriate time. In rating the importance of including vocational services in a model disability management program, the respondents gave this practice a mean rating of 4.4. However, 12 respondents said that as part of their effort to provide appropriate services, they provide these services only to individuals who are deemed likely to return to work. The motivation for this approach is to avoid investing funds in vocational services when the risk is high that a disabled worker will not return to work even after receiving vocational services. Some companies have begun developing profiles of characteristics that help them identify the disabled workers who are most likely to benefit from vocational rehabilitation services and return to work. For example, two insurers we contacted had studied thousands of long-term disability cases and developed profiles that include, among other factors, age, gender, marital status, whether the disability was caused by accident or illness, whether the disability occurred on the job, and type of disability.
Using such a profile, one insurer categorizes each long-term disabled worker in one of three groups: (1) those who are unlikely to return to work regardless of whether they receive vocational rehabilitation services, (2) those who are likely to return to work but do not need rehabilitation services to do so, and (3) those who are likely to return to work but need rehabilitation services to do so. The company focuses its attention on individuals in the third group because they have the greatest potential for cost-effective use of rehabilitation resources. This approach results in a relatively small proportion of beneficiaries receiving rehabilitation services. Officials of insurance companies we contacted estimated that about 3 to 7 percent of their long-term disability beneficiaries receive vocational rehabilitation services.

These companies expect to save more than they spend on their investment in rehabilitation services. For example, one insurance company reported that for every dollar spent on rehabilitation, it had saved an average of $10 in long-term disability reserves and expected the savings ratio to increase as the company gained experience in identifying the people most likely to benefit from rehabilitation services. Another insurance company reported average savings of $35 in long-term disability reserves for every dollar spent on rehabilitation services.
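The triage logic and the reported savings ratios lend themselves to a simple illustration. The sketch below is hypothetical: the two case flags and the $100,000 rehabilitation budget are illustrative assumptions, not the insurers' actual screening models; only the three-group logic and the $10 and $35 ratios come from the discussion above.

```python
# Minimal sketch of the insurers' three-group triage and the reported
# savings ratios. The case flags and the budget figure are hypothetical;
# only the grouping logic and the $10/$35 ratios are from the text.

def triage_group(likely_to_return: bool, needs_rehab: bool) -> int:
    """Assign a long-term disability case to one of the three groups."""
    if not likely_to_return:
        return 1  # unlikely to return to work regardless of services
    return 3 if needs_rehab else 2  # group 3 gets rehabilitation resources

cases = [(False, False), (True, False), (True, True)]  # hypothetical cases
for likely, needs in cases:
    print(f"likely={likely}, needs_rehab={needs} -> group {triage_group(likely, needs)}")

# Reported return on each rehabilitation dollar, applied to a
# hypothetical $100,000 rehabilitation budget.
budget = 100_000
for ratio in (10, 35):  # savings ratios reported by the two insurers
    print(f"${budget:,} spent -> ${budget * ratio:,} saved in long-term disability reserves")
```

Under these assumptions, rehabilitation dollars flow only to group 3, which is what allows the reported savings ratios to be so high.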
In Germany and Sweden, return-to-work services and assistance are fairly extensive and tailored to meet individual needs. An individual may receive a combination of different benefits and services, such as medical or vocational rehabilitation, employment or social assistance, as well as cash assistance while applying for or participating in rehabilitation. As noted in chapter 1, rehabilitation is an entitlement in Germany. Vocational assistance measures include assistance in retaining or obtaining a job (including grants to the employer); assistance in selecting an occupation (including work trials or sheltered workshops); basic training and retraining to prepare for an occupation (including basic education necessary to attend more advanced training courses); workplace adaptations; and wage subsidies for employees who are difficult to place. The duration of vocational assistance varies greatly and can last as long as 2 years for basic training or retraining programs. The person's aptitude, inclinations, and former occupations, as well as labor market conditions, are taken into account when accepting an individual into a vocational retraining program.

Providing appropriate return-to-work assistance to people with disabilities is viewed as a cost-effective investment in Germany. Officials we interviewed noted that placement rates for individuals who completed vocational retraining have been fairly high, although there are no quantitative data documenting overall cost-effectiveness. Surveys in Germany have found that about 80 percent of former trainees were working 1 year after completing their vocational retraining, and these results have remained steady over a number of years for a wide range of occupations. However, some retraining centers have waiting lists in certain vocational areas. For example, we were told that a Frankfurt retraining center had a 1- to 2-year waiting list for those to be retrained as office workers.

Swedish laws and policies that address people with disabilities, as well as the country's generous package of noncash benefits and services, are aimed at helping individuals remain at or go back to work. To make the workplace accessible, employers by law must adapt working conditions, including the organization of work, to suit the needs of those with functional impairments. Government subsidies may be disbursed to employers who adapt their workplaces to the special needs of a person with a functional disability, install technical aids, or engage a personal assistant for a worker with a disability. In addition, under a law that took effect January 1, 1994, people who have severe functional disabilities and who need help with certain daily activities are entitled to personal assistance. In Sweden, people with disabilities have, like others, the right to assistance from the regular employment office in finding employment. Employment assistance measures include assessment of working capacity, occupational rehabilitation, vocational guidance, subsidized employment, sheltered employment, on-the-job training, and probationary employment at companies that agree to such arrangements. Rehabilitation is not meant to be a lengthy process, but rather a short, intensive period of medical, social, and work-related training to help the individual return to work as soon as possible.

All but one of the 21 respondents to our survey said they offer transitional work opportunities to help disabled workers ease back into the workplace. Transitional work (also known as modified work or light duty) involves changing the work environment to allow an employee who has been disabled to return to work at a job that is less physically or mentally demanding than his or her previous assignment. When asked to rate the importance of including transitional work opportunities in a model disability management program, the respondents gave it a mean rating of 4.8. Workplace modifications that provide transitional work opportunities may include job restructuring, assistive devices, workstation modifications, reduced hours, or reassignment to another job. For example, one respondent said that reducing the worker's hours is typically her company's first approach. Another said that in her company's restaurant operations, employees are cross-trained so they can exchange positions or shift tasks if one of them, for instance, is experiencing back problems.

The Americans With Disabilities Act (ADA) requires an employer with 15 or more employees to make "reasonable accommodations" for the known disability of an applicant or employee unless doing so would impose an "undue hardship" on the employer. A reasonable accommodation could include reassigning an employee to another job. Three insurance companies stated that although not obligated to do so under ADA, they had paid for workplace modifications for disabled beneficiaries formerly employed by firms that provided disability coverage through these insurance companies. The insurance companies viewed these expenditures as cost-effective investments because benefit payments to these beneficiaries were reduced or eliminated after the beneficiaries returned to work. One of these insurance companies often contracts to spend up to $2,000 on workplace modifications on behalf of a disabled beneficiary. In some circumstances, however, the company has spent more than $2,000 on modifications to help an individual return to work. By contrast, SSA does not promote the provision of job accommodations that could enable an individual to return to work. In both Germany and Sweden, transitional work opportunities may be arranged for people with disabilities.
Such transitional work may be considered for people with disabilities who can return to work part-time and gradually increase their daily work hours until they reach their maximum work capacity. In Germany, such a gradual return to the original job is a formalized process known as stepwise reintegration, and it is implemented under the guidance of the treating physician and the company’s doctor. In Sweden, transitional opportunities include the adaptation of working conditions to suit the needs of people with functional impairments, trial work, on-the-job training, and part-time work leading to full-time work. Most respondents to our survey (20 of 21) said they use disability case management techniques, when appropriate, to help disabled workers return to work. When asked to rate the importance of including case management in a model disability management program, respondents to our questionnaire gave it a mean rating of 4.5. By contrast, under current procedures, SSA does not assess which cases may warrant case management. Although disability case management may be defined and implemented differently by different companies, it generally can encompass identifying, evaluating, and coordinating the delivery of return-to-work services, including social, health care, and rehabilitation services. The case manager may do such things as help the individual understand or obtain transitional work opportunities or assist in talking with the individual’s doctor about treatment and recovery. Although most respondents believe case management is important, they have implemented it in different ways. For example, some respondents employ their own staff of case managers, but others rely on the staffs of their disability insurers or third-party administrators. Furthermore, respondents differed in how they assign case managers. One self-insured employer, for example, assigns someone from its disability management team to act as case manager on every disability case, regardless of whether the case involves workers’ compensation or short-term or long-term disability insurance. But in another instance, a disability insurance company determines on a case-by-case basis whether the case is complex enough to warrant a case manager. Disability managers we contacted told us their case managers typically have caseloads of no more than 50 disabled workers. When workers are determined to have rehabilitation potential, case managers continue to manage their cases for extended periods, for example, up to 2 years. In Germany, two national officials we interviewed stated that the accident insurance program (similar to workers’ compensation in the United States) is viewed as being more effective than the pension insurance office in returning people with disabilities to work. The program is more successful, in part, because it assigns individual advisers (or case managers) soon after the onset of a disabling condition. Almost all respondents to our survey (19 of 21) said they attempt to ensure that medical providers understand the disabled worker’s essential job functions because the treating physician’s decision to release the worker affects the timing of the worker’s return to the workplace. When asked to rate the importance of this practice in a model disability management program, the respondents gave it a mean rating of 4.6. By contrast, SSA generally contacts treating physicians only to request medical information needed to determine whether applicants meet disability eligibility criteria. 
In the view of private sector disability managers, it is important not only that the physician understand the disabled worker's essential job functions, but also that the physician understand the impact of any transitional work opportunities or other job accommodations that the employer is willing to provide. Otherwise, the physician may not release the individual to return to work until he or she can function at predisability levels. As some disability managers told us, actions taken to ensure that medical providers understand the essential job functions and focus on return-to-work issues should be viewed as part of the early intervention strategy. At one of the respondents' companies, for example, a supervisor accompanied employees with occupational injuries on the first visit to a physician. And at some respondents' organizations, case managers communicate with treating physicians to make sure the physicians understand the essential job functions of disabled workers. Others said they try to direct disabled workers to physicians who are familiar with their companies' operations. Several respondents said their companies sometimes provide treating physicians with videotapes of the actual job functions that would be expected of disabled workers. Also, to provide physicians with general familiarity with the jobs performed by workers, two respondents said their companies take physicians on tours of company facilities.

Some disability managers told us they have concerns about the degree to which the medical community focuses on return-to-work issues. They believe physicians should proactively address the question of return to work with injured and ill workers. However, in their view, medical training in the United States does not sufficiently emphasize the desirability of disabled workers' returning to work at the earliest appropriate time. As a result, these disability managers believe physicians generally give insufficient priority to return-to-work issues.

Most respondents to our survey believed that return-to-work efforts are enhanced by organized systems of care. An organized system of care gives companies greater opportunity to educate physicians in the requirements of jobs performed by the companies' workers. In addition to focusing on care, health care providers in an organized system of care can collaborate with employers on setting return-to-work expectations for members who become disabled. Of the 21 respondents, only 8 said they currently use an organized system of care as part of their strategy for returning disabled workers to the workplace. However, when asked to rate the importance of including an organized system of care in a model disability management program, 16 of the 21 respondents gave it a rating of 4 or 5.

In Germany, physician education plays an important role in the rehabilitation and return-to-work process. The Federal Rehabilitation Council issues guidelines for doctors to follow during the rehabilitation process. Among other things, the guidelines describe the duties of the doctor while his or her patient is undergoing rehabilitation (medical and vocational), and they inform the doctor about the various rehabilitation centers and the specialized equipment available. Moreover, the guidelines stress the importance of working closely with employment office officials so that a disabled worker may keep a job or find a new one, depending on the person's residual functional capacities.
Respondents to our survey generally told us they believe it is important that the cash and medical benefits structure provide incentives for disabled workers to return to work. However, as some respondents emphasized, such work incentives by themselves are not sufficient to make a return-to-work program successful. Incentives must be part of an integrated strategy that includes effective early intervention and the identification, provision, and management of return-to-work services. The respondents to our survey indicated several key practices in providing work incentives (see table 4.1).

As we reported recently, work incentives available to DI and SSI beneficiaries do not appear sufficient to make returning to work an attractive option. By returning to work, beneficiaries risk losing the security of a guaranteed monthly income and medical coverage. Weaknesses in the design and implementation of the work incentives have made these provisions ineffective in overcoming the prospect of a drop in income for many who face low-wage employment or in allaying the fear of losing medical coverage.

When asked to rate how important it would be for a model program to include a cash benefit structure that encourages return to work, the respondents gave this practice a relatively high mean rating of 4.4; however, only 14 of the 21 respondents said that their current cash benefits structure actually provides an incentive to return to work. The following are examples of how some respondents' companies structure cash benefits to make returning to work more financially attractive than remaining away from work:

- While away from work, the disabled worker receives disability benefits equivalent to 60 percent of predisability earnings. If the individual returns to work, his or her earnings are supplemented by an incentive benefit amount so that total income can be considerably higher than the disability benefits the worker was receiving. The worker continues to receive an incentive benefit until his or her earnings reach 80 percent of predisability earnings.
- If a disabled worker returns to work, he or she continues to receive unreduced disability benefits for 1 year, unless the total of earnings and benefits would be greater than the individual's predisability earnings. After 1 year, the worker continues to receive disability benefits, but these benefits are reduced by an amount equal to 70 percent of the worker's earnings. (This structure is illustrated in the sketch following this list.)
- Disabled workers are allowed a trial work period, usually 6 months, during which long-term disability benefits can be reinstated without reapplication if the worker cannot remain at work.
- If a disabled worker returns to work, he or she can receive up to $350 per month for each family member to cover family care expenses.
- Under certain conditions, an insurance company will reimburse a claimant for moving expenses incurred in relocating to take a job.
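The second example above is specified precisely enough to compute. The sketch below is a minimal illustration: the 60-percent replacement rate is borrowed from the first example (the second does not state its own rate), and the earnings figures are hypothetical; only the first-year cap and the 70-percent offset come from the respondent's description.

```python
# Minimal sketch of the second cash-benefit structure described above.
# The 60-percent replacement rate and the dollar amounts are assumptions;
# only the first-year cap and the 70-percent offset are from the text.

def monthly_benefit(predisability_earnings: float, current_earnings: float,
                    months_back_at_work: int,
                    replacement_rate: float = 0.60) -> float:
    full_benefit = replacement_rate * predisability_earnings
    if months_back_at_work <= 12:
        # Year 1: unreduced benefit, capped so that earnings plus benefit
        # do not exceed predisability earnings.
        return min(full_benefit, max(0.0, predisability_earnings - current_earnings))
    # After year 1: benefit reduced by 70 cents for each dollar earned.
    return max(0.0, full_benefit - 0.70 * current_earnings)

# A worker with $3,000/month predisability earnings returns at $2,000/month.
for month in (6, 18):
    benefit = monthly_benefit(3_000, 2_000, month)
    print(f"month {month}: benefit ${benefit:,.0f}, "
          f"total income ${2_000 + benefit:,.0f} (vs. $1,800 if not working)")
```

Under these assumptions, the worker's total income exceeds the benefit-only amount in both periods, which is the incentive the structure is designed to create.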
As mentioned, the respondents indicated they believe it is highly important to structure cash benefits to provide an incentive to return to work; however, we noted that their mean rating for this practice was slightly lower than the mean ratings they gave to other return-to-work practices they considered important, such as maintaining communication, setting return-to-work expectations as early as possible, ensuring that medical service providers understand essential job functions, and providing transitional work opportunities. This pattern highlights, as some respondents commented, that although financial incentives are important, a successful return-to-work program must effectively integrate financial incentives with other important practices.

Disability management literature supports the view that the cash benefits structure can affect the disabled worker's attitude toward returning to productive activity in the workplace. Short-term disability insurance generally replaces 40 to 70 percent of earnings for periods ranging from 30 days to 6 months, whereas long-term disability insurance usually replaces about 60 percent of prior earnings, with maximum limits on monthly benefits, for periods that can extend to retirement or longer. Studies show that if disability benefits are too generous, the benefits can create a disincentive for participating in return-to-work efforts. For example, studies of workers' compensation programs have concluded that the larger the percentage of original wages that is paid to disabled workers, the more difficult it is to bring them back to work.

In Germany, we found that the social insurance programs offer financial incentives to encourage individuals with disabilities to participate in rehabilitation programs and return to work. As mentioned before, individuals who are considered good candidates for rehabilitation are not awarded disability pensions. Instead, to encourage participation in rehabilitation, they can receive a cash benefit that is greater than unemployment or welfare allowances. Depending on individual circumstances, expenses for room and board, household assistance, travel, and other expenses incurred while undergoing medical or vocational rehabilitation may also be covered. However, one official we interviewed stated that economic incentives are limited. In his view, the key to encouraging return to work is the individual's motivation and positive perspective, and the disability program's processes must be designed to reinforce that motivation. Germany's process is designed to identify individuals who are good candidates for rehabilitation before they are awarded disability pensions.

In Sweden, individuals with return-to-work potential may be awarded only a temporary disability pension. This time-limited benefit is awarded if the individual's reduced work capacity is not considered permanent but is expected to continue for a significant period (as a rule, a minimum of 1 year). To encourage such individuals to participate in vocational rehabilitation, Sweden provides a rehabilitation allowance, which includes a benefit to cover loss of earnings, and a special grant to cover certain kinds of expenses connected with rehabilitation. Because Sweden's permanent disability pensions replace a high proportion of income, some workers may consider it more attractive to avoid rehabilitation and try to obtain a permanent pension. Currently, permanent disability pensions replace 65 to 70 percent of income for individuals who receive both a basic and a supplementary pension on the basis of having a work history. Supplemental collective bargaining agreements add another 10 to 20 percent to the earnings replacement.

Discussions of SSA's return-to-work efforts often emphasize that beneficiaries are reluctant to return to work because they fear losing their premium-free Medicare or Medicaid benefits.
By contrast, in the private sector, medical benefits provide an incentive to return to work because it is by returning to work that disabled workers can be most assured of retaining these benefits. Respondents to our survey, when asked to rate the importance of including continuation of medical benefits in a model disability management program, gave this practice a mean rating of 4.1. In the private sector, disabled workers jeopardize their medical benefits by remaining away from work because employers eventually may terminate their employment. If terminated, such individuals may no longer be enrolled in the employer-sponsored health plan. If they later go back to work with a new employer, the new employer may not offer employer-sponsored medical benefits, or the employee may be excluded from coverage because of preexisting conditions. These possibilities give a disabled worker an incentive to return to a job with his or her old employer. In contrast, in the DI and SSI programs, beneficiaries face the loss of premium-free Medicare or Medicaid benefits if they return to work, and moreover, the job they get may not offer medical benefits or may not provide coverage because of preexisting conditions. This discourages DI and SSI beneficiaries from returning to the workplace. DI beneficiaries who return to work can receive premium-free Medicare benefits for 39 months following a trial work period; however, to retain coverage thereafter, they must pay the same monthly cost as uninsured retired beneficiaries. SSI beneficiaries can continue receiving Medicaid coverage after their earnings become too high to allow a cash benefit, but coverage ends when their earnings reach a higher threshold amount that varies from state to state. For example, the threshold amount in 1994 was $17,480 in Pennsylvania and $22,268 in California. In Germany and Sweden, loss or retention of health care insurance is not an issue in a worker’s decision on whether to participate in rehabilitation or attempt returning to work. The individual will continue to belong to the compulsory insurance system that provides sickness and disability protection. Only 12 of the 21 respondents to our survey said their organizations have contractual provisions that can require disabled employees to cooperate in return-to-work efforts as a condition of eligibility for disability insurance benefits. When asked to rate the importance of including this requirement in a model disability management program, however, the respondents gave it a mean rating of 4.1. This relatively high rating is consistent with one study that found that return-to-work efforts cannot be nurtured in an environment in which, among other things, participation in a vocational rehabilitation program is entirely voluntary. Some respondents stated that the ability to require cooperation as a condition of eligibility for benefits is important because it can help motivate an individual with a disability to try to return to work. At the same time, however, some respondents cautioned that such a requirement must be invoked carefully because a company could spend money on return-to-work efforts for individuals who participate because they feel compelled but ultimately do not return to work because of a basic lack of motivation. The Social Security Act provides for withholding benefits if a beneficiary refuses without good cause to accept rehabilitation services. 
In Germany and Sweden, individuals may also be denied benefits for not participating in or cooperating with rehabilitation when it is recommended by one of the insurance offices. For example, the pension insurance funds in Germany can deny an individual rehabilitation benefits or a disability pension if he or she does not participate in or sufficiently cooperate with the recommended rehabilitation program. Similarly, if someone refuses to participate in training because that person would rather receive an unemployment benefit than undergo rehabilitation, the employment office can stop his or her benefits. The social insurance offices in Sweden may also revoke benefits, including pension benefits, for those who refuse to participate in vocational rehabilitation. We do not have information on the extent to which these provisions are actually invoked in Germany and Sweden.

Disability managers we surveyed spend money on return-to-work efforts because they believe such efforts are good investments that reduce disability-related costs. Social insurance programs in Germany and Sweden also spend money on return-to-work efforts to reduce disability costs, and their goals stress the importance of work in integrating people with disabilities into the broader social community. Improving the success of SSA's return-to-work efforts offers great potential for reducing federal disability program costs while helping people with disabilities return to productive activity in the workplace. If an additional 1 percent of the 6.3 million DI and SSI working-age beneficiaries were to leave the disability rolls by returning to work, lifetime cash benefits would be reduced by an estimated $2.9 billion. With such large potential savings, return-to-work services could be viewed as investments rather than as program outlays.
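The arithmetic behind this savings estimate is straightforward; the sketch below is a back-of-the-envelope check that uses only the figures reported above.

```python
# Back-of-the-envelope check of the savings estimate above, using only
# the figures reported in the text.
beneficiaries = 6_300_000              # DI and SSI working-age beneficiaries
leavers = round(0.01 * beneficiaries)  # an additional 1 percent leaving the rolls
total_savings = 2.9e9                  # estimated reduction in lifetime cash benefits

print(f"{leavers:,} beneficiaries leaving the rolls")       # 63,000
print(f"~${total_savings / leavers:,.0f} in lifetime cash "  # roughly $46,000
      "benefits per person")
```

In other words, the estimate implies roughly $46,000 in lifetime cash benefits for each beneficiary who returns to work.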
In our current study of return-to-work practices, we identified three basic strategies employed in the U.S. private sector as well as in social insurance programs in Germany and Sweden. These strategies, which must be integrated to form a comprehensive return-to-work program, are as follows:

- Provide services and assistance sooner rather than later to promote and facilitate return to work.
- Identify and provide necessary return-to-work assistance and manage cases to achieve goals.
- Structure cash and medical benefits to encourage return to work.

Lessons from the private sector and other countries' social insurance programs argue for SSA placing greater priority on assessing return-to-work potential soon after individuals come to SSA and apply for disability benefits. Currently, when an individual applies for DI or SSI benefits, SSA's priority is to determine eligibility for cash benefits. The need for medical and vocational rehabilitation is not addressed until after applicants have been approved to receive cash benefits, which can take up to 18 months or longer from the time an application is filed.

In conjunction with making an early assessment of return-to-work potential, SSA needs to place greater priority on identifying and providing, at the earliest appropriate time, the medical and vocational rehabilitation services needed to return to work. Currently, SSA bases 70 percent of its awards on whether an individual's medical symptoms, signs, and diagnostic results match impairments in SSA's Listing of Impairments that are presumed to prevent work. It does not evaluate whether these people could return to work if given appropriate assistance. To improve return-to-work outcomes and to identify the services needed, SSA needs to place greater emphasis on functionally evaluating work capacity.

Under the current legislative design, SSA provides vocational rehabilitation services too late in the process. In addition, neither DI nor SSI applicants are eligible for medical rehabilitation benefits under Medicare or Medicaid, respectively, until they are approved for cash benefits through the lengthy eligibility determination process. And, in the DI program, the provision of medical rehabilitation is further delayed because Medicare eligibility does not begin until 24 months after applicants are approved to receive cash benefits.

Finally, cash and medical benefits need to encourage beneficiaries to return to work. The current design of cash and medical benefits in the DI and SSI programs often presents more hindrances than incentives when beneficiaries consider returning to work. The structure of cash benefits can make it financially advantageous to remain on the disability rolls, and studies report that DI and SSI beneficiaries fear losing their premium-free Medicare or Medicaid benefits if they return to work.

The experiences of the social insurance programs in Germany and Sweden show that the utility of return-to-work strategies is not confined to the private sector. Although SSA faces constraints in applying these strategies, we believe steps should be taken earlier to better identify and provide appropriate return-to-work assistance to those who could return to work. Even relatively small gains in return-to-work successes offer the potential for significant savings in program outlays.

Our recent report, SSA Disability: Program Redesign Necessary to Encourage Return to Work, recommended that the Commissioner of SSA place greater priority on return to work, including designing a more effective means to identify and expand beneficiaries' work capacities and better implementing existing return-to-work mechanisms. In line with placing greater emphasis on return to work, we recommend that the Commissioner develop a comprehensive return-to-work strategy that integrates, as appropriate, earlier intervention, earlier identification and provision of necessary return-to-work assistance for applicants and beneficiaries, and changes in the structure of cash and medical benefits. The Commissioner should also identify legislative changes needed to implement such a program.

In commenting on a draft of this report, SSA agreed much can be learned from the return-to-work practices of the U.S. private sector and disability programs in Germany and Sweden. SSA stated that it is already placing a high priority on return to work and cited a number of actions it has taken to implement its return-to-work initiative, such as expanding the pool of vocational rehabilitation service providers. Although these actions are steps in the right direction, we believe they do not constitute the fundamental redirection of goals and practices necessary to move the DI and SSI programs to a much greater emphasis on return to work. For example, increasing the number of vocational rehabilitation providers does not address the concern of earlier intervention. Fundamental redesign is needed because SSA's disability programs are designed to be cash benefits programs, not return-to-work programs.
Consistent with our recommendation that SSA should identify legislative changes needed to implement a return-to-work program, SSA noted that the law does not provide for, or even allow, many of the return-to-work strategies discussed in our report. Within this context, however, SSA affirmed that it is interested in determining whether the return-to-work practices of other systems could be useful in SSA's attempts to improve the return-to-work rate of its disability beneficiaries. SSA emphasized that, for such efforts to be fruitful, all players in the complex network of federal disability policy development and program execution would need to be involved, including several federal departments and agencies, state disability and rehabilitation programs, private sector providers, insurance representatives, and employer/union groups, as well as the numerous congressional committees that have roles in the development of legislation or in budget approval for the kinds of solutions described in our report.

We agree that it is important for all relevant parties to be involved in policy development and program execution. However, as the primary manager of multibillion-dollar programs and as the entity with fiduciary responsibility for the trust funds, SSA must take the lead in forging the partnerships and cooperation that will be needed in redesigning the federal disability programs. SSA also made a number of technical comments, which we incorporated where appropriate. Appendix V contains the full text of SSA's comments and our evaluation.
IHS, an agency within the Department of Health and Human Services, is responsible for providing federal health services to an estimated 1.5 million American Indians and Alaska Natives. In fiscal year 1998, IHS received appropriations of about $1.8 billion to provide these services, with about $291 million of this amount for Alaska. To provide care to Alaska’s estimated 104,305 Natives, most of whom live in small and isolated villages, a three-tiered health care delivery system of local clinics, regional hospitals, and a comprehensive medical center was developed. (See table 1.) IHS’ mission is to provide a comprehensive health services system, while at the same time providing opportunity for maximum tribal involvement in developing and managing programs to meet their needs. The Indian Self-Determination Act gives Alaska Native communities, as well as Indian tribes throughout the United States, the option of replacing IHS as the manager and provider of health care services. To cover the costs of operating such systems on their own, the act authorizes IHS to contract with any of the recognized Alaska Native communities or other tribal organizations, such as regional or village corporations. In Alaska, IHS has established an order of precedence for recognizing various Native entities for purposes of self-determination contracting. In this order of precedence, an individual Native community has priority over an RHO in obtaining contract awards from IHS. If a contract is awarded to an organization that performs services benefiting more than one community, the approval of each community’s governing body (a resolution of support) is a prerequisite. Alaska Native communities that contract directly with IHS manage a relatively small share of health care services in Alaska. Thirty-four of Alaska’s 227 Native communities (15 percent)—which represents about 10 percent of the total Alaska Native population—have obtained funding in direct contracts from IHS to provide some of the health services they receive. (See table 2.) These 34 communities comprise two main groups—25 communities that decided at some point to separate from their RHO to obtain certain services, and 9 communities, mostly in the Cook Inlet area near Anchorage, that generally have not participated in an RHO. Because some communities have banded together for contracting purposes, the 34 communities are involved in a total of 21 contracts, which account for 6.5 percent of IHS’ total contract funding in Alaska under the Indian Self-Determination Act. Of those entities contracting with IHS, the 13 RHOs have the greatest capacity to deliver comprehensive inpatient and outpatient services. The RHOs vary considerably in size. The largest serves more than 20,000 Natives and has a budget of nearly $40 million; the four smallest serve fewer than 2,000 Natives each and have budgets of $2 million to $4 million. (See app. I for details on the 13 RHOs.) Six of the RHOs operate regional hospitals, and all 13 provide community health services to some outlying communities in their areas. Community health services usually include training and placement of community health aides, long-distance physician supervision for the village-based community health aides, itinerant physician and dental coverage, mental health and alcohol abuse programs, and a wide range of other health and social services. 
Historically, IHS has contracted with RHOs in Alaska because the RHOs were well established when the Indian Self-Determination Act became law in 1975 and because they were able to obtain resolutions of support from the Native communities they represented. However, a Native community has the option of withdrawing its resolution from an RHO and contracting directly with IHS to manage all or part of the health services that previously were provided by the RHO. Communities have pursued this option for a variety of reasons, including the belief that local control will improve the delivery of health services and help them attain self-determination goals. Under the Self-Determination Act, IHS' authority to decline such community contract proposals is very limited.

Twenty-five communities have decided to stop obtaining some services through RHOs and to contract directly with IHS. Because some contracts cover more than one community, these 25 communities are represented by a total of 12 contractors. These contracts are generally for a limited number of services—most often alcohol and mental health services, community health aides, community health representatives, and other community-based services. Ten of the contracts, for example, involve management of village community health aide clinics, often in conjunction with alcohol education, prevention, and counseling activities. The Native populations served by the 12 contracts range in size from fewer than 30 people to nearly 2,000, and contract awards range from about $100,000 to more than $3 million. (See app. II.) Although these communities, through direct contracting, manage some of their own health services, they most often remain part of the RHO network for other services, such as community health aide supervision and training, physician and dentist services, inpatient care, and management of referrals for specialty services obtained from private providers (known as contract health care).

One contractor that separated from an RHO—Ketchikan Indian Corporation (KIC)—has assumed the management of a much broader scope of services. KIC is the largest Native community contractor, serving a Native population of nearly 2,000 and receiving nearly $3.4 million in fiscal year 1998 funding—one quarter of the 6.5 percent share of Alaska self-determination contract funding received by community contractors. KIC manages a comprehensive primary care health center with a permanent staff of physicians, dentists, and nurses and a wide range of ancillary services, such as laboratory, X-ray, and pharmacy. KIC officials told us that the community decided to manage the health center itself because it was dissatisfied that the RHO did not provide information that it had agreed to provide, such as quarterly financial statements; did not attend KIC tribal council meetings; and had planned to replace the existing health center with a new one in the neighboring village of Saxman rather than on KIC property in Ketchikan. Nonetheless, Ketchikan continues to participate in the RHO and use the RHO's hospital in Sitka for some inpatient care.

Nine of the communities that contract directly with IHS present a somewhat different picture from the 25 communities that separated from an RHO in that they did not previously obtain the contracted services from an RHO. Most of these communities are located in the Cook Inlet (Anchorage) area, where they have access to the extensive resources of the Alaska Native Medical Center.
Eight of these nine contractors serve one small Native community each, with populations ranging from 11 to 392. (See app. III.) The ninth contractor, Kenaitze, is exceptionally large, serving a resident population of more than 1,400 Alaska Natives on the Kenai Peninsula south of Anchorage. Kenaitze has administered a health services contract since 1983; its current contract—which is over $1.1 million—provides for a midlevel practitioner clinic with a dentist, a community health representative, and alcohol and mental health services. In addition to the Kenaitze clinic, two other contractors manage clinics with midlevel practitioners, and two manage community health aide clinics with some additional services. Two of the contracts, which were initiated in 1997, are especially limited: Chickaloon Village, which serves 11 Natives with $46,327 in fiscal year 1998 contract funding, and Knik Tribal Council, which serves 39 Natives with $53,079 in fiscal year 1998 contract funding. The Chickaloon and Knik contracts illustrate the extent to which IHS is bound to support village self-determination decisions. When IHS identified funding to open a new midlevel clinic in the Matanuska-Susitna Valley northeast of Anchorage, three Native organizations in that area submitted proposals to manage the clinic: Southcentral Foundation (an RHO), Chickaloon, and Knik. IHS approved Southcentral’s proposal to manage the clinic; in addition, IHS—under rules requiring IHS to approve any severable portion of a self-determination proposal—negotiated with Chickaloon and Knik regarding what services they could provide with their limited per-capita-based shares of the clinic funding. IHS and the villages agreed on transportation for village residents who need services in Anchorage, plus management of contract health care for Knik. Administrative costs are higher under individual community contracts than under contracts with RHOs. Under either contracting arrangement, the Native organization receives the same amount of funding for direct program costs, but IHS has determined that individual communities need more funding for administrative expenses—both to start up the contract and to administer it on an ongoing basis. The higher administrative costs generally reflect lost economies of scale that result from the smaller scope of most individual contracts. Under the Indian Self-Determination Act, an Indian tribe or Alaska Native community that chooses to contract with IHS is entitled to funding for both direct program costs and contract support costs (CSC) to cover administrative functions. In Alaska, these provisions apply both to contracts between IHS and RHOs and to contracts between IHS and individual Native communities. Direct program funding is the amount that IHS would have spent to operate the programs that were transferred to the contractors. CSC funding generally is an additional amount, not normally spent by IHS, that is needed to cover reasonable costs incurred by Native organizations to ensure compliance with the terms of the contracts and prudent management of the programs. Direct program costs are the same regardless of who manages the contracts—communities or RHOs. In contrast, CSC amounts may differ considerably. Determination of CSC needs is based on three cost categories: start-up costs, indirect costs, and direct costs. (See table 3.) The largest cost category is indirect costs, which include most ongoing overhead expenses. 
For most contracts, indirect costs account for over 80 percent of the recurring CSC funding needs. Our analysis of cost differences between RHO contracts and individual community contracts focused on the first two types of contract support costs—start-up and indirect costs. To provide a consistent comparison, we examined the fiscal year 1998 funding needs of each contractor for these costs as determined by IHS.

New and expanded contracts are eligible for start-up CSC funding. If an individual Native community decides to contract separately for services formerly obtained through an RHO, its funding needs for start-up costs represent an increased, one-time cost for the program. IHS records show that the 12 community contracts involving services formerly provided by RHOs received IHS approval for at least $452,000 in start-up CSC needs—ranging from about $22,500 to $140,000 per contract—which were generally based on program size.

On average, individual community contractors have considerably higher indirect costs than RHOs would have to manage the same programs. For fiscal year 1998, IHS determined indirect cost needs of slightly more than $3 million for the 12 individual community contracts that separated from RHOs. The IHS official responsible for negotiating these contracts told us that to estimate what the indirect costs would have been if the services provided under the 12 contracts had instead been provided through RHOs, he would use the indirect cost rates in place for the RHOs during fiscal year 1998. Using the rates he provided, we determined the indirect costs for the RHOs to be about $1.3 million—or less than half of the indirect costs for the community contractors. (See app. IV for a contract-by-contract comparison of indirect cost needs of the Native communities and RHOs.)

IHS officials said the main reason individual community contracts had higher indirect costs was that the small size of these contracts resulted in the loss of administrative economies of scale. Because RHOs have an administrative structure in place to support other contracts and services, they can spread the overhead expenses among their programs. Small communities, however, generally have to build the administrative structure for these services alone. We did not compare the indirect costs of the other nine community contracts with those of RHOs because the programs managed by these contracts were not formerly a part of an RHO. However, we found that indirect costs as a proportion of the total funding needs that IHS determined for these contracts were similar to those of the 12 community contracts that cover services formerly obtained through an RHO. This would indicate that these contracts also are likely to have higher indirect costs than RHOs.
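To make the mechanics of this comparison concrete, the sketch below applies an indirect cost rate to a direct-cost base, which is how such rates are generally used. The rates and the base shown are hypothetical illustrations, not the actual fiscal year 1998 figures; the only numbers taken from the text are the aggregate results (about $3.0 million versus about $1.3 million).

```python
# Minimal sketch of how the indirect-cost comparison works: a negotiated
# indirect cost rate applied to a direct-cost base. The rate and base
# values below are hypothetical; only the aggregate results quoted above
# (about $3.0 million vs. about $1.3 million) come from the text.

def indirect_cost(direct_cost_base: float, indirect_rate: float) -> float:
    return direct_cost_base * indirect_rate

base = 500_000                          # hypothetical direct program cost
community_rate, rho_rate = 0.35, 0.15   # hypothetical indirect cost rates

print(f"community contractor: ${indirect_cost(base, community_rate):,.0f}")
print(f"same program via RHO: ${indirect_cost(base, rho_rate):,.0f}")
```

The gap between the two results reflects the same loss of administrative economies of scale described above: a small community must carry its overhead rate on one program, while an RHO spreads overhead across many.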
To date, IHS contracting with Native communities rather than RHOs does not appear to have had a significant impact on the level of services available to Alaska Natives, although we did identify a few temporary service disruptions. The small number of these contracts; their generally restricted scope; and, in some cases, their recent implementation have likely been key factors in limiting the effects on Native communities or RHOs. However, a shortfall in available CSC funding may jeopardize the continuation of this level of service. Native communities that are not in a financial position to absorb unfunded contract support costs may face the risk of having to divert funds from health services to cover their unfunded contract support needs. We found one instance, in Fort Yukon, where this may already have occurred.

When individual Alaska Native communities have contracted directly with IHS to provide some of their own health services, they generally have assumed management responsibility for existing, defined service programs being operated by IHS or an RHO. Because these contracts essentially enable program transfers, the types of services provided do not change initially. In addition, the community contractors generally continue to employ the same staff and use the same facilities. Generally, we did not find that a community's takeover of services from an RHO in itself had a substantial effect on the types of services provided or service utilization. The service disruptions that we did find in some communities, such as in Ketchikan, and in some clinics staffed by community health aides tended to be transitory in nature.

In Ketchikan, when KIC took over the contract from the RHO in October 1997, the health center's resources, staff, and patient population were split and two separate facilities were established. KIC's health center initially had a gap in dental services because the RHO retained both dentists when staffing was split. This gap has been partly remedied, and we observed no other gaps in services at the time of our review. However, due to uncertainty surrounding the future of this contract, the staffing situation at both the KIC and RHO clinics was not stable. A review of clinics staffed by community health aides that now are managed by community contractors revealed sharp variations in some communities over past years in the numbers of patient encounters provided. However, these variations did not appear to be related to community contracting because they occurred whether a community or an RHO was managing the services. The variations most likely reflect temporary losses of staff because, in small, remote Alaska communities, it takes time and training to replace community health aides.

The 1988 and 1994 amendments to the Indian Self-Determination Act clarified that CSC funding should be made available to provide Indian tribes and Alaska Native communities with additional resources to develop the capability and expertise to manage services on their own. The Senate report accompanying the 1994 amendments expressed concern that without this additional support, Indian tribes would be compelled to divert funds from health services to contract support costs. IHS has established two separate pools of CSC funding—one for the recurring CSC needs of ongoing contracts and the other for additional CSC needs of new or expanded contracts. IHS-wide, CSC funding for ongoing contracts has increased from about $100.6 million in fiscal year 1993 to $168.7 million in fiscal year 1998, and since 1994, the Congress has appropriated $7.5 million per year specifically for the CSC needs of new or expanded contracts. However, the demand for CSC funding has greatly exceeded these appropriations. As a result, while IHS has agreed with each contractor on the amount of its CSC funding needs, it has not been able to fully fund those needs. The contractors have the option of delaying or going ahead without full CSC funding, and most of them have chosen to begin implementing their contracts without full funding. Since 1995, IHS has reported a shortfall in CSC funding each year, largely because of the rapid increase in tribal assumption of IHS programs nationwide.
For fiscal year 1997, the shortfall totaled $82 million nationwide, over $12 million of it in Alaska. As a mechanism for allocating available CSC funds among contractors, IHS maintains a waiting list for new contractors that have chosen to operate without full CSC funding. Available funding is allocated on a first-come, first-served basis, and a new contractor's wait for full CSC funding may last several years. For example, contractors that entered into contracts in 1994 are now at the top of the waiting list and expect to be funded in fiscal year 1998, a 3- to 4-year wait.

IHS reports that a continued lack of sufficient CSC funds could, by necessity, result in tribes funding administrative functions with moneys that otherwise would have been used to provide direct health care services. This condition could occur if tribes are unable to realize efficiency gains or do not have other resources to help offset their CSC funding shortfalls. This risk is present in Alaska. Fourteen of the 21 direct community contractors were operating with CSC shortfalls in fiscal year 1998, and 7 of these shortfalls represented between 30 and 74 percent of the contracts' total recurring CSC funding needs. (See app. V for details on the CSC shortfalls by contractor.) Shortfalls of this magnitude could make it difficult for tribes to continue to maintain the same level of health services. The risk is less for RHOs, which also may have CSC shortfalls but generally are in a better financial position than community contractors to manage these shortfalls because they manage large multimillion-dollar operations that can benefit from economies of scale and have multiple sources of revenue that can generate positive cash flow.

The varying effects of substantial CSC shortfalls on communities that contract directly with IHS can be seen in Ketchikan and Fort Yukon—which are served by the two largest direct community contractors. In Ketchikan, the large CSC shortfall of over $500,000 a year has not had a negative impact on overall services to the communities involved because both the community contractor, KIC, and the RHO, Southeast Alaska Regional Health Consortium (SEARHC), were able—at least temporarily—to provide additional resources to make up for the funding gap. Prior to October 1997, SEARHC was managing the Ketchikan Indian health center to serve six Native communities—Ketchikan, Saxman, and four outlying communities on Prince of Wales Island. When the health center contract was split, KIC received 58 percent of the funding to serve Ketchikan Natives, and SEARHC retained the remainder to serve Saxman and the other communities.

Loss of economies of scale occurred in two ways. First, additional clinic space was leased to operate two separate clinics. Second, additional staff were needed to deliver the same level of services in two facilities. For example, the total number of clinical and administrative staff for the clinic before the split was 59.5 full-time equivalents (FTE). After the split, the two clinics had a combined total of 68 FTEs. Most of the increase was for duplicated administrative functions, such as the need to have two clinic directors, two business office directors, and two computer programmers. Both SEARHC and KIC had the additional resources to initially absorb the additional costs.
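The staffing figures above show the lost economies of scale directly; the sketch below uses only the FTE counts reported for the Ketchikan split.

```python
# Arithmetic behind the lost economies of scale in the Ketchikan split,
# using only the FTE counts reported above.
fte_before = 59.5   # combined clinical and administrative FTEs, one clinic
fte_after = 68.0    # combined FTEs across the two successor clinics
added = fte_after - fte_before

print(f"{added:.1f} additional FTEs ({added / fte_before:.0%} more staff) "
      "to deliver the same level of services in two facilities")
```

That is, roughly 14 percent more staff were needed to deliver the same services after the split, most of it in duplicated administrative positions.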
At the end of fiscal year 1996, its annual budget was over $50 million and it had over $23 million in net assets. Although the Ketchikan clinic had 2 years remaining on its lease, SEARHC decided to lease a new facility nearby for its own clinic to serve Saxman and the outlying communities, asserting that it was not practical to share the original building with KIC. SEARHC spent almost an additional $1 million of its own resources on this new clinic. With the new clinic and additional staff, clinic waiting times for the Saxman Native community were reduced. KIC assumed management of the original clinic with a contract award of nearly $3.4 million and a CSC shortfall of over $500,000. Although it is too soon to determine the long-term impact of this shortfall, KIC has been able to use its tribal government resources—especially management staff from other programs—to reduce its need for additional administrative staff. A large tribe by Alaska standards, Ketchikan has a well-established tribal government with a staff of more than 70 that administers BIA and other federal and state-funded programs totaling at least $2.5 million in addition to the IHS contract. CSC shortfalls have created significant difficulties for the Council of Athabascan Tribal Governments (CATG) in managing the small Fort Yukon clinic and community health aide services in the Yukon Flats area northeast of Fairbanks. CATG, which is a consortium of eight small Native communities, has been operating its $1.8 million contract with an annual CSC shortfall of about $500,000. This shortfall represents almost 53 percent of CATG’s total recurring CSC funding needs. According to its most recent audit report, CATG did not have any additional resources to compensate for a shortfall of this size. The official responsible for CATG operations told us that because CATG did not have resources to cover the CSC funding gap, it had no option but to use some program funds to support administrative functions. There were some indications that CATG’s financial strain may have contributed to other operational problems. In 1997, for example, there was considerable turnover in the Fort Yukon clinic’s physician assistant staff, resulting in vacancies that were not immediately filled. Although the number of outpatient visits at the clinic did not decline substantially, the Native Village of Fort Yukon was so dissatisfied with CATG’s failure to fill the clinic vacancies and with other matters that the village considered asking IHS or the RHO to resume management of the clinic or contracting directly with IHS. In the end, however, no action was taken; and as of April 1998, the Native Village of Fort Yukon remained a member of CATG and was receiving health services through its contract. Through the Indian Self-Determination Act, the Congress has clearly expressed support for Alaska Native communities to exercise their preferences for managing health care resources, such as through an RHO or on their own. Many Native communities view the option to contract directly with IHS as fundamental to their ability to achieve self-determination and self-governance objectives, and about 15 percent of Native communities in Alaska have chosen to do so. However, funds have been available to only partially support the additional administrative costs created by lost economies of scale when Native communities contract directly with IHS.
These funding shortfalls appear not to have greatly affected the availability of health services in Alaska at this time, but maintaining the availability of services in the future could pose challenges to some Native community contractors. To the extent that Native communities assume management of a greater portion of their health services in a time of increasing CSC funding shortfalls, the risk for adverse impacts on health services delivery also increases. We provided a draft of this report to IHS officials, who concurred with the report’s findings. In addition, they provided some technical comments, which we incorporated as appropriate. Appendix VI contains the full text of IHS’ comments. We are sending copies of this report to the Secretary of Health and Human Services, the Director of Indian Health Service, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. The information contained in this report was developed by Frank Pasquier, Assistant Director; Sophia Ku; and Ellen M. Smith. Please contact me at (202) 512-6543 or Frank Pasquier at (206) 287-4861 if you or your staff have any questions. This appendix presents data to describe the 13 Alaska Native RHOs in terms of the amount of their fiscal year 1998 contract awards, numbers of Alaska Natives and Native communities served in 1998, and types of facilities operated. Six of the RHOs operate regional hospitals, and all 13 use the Alaska Native Medical Center in Anchorage for treatment of serious illnesses and injuries. Outpatient medical care is provided at three types of facilities: (1) health centers staffed with physicians and dentists; (2) midlevel clinics staffed with physician assistants or nurse practitioners; and (3) village-based clinics that rely on community health aides—who usually are village residents with special training—to provide first aid in emergencies, primary care, and preventive health services under telephone supervision by physicians. This appendix describes the 12 community contractors that separated from an RHO, listing the facilities operated and some of the services provided under each contract. Some of the services are distinctive to Alaska, and they may vary from one contractor to another, but they generally can be considered as follows: Community health aides usually are village residents trained to give first aid in emergencies, examine the ill, report symptoms by telephone to a supervising physician, and carry out recommended treatments, including dispensing prescription drugs. They also provide preventive health services, such as fluoride treatments, and health education. Community health representatives differ from community health aides by focusing more on social and support services than on health care, although there may be overlap in some areas. Community health representatives may provide general health care, including home health care visits to the elderly and new mothers, along with health education and outreach. Midlevel clinics most often are staffed by nurse practitioners and physician assistants. Contract health care programs purchase services for Alaska Natives from private providers when the services are not available from IHS or tribally operated programs.
Alcohol, substance abuse, and mental health programs at the village level often are provided by local residents trained as behavioral counselors, supported by regional professionals. Many program elements are intended to prevent alcoholism, especially in youth, including Alcoholics Anonymous meetings, activities to promote sobriety, and home visits. Emergency medical services at the community level generally focus on safety training and injury prevention, such as swimming and bicycle safety and first aid and CPR (cardiopulmonary resuscitation) training. Some programs provide and monitor fire extinguishers and smoke alarms in the homes. Patient transportation programs generally help coordinate patient travel for necessary health services with local and outside health providers. This appendix describes the nine community contractors that did not separate services from an RHO. (See app. II for definitions of the types of services and facilities these contractors operate.) This appendix compares the recurring funding needs of the 12 community contractors that separated from RHOs with the funding needs of the RHOs for managing the same programs. The total funding needs include direct program costs and direct and indirect contract support costs. A comparison of indirect cost needs is also provided since this is the major cost category that can vary depending on who manages the contract. The indirect cost need for each affiliated RHO is estimated by applying the RHO’s indirect cost rates to the community contractor’s program costs; it represents what the indirect costs would have been if the services provided by the community contractor had instead been managed by the RHO. This appendix details the amount and the magnitude of CSC shortfalls for each of the 21 community contractors. The amount of CSC shortfall is computed by subtracting each contract’s CSC funding from its recurring CSC needs. The magnitude of each contractor’s CSC shortfall is shown by the percent of its recurring CSC needs that is represented by the shortfall. Pursuant to a legislative requirement, GAO reviewed the impact of individual Indian Health Service (IHS) contracts, focusing on: (1) the extent to which Alaska Native communities contract directly with IHS to manage their own health care services; and (2) the effects these contracts are having on costs and the availability of services.
GAO noted that: (1) relatively few Alaska Native communities have contracted directly with IHS, and those that have done so generally contracted for a limited range of health services and thus continue to receive many services through a regional health organization (RHO); (2) fifteen percent of the 227 Alaska Native communities have some form of direct contract with IHS; (3) the dollar amount of these direct contracts represents about 6.5 percent of all IHS contracts in Alaska under the Indian Self-Determination Act; (4) GAO found that communities with their own contracts have higher administrative costs than RHOs; (5) IHS works with each contractor to determine the amount of administrative costs needed to manage the contracts; (6) indirect costs--the major component of the administrative costs--include such expenses as financial and personnel management, utilities and housekeeping, and insurance and legal services; (7) community contracts need about twice the amount of indirect costs that an RHO would need to manage the same programs; (8) when a community chooses to contract directly with IHS for services previously provided by an RHO, it also has a need for one-time start-up costs that increase the administrative cost differences between community contracts and RHOs; (9) determining the effects of individual community contracts on service availability proved difficult because contracts involving a switch from RHOs to local communities are relatively few in number, cover few services, and some have been in effect for a short time; (10) the limited comparisons that can be made show that service levels have not been greatly affected by the switches thus far; (11) however, under current IHS funding limitations, new contractors are receiving only part of their funding needs for administrative costs and may have to wait several years to receive full funding; (12) if communities decide to contract for service programs but do not receive full funding for administrative costs and do not have other resources from which to pay for these costs, they face the risk of having to divert funds from services to cover their unfunded administrative costs; (13) while funding shortfalls have not yet resulted in widespread adverse effects on health services availability in Alaska, the long-term picture raises cause for concern; and (14) in choosing to operate their health services without waiting for sufficient administrative funding, Alaska Native communities may have little option but to accept a potential for reduced services as a trade-off for managing elements of their health care systems.
IRS’ 10,000 customer service representatives are located at 25 call sites around the country. In 1999, IRS began operating this network as a single call center providing round-the-clock service. Managing the network in this way enabled IRS to route calls from three separate toll-free lines—one each for questions about tax law, account services, and refund status—to the sites with the shortest hold times among those customer service representatives assigned to answer questions concerning those issues. (Fig. 1 illustrates call routing within IRS’ toll-free network.) Before IRS began operating the network as a single call center, taxpayer calls were routed by area codes or by the percentage of staff the site had scheduled to work. Calls routed in this manner could not be easily rerouted when a site was experiencing frequent busy signals or lengthy hold times. Although individual call site operating hours and call handling responsibilities varied, IRS expanded its overall toll-free network coverage in January 1999—from 16 hours a day, 6 days a week, to 24 hours a day, 7 days a week. IRS’ call center network is controlled by the Operations Center. In general, the Operations Center is responsible for forecasting call demand—the numbers, types, and timing of calls IRS is expected to receive throughout the planning year on each of its three toll-free lines (tax law, accounts, and refunds); planning the routing of calls among call sites, based on each call site’s assigned toll-free line and subject coverage responsibilities; developing staffing requirements for each call site and monitoring site adherence to those requirements; and monitoring network call traffic status and, when necessary, rerouting calls among the sites to optimize service. The Operations Center develops call site staffing requirements weekly, with call site input and agreement. These requirements prescribe the numbers of trained customer service representatives that are to be available and ready each half-hour to take calls on each assigned subject category and toll-free line. The call sites, in turn, are expected to adhere to the staffing requirements prescribed by the Operations Center. They are generally responsible for recruiting, training, and assigning customer service representatives in sufficient numbers and skills to enable them to meet prescribed staffing requirements. Collectively, IRS call centers employed nearly 10,000 customer service representatives in October 2000. The top picture in figure 2 shows Operations Center officials monitoring network operations, while the picture on the right shows a representative handling a call at IRS’ call center in Atlanta. To address our objectives, we interviewed IRS officials involved in managing toll-free telephone operations, obtained supporting documentation, and reviewed related reports by the Treasury Inspector General for Tax Administration (TIGTA). Although we did not independently verify IRS officials’ responses to our questions, we reviewed them and related documentation for consistency. IRS’ use of other resources will be discussed in a forthcoming report on toll-free performance during the 2000 filing season. We used our human capital self-assessment checklist to obtain an understanding of human capital management, its importance in achieving federal agency operational goals, and the framework that we developed to assist agency leaders in evaluating their human capital management practices. 
Because people are a key resource for carrying out agencies’ missions, we also reviewed the Government Performance and Results Act’s requirements for agency strategic planning, goal-setting, and performance measurement. To identify human capital management practices used by other organizations in telephone customer service, we obtained information from several sources, including our August 2000 report on human capital management practices of public and private organizations; the 1995 National Performance Review report on best practices in telephone service; and literature on call center management, including Incoming Calls Management Institute information and reports. We did our work at IRS’ National Office in Washington, D.C.; the Office of the Chief Customer Service Field Operations in Atlanta; the Customer Service Operations Center in Atlanta; and six of IRS’ 25 call sites. As agreed with your office, we judgmentally selected the six sites to ensure geographic coverage and other characteristics and, therefore, cannot project our results to all 25 call sites. Because IRS began providing 24-hour coverage in 1999, we included the two call sites that operated 24 hours a day, 7 days a week and four sites operating fewer than 24 hours a day. Because some call sites were colocated with IRS service centers that had large labor pools from which the sites might recruit staff, the six sites included three that were colocated with service centers and three that were not. To understand human capital management practices within the context of IRS’ new organizational and operational structure, our sample includes three sites that were designated to serve taxpayers with incomes from wages and investments and three sites that were designated to serve small business and self-employed taxpayers. Since differences in site staffing levels could lead to differences in their human capital management practices, we selected two sites each from the low, middle, and high ranges of staffing levels among the 25 call sites—less than 200 staff, between 200 and 400, and more than 400, respectively. The characteristics of the six sites are shown in table 1. We performed our work between May 1999 and October 2000 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from the Commissioner of Internal Revenue. The comments are discussed near the end of this report and are reprinted in appendix II. IRS faces an annual challenge in determining the staffing level for its toll-free telephone customer service operations. IRS has not established a long-term, desired level-of-telephone-service goal based on the needs of taxpayers and the costs and benefits of meeting them, and then determined what staffing level is needed to achieve that service level. Rather, IRS annually determines the level of funding it will seek for its customer service workforce, based on its judgment of how to best balance its efforts to assist taxpayers and to ensure their compliance with tax laws, and then calculates the expected level of service that funding level will provide. IRS’ approach to setting this goal is inconsistent with federal guidance on strategic planning, which calls for agencies to develop strategic goals covering at least a 5-year period and to determine the staffing and other resources needed to achieve the goals. IRS’ approach is also inconsistent with industry practices, which base their goals and staffing on customer needs.
Without a long-term level-of-service goal, as well as annual goals aimed at achieving the long-term goal over time, IRS lacks meaningful targets for strategically planning and managing call center performance and measuring improvement. In commenting on a draft of this report, the Commissioner stated that IRS planned to set strategic goals and staff to meet those goals. In the absence of a long-term goal, and multiyear plans for reaching it, IRS has estimated the service it could provide based on different staffing levels. For example, when formulating its fiscal year 2000 budget, IRS estimated that it would receive over 100 million calls on its three toll-free lines throughout the fiscal year and that its customer service representatives could handle an average of 5.6 calls per hour that they were available to take calls. These workload and productivity assumptions were the basis for calculating the expected levels of service IRS could provide with different staffing levels. Specifically, with customer service representative levels ranging from 8,291 to 10,800 full-time-equivalent staff, IRS estimated that it could achieve levels of service ranging from 58 to 80 percent, respectively. Because of the need to balance service and compliance activities within overall staffing budget limitations, IRS decided to request funding at the lower level, establishing a 58-percent level-of-service goal for fiscal year 2000 and a 60-percent level for fiscal year 2001. A long-term, results-oriented goal is important because it provides a meaningful sense of direction as well as a yardstick for measuring the results of operations and evaluating the extent of improvements resulting from changes in resources, new technology, or management of human capital. The Government Performance and Results Act of 1993 required executive branch agencies to develop multiyear strategic plans covering at least a 5-year period; set long-term, output- or results-oriented goals in these strategic plans; describe the human and other resources needed to achieve goals; update these plans at least every 3 years; prepare annual performance plans with annual performance goals; and measure and report annually on their progress toward meeting those goals. Under the act, strategic plans are the starting point for agencies to set annual performance goals aimed at achieving their strategic goals over time. As part of the strategic planning process, agencies are required to consult with Congress and to solicit the views of other stakeholders who might be affected by the agencies’ activities. Unlike IRS, officials at all seven public and private call center operations we visited as part of our August 2000 report said that they determined staffing requirements based on their customers’ needs and clearly articulated service-level goals—that is, the percentage of calls to be answered within a given time frame. For example, the Social Security Administration (SSA)—an agency that is also subject to federal budget constraints—had a goal of 95 percent of its callers getting through on its toll-free line within 5 minutes of their first attempt. This goal was established with input and support from Congress and top SSA leadership as part of a governmentwide effort to improve customer service.
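The budget formulation arithmetic described above can be approximated with a first-order sketch, written here in Python. The figure for call-handling hours per full-time-equivalent staff year is our assumption, not an IRS number (the report does not state it), and IRS’ actual model also reflects productivity and queuing factors that this linear sketch omits; 1,250 hours is a hypothetical value chosen so that the output roughly reproduces the 58-percent estimate.

FORECASTED_CALLS = 100_000_000  # calls expected on the three toll-free lines
CALLS_PER_HOUR = 5.6            # average calls handled per available staff hour
HOURS_PER_FTE = 1_250           # assumed call-handling hours per FTE-year (hypothetical)

def expected_level_of_service(ftes):
    # First-order estimate: the share of forecasted calls the staff could answer.
    capacity = ftes * HOURS_PER_FTE * CALLS_PER_HOUR
    return min(capacity / FORECASTED_CALLS, 1.0)

for staffing in (8_291, 10_800):
    print(f"{staffing:,} FTEs -> about {expected_level_of_service(staffing):.0%}")

Under these assumptions, 8,291 full-time equivalents yields roughly 58 percent; reproducing the 80-percent estimate for 10,800 full-time equivalents would require additional productivity assumptions beyond this sketch.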
According to an SSA associate deputy commissioner, the focus on improving telephone customer service followed a period of very poor service in the early and mid-1990s, when as many as 49 percent of callers got busy signals when they called the toll-free number. The associate deputy commissioner said that congressional stakeholders continue to monitor SSA’s toll-free telephone operations, resulting in continued support by SSA management to allocate the resources needed to meet established goals. Other studies have also documented the importance of setting service-level goals based on customers’ needs. One guide to call center management for practitioners that we reviewed underscored the importance of service-level goals. It described service level as “the core value” at the heart of effective call center management, without which answers to many important questions, including “How many staff do you need?”, would be left to chance. It said service-level goals should be realistic, understood by everyone in the organization, taken seriously, and funded adequately. While the guide also recommended benchmarking, formally or informally, with competitors or similar organizations, it stated that each organization should determine an appropriate service level for its call centers, considering its unique circumstances. These considerations should include the labor and telephone equipment costs of answering the call, the value of the call to the organization, and how long callers are willing to hold for service. IRS recognizes the need to establish long-term goals and is considering adopting some of the measures used by other organizations and establishing goals for those measures. For fiscal year 2001, for example, IRS plans to measure the percentage of callers who reach IRS within 30 seconds. While IRS has not established a long-term goal for this measure, it has set an interim goal of 49 percent for fiscal year 2001. In commenting on a draft of this report, the Commissioner stated that IRS had instituted an agencywide strategic planning process in March 2000 that links the budget and available resources to its strategies and improvement projects. According to the Commissioner, IRS’ fiscal year 2002 Strategic Plan and Budget will include a 74 percent level-of-service goal, with a goal of reaching 85 to 90 percent by fiscal year 2003. Also, IRS had an initiative under way to improve workload planning to ensure that customer needs are considered during the planning and budgeting process. The six call sites we visited faced challenges in successfully recruiting, training, retaining, and scheduling customer service representatives. According to site officials, these challenges included difficulties recruiting representatives due to job characteristics, training representatives and keeping them proficient, retaining skilled representatives, and scheduling representatives to meet forecasted staffing requirements. Officials at five sites said they experienced some degree of difficulty in recruiting representatives because of job characteristics such as the seasonal nature of the positions, undesirable work hours, or the stressfulness of the work. Nevertheless, five of the sites were able to fill their vacant positions. One site was unable to fill its needs and had concerns about the suitability of the persons hired.
According to officials at this latter site, due to the limited time between the date they were provided the number of positions to fill and the time that the new employees had to report for work, the officials did not have sufficient time to interview all applicants before hiring them. Officials at each IRS call site were responsible for hiring representatives for their location, including deciding what recruiting methods and applicant screening tools to use. All six sites used some combination of conventional recruiting methods, such as newspaper advertisements and college campus recruiting. To determine the suitability of applicants, beyond the basic qualifications for the position, officials at four sites interviewed applicants before hiring them, and most used interview techniques to determine how applicants might behave in typical work situations. Two of these four sites also administered a five-question, tax-related math test to assess a candidate’s basic math and analytical skills. In an effort to improve its recruiting for customer service representatives, IRS is in the early stages of developing a national recruiting strategy. As part of this plan, IRS is determining where it should target its recruiting efforts. IRS is identifying sites where IRS’ salary and benefits make it a competitive employer in the local job market and sites that have trouble recruiting and retaining suitable applicants. Officials believe this will help IRS determine which sites should be growth sites for hiring telephone customer service representatives. According to officials at the call sites we visited, the many obstacles that affected their ability to train customer service representatives and keep them proficient included the broad range of complex topics representatives must address, inadequate resources, the cyclical nature of taxpayer demand, reassignment of tax topics among representatives, and the lack of a formal mechanism to identify individual refresher training needs. Each year, IRS must train thousands of customer service representatives in a broad range of topics, and according to officials at the six sites we visited, they sometimes had to do so without adequate resources. Topics range from the status of refunds to more complicated issues such as capital gains or losses. In fiscal year 1999, the standard training curriculum provided by all sites generally included periods of classroom instruction, followed by periods of on-the-job training that were roughly half the length of the classroom instruction. This training was delivered incrementally over a 3-year period, between the busy filing seasons, during which IRS receives the bulk of its toll-free calls. The training program also included annual tax law/procedural update training. However, after customer service representatives received their initial training, they generally did not receive subsequent refresher training despite the cyclical nature of the work. Officials also cited a shortage of instructors, limited training time, and outdated training materials as other factors that affected their ability to effectively train customer service representatives. For example, officials at the one site that did not hire the number of representatives authorized said they did not have enough instructors to provide the necessary training. Officials at three sites said that they did not have sufficient time to fully train representatives before their peak season because they did not receive timely notice of when, and how many, they could hire.
Officials at four sites also said that training materials provided by the National Office were frequently outdated. Keeping customer service representatives proficient was also a challenge for the sites due to the cyclical nature of taxpayer demand and changes to the topics representatives were expected to know. The frequency of the calls and the topics covered varied throughout the year. The bulk of the calls are generally received during the busy filing season. For example, more than 57.6 million of the 79.6 million toll-free calls made to IRS in fiscal year 2000, or 72 percent, were made from January through June. In addition, calls received from January through April predominantly involved tax law topics, while calls received after April mainly involved account- and refund-related topics. Consequently, customer service representatives could go long periods, such as months between filing seasons or even years since topic training was completed, without receiving calls to reinforce their experience on some of the topics for which they were trained. Moreover, this situation was compounded when IRS implemented centralized call routing in 1999. In conjunction with this change, IRS consolidated the number of subject categories, which ranged from 40 to 125 depending on the site, and reassigned representatives to a broader group of 31 categories. This was done without ensuring that they had adequate training or experience. According to a site official, inadequate training is one factor reducing the accuracy of IRS responses to tax law and account calls. From 1998 to 1999, for example, network accuracy for account calls decreased from 87.9 to 81.7 percent, according to IRS’ weekly customer service snapshot report dated September 30, 1999. Officials at the sites we visited also said that the lack of a formal mechanism to identify which representatives needed refresher training hindered their ability to keep their representatives proficient. Officials have records of specific training each representative has received, but they do not have a method for assessing individual competency gaps—i.e., the gap between the knowledge and skills needed to respond to calls and a representative’s current proficiency—to quantify each representative’s refresher training needs. Although IRS had developed such a system and began using it in December 1998, a customer service training official said testing was not done consistently among the call sites, and refresher training was not provided to meet identified needs. The official also said a lack of funding and uncertainty of future organizational developments led IRS to discontinue the system in 1999. Because IRS does not have a system for assessing competency gaps to identify the specific refresher training needs of individual representatives, call sites waste scarce training resources trying to improve the performance of customer service representatives. For example, officials said they sometimes send groups of representatives to refresher training, knowing that some representatives will probably receive training they do not need. This happens because the course covers several subjects and each representative probably needs some of the training but most representatives probably do not need all of the training. Providing unnecessary training wastes resources that would otherwise be available for representatives who need additional training. “Fundamentally, we are attempting the impossible.
We are expecting employees and our managers to be trained in areas that are far too broad to ever succeed, and our manuals and training courses are, therefore, unmanageable in scope and complexity…. The next step is to rethink what we should do at each site in order to achieve greater site specialization.” Because of the problems involved in attempting to provide the full range of training to all customer service representatives, in fiscal year 2000, IRS began refocusing its program to provide just-in-time training, targeted more to the specific types of questions taxpayers call about at different times throughout the year. In addition, as part of restructuring, IRS intends to further specialize training to serve specific taxpayer groups—those who receive income from wages and investments and those who receive income from small businesses or self-employment. IRS’ training-related plans do not, however, address the need for identifying competency gaps to determine refresher training needs and target training accordingly. A National Office official informed us that IRS was working with the Office of Personnel Management to “develop competency models, document career paths, and develop assessment instruments for use in training, development, selection, etc., for all of the occupations within the IRS.” Due to the broad scope of this endeavor, however, the official could not say when IRS could expect to establish and implement a mechanism for assessing the refresher training needs of customer service representatives and ensuring that the training is provided. Despite its substantial investment in recruiting and training its network of 10,000 customer service representatives, and concern by National Office and some site officials that attrition was higher than it should be, IRS was not actively monitoring attrition and determining what steps, if any, were needed to address it. Officials do not track how many representatives leave, why they leave, or where they go—data that would be key to a strategy for decreasing attrition. A recent study of experiences at 186 call centers indicates that attrition is a major problem for the industry that is expected to worsen. Some of the organizations we contacted as part of our August 2000 report, however, were not as concerned about their attrition. They said most of their attrition was to other jobs within their organization and thus benefited the overall organization. None of the six sites we visited could provide attrition statistics for customer service representatives for 1998 or 1999. Officials at four sites provided estimates ranging from 13 to 19 percent per year; however, these estimates were just their opinions—they were not based on data collected by the site or the National Office. Although IRS did not monitor attrition, National Office officials and officials at three sites said that attrition was a problem. Only one of the six sites had collected data to determine the reasons why representatives left; officials at the remaining five sites and the National Office had opinions about why representatives left. Examples included the stressful nature of the work, seasonal employment, and better opportunities elsewhere. In addition, IRS did not monitor whether the representatives who left obtained other jobs within or outside of IRS. Without such information, IRS risks spending resources recruiting, hiring, and training representatives, only to lose them to other organizations.
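The kind of attrition monitoring that IRS lacked need not be elaborate. The following sketch, written in Python, illustrates one minimal form such monitoring could take, tracking an annual attrition rate along with the reasons representatives leave and where they go; the record fields and sample data are hypothetical and are not drawn from IRS.

from collections import Counter

# Each separation record pairs a reason for leaving with a destination.
# These sample records are hypothetical.
separations = [
    ("job stress", "outside IRS"),
    ("seasonal position ended", "outside IRS"),
    ("better opportunity", "other IRS position"),
    ("better opportunity", "outside IRS"),
]
average_headcount = 40  # hypothetical average site headcount for the year

attrition_rate = len(separations) / average_headcount
print(f"annual attrition rate: {attrition_rate:.0%}")
print("reasons:", Counter(reason for reason, _ in separations))
print("destinations:", Counter(dest for _, dest in separations))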
Some of the organizations included in our August 2000 report had high attrition, but officials said that attrition from their call centers was usually to other positions within their organizations. For example, at one company, officials noted that policies to promote from within and encourage employee mobility allowed customer service representatives to move to more senior positions within the company. IRS faces challenges in effectively scheduling staff—that is, having the right number, with the right skills, at the right time, at each call site—due to inaccurate demand forecasting and a complicated staff scheduling process. During the first 6 months of fiscal year 2000, IRS data indicated that call sites were overstaffed or understaffed, relative to tolerances established by IRS, 60 percent of the time. In addition, IRS’ method for measuring call sites’ adherence to their schedules was incomplete. Recognizing its problems with forecasting and scheduling, IRS was adapting an automated system similar to those used by other organizations. Inaccurate forecasting of the expected fiscal year 2000 toll-free call volume led to inefficient scheduling and use of staff at some sites. The Operations Center estimated that IRS would receive 100 million calls in fiscal year 2000, but IRS actually received about 80 million—20 percent less than forecasted. Because individual site staffing requirements were based on IRS’ forecasts of the expected numbers, types, and timing of calls, network and individual site work plans were also overstated, resulting in the underutilization of staff at some sites. For example, according to TIGTA’s March 2000 report, for the period December 5, 1998, through March 15, 1999, overstated call demand resulted in staff being scheduled and ready to take calls, but getting no calls, an average of 10 percent of their time at six sites for which data were available. Operations Center officials stated that IRS’ increased use of new routing technologies, combined with continuous organizational and procedural changes, made accurate forecasting difficult. Moreover, they believed the information that IRS had about historical demand was of limited value in predicting future demand for two reasons. First, the historical information was not based on operating 24 hours a day; and second, it was difficult to take into account the constantly changing environment (i.e., tax law changes and increased use of electronic filing and Web-based services). However, the Directors of Customer Account Services, whose staffs have responsibility for providing telephone customer service to wage and investment and small business and self-employed taxpayers, stated that demand forecasting should improve now that IRS has 2 years of information based on operating 24 hours a day. Managers at most of the sites we visited stated that the complicated scheduling process made it difficult to ensure that the appropriate staff were scheduled to work at the right times. They were also concerned about the amount of time they spent scheduling and rescheduling staff in attempting to ensure that they had scheduled the number of staff with the skills the Operations Center prescribed for each half-hour increment of service time. IRS management had not developed a standard system for the sites to use in helping them to develop their site schedules.
As a result, each site we visited used its own system to track variables related to each customer service representative, such as the specific work schedule agreement, planned vacation and training, and skill level in answering certain types of calls. Site managers then used these variables to develop site schedules. Managers explained that the large number of variables to consider when doing so (e.g., more than 160 different work schedules at one site) complicated the scheduling process and made it difficult for them to optimize their day-to-day efforts to meet the staffing requirements prescribed by the Operations Center. IRS’ own statistics bear this out. Call centers were either understaffed or overstaffed, compared with the Operations Center’s prescribed staffing schedule, 60 percent of the time—24 percent and 36 percent, respectively—during the first 6 months of fiscal year 2000. In measuring site adherence to its prescribed staffing requirements, the Operations Center considers variances of more than 10 percent (of the total number required to be ready for each half-hour period) as overstaffing or understaffing. The Operations Center only partially measures each site’s ability to meet the prescribed staffing requirements. The current measurement system determines whether each site had, on average, the required number of customer service representatives available to answer the telephone for each half-hour period. However, the Operations Center did not measure the extent to which sites provided representatives with the required skills. IRS is working with a contractor to refine a commercially available automated system to facilitate forecasting demand, scheduling staff, and tracking adherence to the schedule. The system is expected to use historical data to more accurately forecast call demand (volume, type, and timing of calls) and to centrally compare information on site staff resources (e.g., availability and skills) in relation to forecasted demand to help ensure that network staffing schedules make optimum use of available site staffing. This system is also expected to identify individual site staffing options for meeting network requirements, thus reducing the amount of time site managers spend on scheduling staff. According to Operations Center officials, the contractor was still refining the commercial version of the system because it was not designed to handle the size and complexity of IRS’ toll-free operations (e.g., the number of call sites and customer service representatives and the range of topics). According to the project leader responsible for this system, both system hardware and software were in place at all call centers prior to October 2000, but the software is not yet fully operational. Even though IRS now has 2 years of information based on operating 24 hours a day, it did not gather that data in a consistent format. The system’s forecasting and scheduling capability will not be usable until IRS has collected at least 1 year of call demand data in a consistent format. The project leader was not sure when IRS would have these data because data collection efforts were delayed in order to make changes that would allow IRS to capture more data than originally planned and in a reconfigured format. Also, the planned transfer of certain functions from the Philadelphia Service Center to the Operations Center was more than a year behind schedule in October 2000.
Moreover, the project leader said IRS’ restructuring could cause further delays in achieving full system capability. Other organizations included in our August 2000 report used an automated system similar to the one IRS is implementing. For example, one company used an automated system to identify its short- and long-term staffing requirements. The system assisted call center managers in forecasting call demands and scheduling staff to meet the demands. Officials said the system also enabled the company to significantly reduce the time needed to perform these tasks. It forecasted call demand down to half-hour intervals, based on historical data trends. Considering various assumptions about call patterns and information such as the number of customer service representatives available to take calls, on leave, or in training, the system also generated a staffing schedule. The schedules were reviewed daily and adjusted as needed. IRS also faces challenges in evaluating its human capital management practices. According to our self-assessment checklist, all human capital policies should be designed, implemented, and assessed by the standard of how well they help the organization pursue its mission, goals, and objectives. While IRS evaluates its practices to make improvements in some areas, such as recruiting or training, the evaluations do not assess how individual or collective human capital policies and practices affect its ability to achieve level-of-service goals. Its evaluations also generally did not consider how improving practices in one area might affect other areas. Unlike IRS, some organizations consider how their human capital management practices affect their operational goals and how changing one practice may affect another. Without expanding its evaluations to include such analyses, IRS is unlikely to optimize the efficiency and effectiveness of its toll-free operations. Except for retention, IRS evaluated its human capital practices to some extent in most areas—including recruiting, training, and scheduling—in order to improve those areas. These evaluations generally focused on how each practice could be improved for the next year. While these evaluations are useful for making short-term adjustments, they do not provide a basis for strategic planning because they do not assess how human capital management practices may need to be revised to support a long-term level-of-service goal. Additionally, IRS evaluations generally do not consider how making changes in one area affects other areas. For example, IRS evaluations of recruiting did not consider how improving retention practices might reduce attrition, decrease resources spent on recruiting and training new employees, or increase the resources available for improving the skills and productivity of existing employees. Unlike IRS, other organizations have evaluated the effects of changes in one human capital practice on other practices as well as on the overall results of their telephone assistance operations. For example, one company used training results to identify successful new hires. First, officials determined the characteristics that recruits who did well during training had in common. Then, the company changed its recruiting practices to identify and hire similar people. The Incoming Calls Management Institute recommended doing something similar—identify the personality traits and skills of top-performing customer service representatives and use those traits to help assess persons applying for a representative position.
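The trait-based screening idea that the Incoming Calls Management Institute recommended can be made concrete with a short sketch, written in Python. The trait names and records below are hypothetical, not drawn from IRS or the company cited; the sketch simply compares how often each trait appears among top-performing representatives versus the rest, so that traits with large gaps can become candidate screening criteria.

from collections import Counter

# Hypothetical records: each representative's traits and a top-performer flag.
representatives = [
    ({"patience", "clear speech", "tax coursework"}, True),
    ({"patience", "clear speech"}, True),
    ({"patience", "tax coursework"}, True),
    ({"clear speech"}, False),
    ({"tax coursework"}, False),
    ({"sales experience"}, False),
]

def trait_rates(records):
    # Share of records in which each trait appears.
    counts = Counter()
    for traits, _ in records:
        counts.update(traits)
    return {trait: count / len(records) for trait, count in counts.items()}

top_rates = trait_rates([r for r in representatives if r[1]])
rest_rates = trait_rates([r for r in representatives if not r[1]])

# Traits overrepresented among top performers suggest screening criteria.
for trait in sorted(top_rates, key=lambda t: top_rates[t] - rest_rates.get(t, 0.0), reverse=True):
    print(f"{trait}: {top_rates[trait]:.0%} of top performers vs. "
          f"{rest_rates.get(trait, 0.0):.0%} of others")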
IRS faces significant challenges in managing its human capital to provide telephone customer service to taxpayers. IRS has made or planned substantial improvements to help meet these challenges, but further improvements are needed. IRS will have difficulty improving its telephone service without setting a long-term, desired service-level goal that is based on the needs of taxpayers, as well as annual goals aimed at making progress toward reaching its long-term goal. As the Government Performance and Results Act and SSA experience suggest, IRS will also need support for its long- and short-term goals from congressional stakeholders. IRS’ telephone customer service workforce represents a substantial human capital investment in providing assistance to taxpayers. To get the most from this investment, IRS must be able to (1) target scarce training resources where they are most needed to optimize call center and network performance, (2) minimize turnover of trained and experienced customer service representatives to avoid unnecessary recruiting and training expenditures and enhance productivity, and (3) determine how its individual or collective human capital policies and practices affect its ability to achieve customer service goals and how changes in one or more human capital management practices will affect other practices. However, until IRS establishes a system for assessing competency gaps to identify the refresher training needs of individual customer service representatives, it cannot effectively target scarce training resources to meet individual training needs. Without a system for monitoring attrition, identifying its causes, and taking steps to address them, IRS cannot ensure that its recruiting and training resources are used efficiently. And, unless IRS considers its human capital management practices’ contribution to achieving overall service goals and considers the interrelationships among its toll-free service human capital practices, it lacks a good basis for assessing the soundness of those human capital practices. We are recommending that the Commissioner of Internal Revenue take several steps to improve IRS’ human capital management practices related to providing telephone customer service. Specifically, the Commissioner should establish a long-term, desired service-level goal based on taxpayers’ needs, together with annual goals designed to make progress toward reaching that long-term goal over time, and work with congressional and other stakeholders to obtain their support and the resources needed to reach those goals; establish a system for assessing customer service representatives’ competency gaps and meeting the refresher training needs identified by the assessments; develop a system for monitoring call center attrition and identifying its causes and use the information gathered from that system to develop, as appropriate, strategies for dealing with the attrition of customer service representatives; and ensure that IRS’ evaluations of human capital management practices consider the effects of those practices on its ability to achieve long- and short-term customer service goals and the interrelationships among human capital practices. The Commissioner of Internal Revenue provided written comments on a draft of this report in a January 12, 2001, letter, which is reprinted in appendix II. We also met with senior IRS officials on January 4, 2001, to discuss our draft report and to obtain updated information on IRS’ new toll-free measures and goals. 
The Commissioner agreed with our recommendations, which he said should improve performance in this critical area. In addition, he provided information summarizing IRS’ efforts relating to each recommendation and commented that IRS’ efforts reflected the constructive dialog between IRS and our staff. We incorporated the new information and modified the report, where appropriate, to reflect IRS efforts. The Commissioner’s letter stated that IRS had instituted an agencywide strategic planning process in March 2000 that links the budget and available resources to its strategies and improvement projects, but also recognized the need to strengthen that new process. Toward this end, the Commissioner stated that IRS’ fiscal year 2002 Strategic Plan and Budget reflects a 74-percent level-of-service goal, with a goal of reaching 85 to 90 percent by fiscal year 2003. This plan was not yet available as we were preparing this report. He also stated that an initiative was under way to improve workload planning to ensure that customer needs are considered during the planning and budgeting process. The Commissioner’s letter did not say how the cited workload planning initiative would identify and assess customer needs. Based on the Commissioner’s comments, significant efforts were under way or planned to help ensure that customer service representatives will have the competencies and training needed to respond to taxpayer calls. In addition to the targeted training and planned specialization discussed in this report, for example, IRS plans to establish competency-based recruiting and retention methods to help ensure that IRS recruits and retains individuals who are well-suited to telephone customer service work. The Commissioner’s comments also stated that IRS’ competency-based management plans include the use of “assessment instruments to identify training needs.” These initiatives seem to be promising and may form a basis for identifying individual refresher training needs and ensuring that these needs are met. The Commissioner’s comments also recognized the importance of retaining skilled representatives. His comments identified several efforts that focused on identifying employees who may be more likely to remain with IRS. He did not comment on monitoring why employees leave or on using this information to strengthen IRS’ efforts to retain skilled representatives. Regarding IRS’ evaluations of its human capital practices, the Commissioner’s comments did not respond directly to the primary point of our recommendation—that IRS evaluations should consider the effects of its practices on its ability to achieve its long- and short-term customer service goals. However, the Commissioner did say that IRS has embraced our Human Capital Self-Assessment Checklist for Agency Leaders. IRS had used it as a diagnostic tool in its recent review of its mid- and top-level management realignment process and planned to use it again in fiscal year 2001 to “conduct an overview of the status of human capital practices throughout the Service.” Our checklist provides a framework by which agency leaders can develop informed views of their agencies’ human capital policies and practices. The Commissioner also objected to our comparing IRS’ 1998 performance with performance in subsequent years, because of the many changes to IRS’ operating environment, such as enterprise call management and 24-hour operations. This report compared IRS’ reported tax law and account accuracy in 1998 and 1999.
As stated in our evaluation of the Commissioner’s comments on our 2000 filing season report, we believe it is appropriate to compare IRS’ performance before and after the operational changes mentioned above. In reevaluating the examples we used, however, we decided to eliminate our reference to IRS’ reported tax law accuracy because we learned that the methods used to measure tax law accuracy changed in 1999, and thus, results may not be comparable. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Ranking Minority Member of the Subcommittee on Oversight; the Chairman and Ranking Minority Member, Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; and Director, Office of Management and Budget. We will also send copies to others upon request. If you have any questions, please call me at (202) 512-9110 or Carl Harris at (404) 679-1900. Key contributors to this report are acknowledged in appendix III. Table 2 describes the organization, mission, and telephone center operations of the private and public organizations that were included in the scope of our August 2000 report, Customer Service: Human Capital Management at Selected Public and Private Call Centers (GAO/GGD-00-161, Aug. 22, 2000).

General Electric (GE) Answer Center. Mission: One of 11 core businesses of General Electric. Manufactures appliances, including refrigerators, ranges, dishwashers, microwave ovens, washing machines, dryers, water filtration systems, and heating systems. Also provides repair and maintenance services on appliances, operating a nationwide fleet of service vans. Telephone customer service operation: A telephone hotline that provided consumers with product information and responded to questions about repairs. The answer center, located in Louisville, KY, handled about 2 million calls each year. About 200 telephone customer service representatives responded to inquiries 24 hours a day, 7 days a week.

Hewlett-Packard Company. Mission: Designer, developer, and manufacturer of computer products, including personal computers, printers, computer workstations, and a range of hardware and software. Telephone customer service operation: The Hewlett-Packard Company Executive Customer Advocacy Group provided support for customers contacting Corporate Headquarters regarding issues or concerns with products and services. The hotline was located in Palo Alto, CA. It operated from 8 a.m. to 5 p.m., Monday through Friday, with a staff of about 22 full-time-equivalent employees who were Hewlett-Packard Company retirees in part-time positions.

Illinois Department of Revenue. Mission: Collects taxes for the state and its local governments, including income and business taxes on individuals and businesses, income and sales taxes, taxes on public utilities, tobacco and liquor, motor fuels and vehicles. The department also administers tax relief programs for the elderly and disabled and provides property assessments among the state’s counties. Telephone customer service operation: One call center in Springfield, IL, was staffed by 34 full-time telephone customer service representatives, who were assisted during busy times by cross-trained employees from other areas within the taxpayer assistance division. Toll-free telephone lines were open from 8 a.m. to 5 p.m., Monday through Friday, with extended weekday hours and one Saturday opening during filing season. Automated service was available 24 hours a day, 7 days a week. The call center provided taxpayers with help in completing their returns and answered questions about taxes, returns, bills, and notices that had been filed.

International Business Machines (IBM). Mission: Designer, developer, and manufacturer of information technologies, including computer systems, software, networking systems, storage devices, and microelectronics. Telephone customer service operation: In the Product Sales and Service Division, about 6,900 telephone customer service employees provided information on product sales and service. Call centers operated 24 hours a day, 7 days a week.

Kaiser Permanente. Mission: America’s largest not-for-profit health maintenance organization, serving over 8 million members in 17 states and the District of Columbia. An integrated health delivery system, Kaiser Permanente organizes and provides or coordinates members’ care, including preventive care, hospital and medical services, and pharmacy services. Telephone customer service operation: Kaiser Permanente had 17 call centers nationwide, with 12 centers located in California, the largest region. Regional call centers operated independently. The California region, where we visited, had 5.9 million members, while other regions had fewer than 1 million members each. The two largest call centers were located in Stockton and Corona, CA. Together, they employed about 475 telephone customer service representatives and about 80 management and support staff. Hours of operation were 7 a.m. to 7 p.m., 7 days a week. The member service call centers provided answers to questions on health plan-related topics, including benefits, copayments, claims, Medicare, eligibility, available services, and physician information.

Social Security Administration (SSA). Mission: Manages the nation’s social insurance program, consisting of retirement, survivors, and disability insurance and supplemental security income benefits for the aged, blind, and disabled. Also assigns Social Security Numbers to U.S. citizens and maintains earnings records for workers under these numbers. Telephone customer service operation: Thirty-six call centers nationwide were staffed by 3,100 full-time, 700 part-time, and up to 60 percent of about 4,100 spike employees who were available to assist at busy times. Toll-free telephone lines were open from 7 a.m. to 7 p.m., Monday through Friday, to answer callers’ questions about Social Security benefits and programs.

United Parcel Service (UPS). Mission: World’s largest package distribution company, it transports more than 3 billion parcels and documents annually. Telephone customer service operation: Nine call centers nationwide were staffed by over 6,800 customer service representatives. Eight centers were open from 7 a.m. to 9 p.m., Monday through Friday. One center in San Antonio, TX, operated 24 hours a day, 7 days a week. Seven of the nine call centers were staffed by contract employees. The Newport News, VA, call center, which was a contract facility we visited, had 230 representatives who handled calls related to pick-up, tracking, and claims.

Utah State Tax Commission. Mission: Coordinator of Utah taxes and fees, including taxes on income, sales, property, motor vehicles, fuel, beer, and cigarettes. Telephone customer service operation: Three call centers—a main call center, motor vehicle center, and collection center—operated weekdays from 8 a.m. to 5 p.m. with about 35 telephone customer service representatives. The call centers responded to about 15,000 to 20,000 inquiries a month dealing with a range of questions on programs administered by the Commission.

For these organizations, we conducted a telephone interview in which we asked managers of telephone customer service operations several key semistructured interview questions.
However, we did not have detailed discussions with officials and employees at various levels of the organizations. We judgmentally selected the organizations to visit and telephone by reviewing literature on innovations in human capital management and by obtaining opinions from experts on what organizations they thought provided noteworthy or innovative human capital management in their call center operations. We chose telephone customer service operations that dealt with tax questions or specific subjects, such as benefits, investments, and installation and operation of technical equipment, that were comparable in complexity to tax issues addressed by IRS customer service representatives. Specifically, the director for Workplace Quality at the U.S. Office of Personnel Management identified the SSA telephone customer service operation as a public sector organization that is known for effective human capital management. We visited the Illinois and California State tax agencies and telephoned the Utah State Tax Commission on the basis of recommendations of an official from the Federation of Tax Administrators. The Canada Customs and Revenue Agency was cited in literature as having an internationally recognized reputation for high-quality taxpayer service and had participated, along with IRS and the tax agencies of Australia and Japan (members of the Pacific Association of Tax Administrators), in a benchmarking study of customer service best practices. Two private sector companies we visited—Kaiser Permanente and Allstate Insurance—were selected in consultation with the executive director of the Private Sector Council. The Council, with membership including about 50 major U.S. corporations, seeks to improve the productivity, management, and efficiency of government through cooperation with the private sector. Members volunteer expertise to government agencies by participating with them in projects that are coordinated through the Council. The other private organization we visited, the United Parcel Service, was selected in follow-up to our participation in a congressional delegation and IRS visit to its Atlanta, GA, headquarters to discuss human capital and telephone customer service issues. The private call centers we telephoned—General Electric (GE) Answer Center, Hewlett-Packard Company Executive Customer Advocacy Group, and International Business Machines (IBM) Business Product Division, and/or their parent corporations—were cited in best practices literature for their effective human capital management. In addition to those named above, Robert Arcenia, Ronald Heisterkamp, Mary Jo Lewnard, and Shellee Soliday made key contributions to this report. | Each year, the Internal Revenue Service (IRS) determines the staffing level for its toll-free telephone customer service operations. GAO found that IRS lacks a long-term telephone customer service goal that reflects the needs of taxpayers and the costs and benefits of meeting that goal. Rather, IRS annually determines the level of funding it will seek for its customer service workforce, using its judgment of how to best balance service and compliance activities. IRS then calculates the level of service that funding levels will provide. This approach is inconsistent with the Government Performance and Results Act and the practice of selected public and private call centers that field questions. IRS recognizes the shortcomings of its personnel management and will include performance measures and goals in its 2002 strategic plan.
According to IRS officials, the agency also faces challenges in recruiting, training, retaining, and scheduling customer service representatives. IRS is developing a strategy to address each of these issues. |
Mobilization is the process of assembling and organizing personnel and equipment, activating or federalizing units and members of the National Guard and Reserves for active duty, and bringing the armed forces to a state of readiness for war or other national emergency. It is a complex undertaking that requires constant and precise coordination between a number of commands and officials. Mobilization usually begins when the President invokes a mobilization authority and ends with the voluntary or involuntary mobilization of an individual Reserve or National Guard member. Demobilization is the process necessary to release from active duty units and members of the National Guard and Reserve components who were ordered to active duty under various legislative authorities. Mobilization and demobilization times can vary from a matter of hours to months, depending on a number of factors. For example, many air reserve component units are required to be available to mobilize within 72 hours, while Army National Guard brigades may require months of training as part of their mobilizations. Reserve component members’ usage of accrued leave can greatly affect demobilization times. Actual demobilization processing typically takes a matter of days once the member arrives back in the United States. However, since members earn 30 days of leave each year, they could have up to 60 days of leave available to them at the end of a 2-year mobilization. DOD has six reserve components: the Army Reserve, the Army National Guard, the Air Force Reserve, the Air National Guard, the Naval Reserve, and the Marine Corps Reserve. Reserve forces can be divided into three major categories: the Ready Reserve, the Standby Reserve, and the Retired Reserve. The Total Reserve had approximately 1.2 million National Guard and Reserve members at the end of fiscal year 2004. However, only the 1.1 million members of the Ready Reserve were subject to involuntary mobilization under the partial mobilization declared by President Bush on September 14, 2001. Within the Ready Reserve, there are three subcategories: the Selected Reserve, the Individual Ready Reserve (IRR), and the Inactive National Guard. Members of all three subcategories are subject to mobilization under a partial mobilization. At the end of fiscal year 2004, DOD had 859,406 Selected Reserve members. The Selected Reserve’s members included individual mobilization augmentees—individuals who train regularly, for pay, with active component units—as well as members who participate in regular training as members of National Guard or Reserve units. At the end of fiscal year 2004, DOD had 284,201 IRR members. During a partial mobilization, these individuals—who were previously trained during periods of active duty service—can be mobilized to fill requirements. Each year, the services transfer thousands of personnel who have completed the active duty or Selected Reserve portions of their military contracts, but who have not reached the end of their military service obligations, to the IRR. However, IRR members do not participate in any regularly scheduled training, and they are not paid for their membership in the IRR. At the end of fiscal year 2004, the Inactive National Guard had 1,428 Army National Guard members. This subcategory contains individuals who are temporarily unable to participate in regular training but who wish to remain attached to their National Guard unit.
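The way these categories nest can be summarized with a short calculation. The following Python sketch is purely illustrative (the data structure and variable names are ours); it uses the fiscal year 2004 end strengths cited above to show how the three subcategories roll up into the roughly 1.1 million-member Ready Reserve that was subject to the partial mobilization.

    # Fiscal year 2004 end strengths from the text; structure is illustrative only.
    ready_reserve = {
        "Selected Reserve": 859_406,
        "Individual Ready Reserve (IRR)": 284_201,
        "Inactive National Guard": 1_428,
    }

    total = sum(ready_reserve.values())
    print(f"Ready Reserve total: {total:,}")  # 1,145,035 -- about 1.1 million
    # The Total Reserve (about 1.2 million) also includes the Standby Reserve and
    # Retired Reserve, which were not subject to the partial mobilization authority.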
Most reservists who were called to active duty for other than normal training after September 11, 2001, were mobilized under one of the three legislative authorities listed in table 1. On September 14, 2001, President Bush declared that a national emergency existed as a result of the attacks on the World Trade Center in New York City, New York, and the Pentagon in Washington, D.C., and he invoked 10 U.S.C. § 12302, which is commonly referred to as the “partial mobilization authority.” On September 20, 2001, DOD issued mobilization guidance that, among a host of other things, directed the services as a matter of policy to specify in initial orders to Ready Reserve members that the period of active duty service under 10 U.S.C. § 12302 would not exceed 12 months. However, the guidance allowed the service secretaries to extend orders for an additional 12 months or to remobilize reserve component members under the partial mobilization authority as long as an individual member’s cumulative service did not exceed 24 months under 10 U.S.C. § 12302. The guidance further specified that “No member of the Ready Reserve called to involuntary active duty under 10 U.S.C. 12302 in support of the effective conduct of operations in response to the World Trade Center and Pentagon attacks, shall serve on active duty in excess of 24 months under that authority, including travel time to return the member to the residence from which he or she left when called to active duty and use of accrued leave.” The guidance also allowed the services to retain members on active duty after they had served 24 or fewer months under 10 U.S.C. § 12302 with the member’s consent if additional orders were authorized under 10 U.S.C. § 12301(d). Combatant commanders are principally responsible for the preparation and implementation of operation plans that specify the necessary level of mobilization of reserve component forces. The military services are the primary executors of mobilization. At the direction of the Secretary of Defense, the services prepare detailed mobilization plans to support the operation plans and provide forces and logistical support to the combatant commanders. The Assistant Secretary of Defense for Reserve Affairs, who reports to the Under Secretary of Defense for Personnel and Readiness, is to provide policy, programs, and guidance for the mobilization and demobilization of the reserve components. The Chairman of the Joint Chiefs of Staff, after coordination with the Assistant Secretary of Defense for Reserve Affairs, the secretaries of the military departments, and the commanders of the Unified Combatant Commands, is to advise the Secretary of Defense on the need to augment the active forces with members of the reserve components. The Chairman of the Joint Chiefs of Staff also has responsibility for recommending the period of service for units and members of the reserve components ordered to active duty. The service secretaries are to prepare plans for mobilization and demobilization and to periodically review and test the plans to ensure the services’ capabilities to mobilize reserve forces and to assimilate them effectively into the active forces. Figure 1 shows reserve component usage on a per capita basis since fiscal year 1989 and demonstrates the dramatic increase in usage that occurred after September 11, 2001. 
It shows that the ongoing usage—which includes support for operations Noble Eagle, Enduring Freedom, and Iraqi Freedom—exceeds the usage rates during the 1991 Persian Gulf War in both length and magnitude. While reserve component usage increased significantly after September 11, 2001, an equally important shift occurred at the end of 2002. Following the events of September 11, 2001, the Air Force initially used the partial mobilization authority more than the other services. However, service usage shifted in 2002, and by the end of that year, the Army had more reserve component members mobilized than all the other services combined. Since that time, usage of the Army’s reserve component members has continued to dominate DOD’s figures. On January 19, 2005, more than 192,000 National Guard and Reserve members were mobilized. About 85 percent of these mobilized personnel were members of the Army National Guard or Army Reserve. Under the current partial mobilization authority, DOD increased not only the numbers of reserve component members that it mobilized, but also the length of the members’ mobilizations. The average mobilization for Operations Desert Shield and Desert Storm in 1990-1991 was 156 days. However, as of March 31, 2004, the average mobilization for the three ongoing operations had increased to 342 days, and that figure was expected to continue to rise. DOD does not have the strategic framework and associated policies necessary to maximize reserve component force availability for a long-term Global War on Terrorism. The availability of reserve component forces to meet future requirements is greatly influenced by DOD’s implementation of the partial mobilization authority and by the department’s personnel policies. Furthermore, many of DOD’s policies that affect mobilized reserve component personnel were implemented in a piecemeal manner, and were focused on the short-term needs of the services and reserve component members rather than on long-term requirements and predictability. The availability of reserve component forces will continue to play an important role in the success of DOD’s missions because requirements that increased significantly after September 11, 2001, are expected to remain high for the foreseeable future. As a result, there are early indicators that DOD may have trouble meeting predictable troop deployment and recruiting goals for some reserve components and occupational specialties. On September 14, 2001, DOD broke with its previous pattern of addressing mobilization requirements with a presidential reserve call-up before moving to a partial mobilization. By 2004, DOD was facing reserve component personnel shortages and considered a change in its implementation of the partial mobilization authority. The manner in which DOD implements the mobilization authorities currently available can result in either an essentially unlimited supply of forces or a shortage of forces available for deployment, at least in the short term. DOD has used two mobilization authorities to gain involuntary access to its reserve component forces since 1990. In 1990, the President invoked Title 10 U.S.C. Section 673b, allowing DOD to mobilize Selected Reserve members for Operation Desert Shield. The provision was then commonly referred to as the Presidential Selected Reserve Call-up authority and is now called the Presidential Reserve Call-up authority.
This authority limits involuntary mobilizations to not more than 200,000 reserve component members at any one time, for not more than 270 days, for any operational mission. On January 18, 1991, the President invoked Title 10 U.S.C. Section 673, commonly referred to as the “partial mobilization authority,” thus providing DOD with additional authority to respond to the continued threat posed by Iraq’s invasion of Kuwait. The partial mobilization authority limits involuntary mobilizations to not more than 1 million reserve component members at any one time, for not more than 24 consecutive months, during a time of national emergency. During the years between Operation Desert Shield and September 11, 2001, DOD invoked a number of separate mission-specific Presidential Reserve Call-up authorities for operations in Bosnia, Kosovo, Southwest Asia, and Haiti, and the department did not seek a partial mobilization authority for any of these operations. After the events of September 11, 2001, the President immediately invoked the partial mobilization authority without a prior Presidential Reserve Call-up. Since the partial mobilization for the Global War on Terrorism went into effect in 2001, DOD has used both the partial mobilization authority and the Presidential Reserve Call-up authorities to involuntarily mobilize reserve component members for operations in the Balkans. The manner in which DOD implements the partial mobilization authority affects the number of reserve component forces available for deployment. When DOD issued its initial guidance concerning the partial mobilization authority in 2001, it limited mobilization orders to 12 months but allowed the service secretaries to extend the orders for an additional 12 months or remobilize reserve component members, as long as an individual member’s cumulative service under the partial mobilization authority did not exceed 24 months. Under this cumulative implementation approach, it is possible for DOD to run out of forces during an extended conflict, such as a long-term Global War on Terrorism. During our 2003-2004 review of mobilization and demobilization issues, DOD was already facing some critical personnel shortages. At that time, to expand its pool of available personnel, DOD was considering a policy shift that would have authorized mobilizations under the partial mobilization authority of up to 24 consecutive months with no limit on cumulative months. Under the considered approach, DOD would have been able to mobilize its forces for less than 24 months, send them home, and then remobilize them, repeating this cycle indefinitely and providing essentially an unlimited flow of forces. After our review was complete, DOD said it would continue its implementation of the partial mobilization authority that limits mobilizations to a cumulative total of 24 months. However, DOD did not clarify how it planned to meet its longer-term requirements for the Global War on Terrorism as successive groups of reserve component personnel reach the 24-month mobilization point. DOD’s policies related to reserve component mobilizations were not linked within the context of a strategic framework to meet the force availability goals, and many policies have undergone significant changes. Overall, the policies reflected DOD’s past use of the reserve components as a strategic force, rather than DOD’s current use of the reserve component as an operational force responding to the increased requirements of the Global War on Terrorism.
Faced with some critical personnel shortages, DOD focused its policies on the short-term needs of the services and reserve component members, rather than on long-term requirements and predictability. Lacking a strategic framework containing human capital goals concerning reserve component force availability to guide its policies, the Office of the Secretary of Defense (OSD) and the services made several changes to their policies to increase the availability of the reserve component forces. As a result of these changes, predictability declined for reserve component members. Specifically, reserve component members have faced uncertainties concerning the cohesion of their units, the likelihood of their mobilizations, the length of their service commitments, the length of their overseas rotations, the types of missions they would be asked to perform, and the availability of their equipment. The partial mobilization authority allows DOD to involuntarily mobilize members of the Ready Reserve, including the IRR; but after the President invoked the partial mobilization authority on September 14, 2001, DOD and service policies encouraged the use of volunteers and generally discouraged the involuntary mobilization of IRR members. DOD officials stated that they wanted to focus involuntary mobilizations on the paid, rather than unpaid, members of the reserve components. However, our prior reports documented the lack of predictability that resulted from the volunteer and IRR policies. Our August 2003 mobilization report showed that the policies were disruptive to the integrity of Army units because there had been a steady flow of personnel among units. Personnel were transferred from nonmobilizing units to mobilizing units that were short of personnel, and when the units that had supplied the personnel were later mobilized, they in turn were short of personnel and had to draw personnel from still other units. From September 11, 2001, to May 15, 2004, the Army Reserve mobilized 110,000 reservists, but more than 27,000 of these reservists were transferred and mobilized with units that they did not normally train with. In addition, our November 2004 report on the National Guard noted that between September 11, 2001, and July 2004, the Army National Guard had transferred over 74,000 personnel to deploying units. The reluctance to use the IRR is reflected in the differences in usage rates between Selected Reserve and IRR members. About 42 percent of the personnel who were members of the Selected Reserve on November 30, 2004, had been mobilized since September 2001, compared to about 3 percent of the IRR members. Within the Army, use of the IRR had been less than 2 percent. Because the IRR makes up about one-quarter of the Ready Reserve, policies that discourage its use cause members of the Selected Reserve to bear a greater share of the exposure to the hazards associated with national security and military requirements, and could cause DOD’s pool of available reserve component personnel to shrink by more than 276,000 members. At various times since September 2001, all of the services have had “stop-loss” policies in effect. These policies are short-term measures that increase the availability of reserve component forces while decreasing predictability for reserve component members who are prevented from leaving the service at the end of their enlistment periods. Stop-loss policies are often implemented to retain personnel in critical or high-use occupational specialties.
The only stop-loss policy in effect when we ended our 2004 review of mobilization and demobilization issues was an Army policy that applied to units rather than individuals in critical occupations. Under that policy, Army reserve component personnel were not permitted to leave the service from the time their unit was alerted until 90 days after the date when their unit was demobilized. Because many Army units undergo several months of training after being mobilized but before being deployed overseas for 12 months, stop-loss periods can reach 2 years or more. According to Army officials, a substantial number of reserve component members have been affected by the changing stop-loss policies. As of June 30, 2004, the Army had over 130,000 reserve component members mobilized and thousands more who had been alerted or who had been demobilized for less than 90 days. Because they have remaining service obligations, many of these reserve component members would not have been eligible to leave the Army even if stop-loss policies had not been in effect. However, from fiscal year 1993 through fiscal year 2001, Army National Guard annual attrition rates exceeded 16 percent, and Army Reserve rates exceeded 25 percent. Even a 16 percent attrition rate means that 20,800 of the mobilized 130,000 reserve component soldiers would have left their reserve component each year. If attrition rates exceed 16 percent or the thousands of personnel who are alerted or who have been demobilized for less than 90 days are included, the numbers of personnel affected by stop-loss policies would increase even more. When the Army’s stop-loss policies are eventually lifted, thousands of servicemembers could retire or leave the service all at once, and the Army’s reserve components could be confronted with a huge increase in recruiting requirements. Following DOD’s issuance of guidance concerning the length of mobilizations in September 2001, the services initially limited most mobilizations to 12 months, and most services maintained their existing operational rotation policies to provide deployments of a predictable length that are preceded and followed by standard maintenance and training periods. However, the Air Force and the Army later increased the length of their rotations, and the Army increased the length of its mobilizations as well. These increases in the length of mobilizations and rotations increased the availability of reserve component forces, but they decreased predictability for individual reserve component members who were mobilized and deployed under one set of policies but later extended as a result of the policy changes. From September 11, 2001, to March 31, 2004, the Air National Guard mobilized more than 31,000 personnel, and the Air Force Reserve mobilized more than 24,000 personnel. Although most Air Force mobilizations were for 12 months or less, more than 10,000 air reserve component members had their mobilization orders extended to 24 months. Most of these personnel were in security-related occupations. Before September 2001, the Army mobilized its reserve component forces for up to 270 days under the Presidential Reserve Call-up authority, and it deployed these troops overseas for rotations that lasted about 6 months. When it began mobilizing forces under the partial mobilization authority in September 2001, the Army generally mobilized troops for 12 months. However, troops that were headed for duty in the Balkans continued to be mobilized under the Presidential Reserve Call-up authority.
The Army’s initial deployments to Iraq and Afghanistan were scheduled for 6 months, just like the overseas rotations for the Balkans. Eventually, the Army increased the length of its rotations to Iraq and Afghanistan to 12 months. This increased the availability of reserve component forces, but it decreased predictability for members who were mobilized and deployed during the transition period when the policy changed. When overseas rotations were extended to 12 months, mobilization periods, which must include mobilization and demobilization processing time, training time, and time for the reserve component members to take any leave that they earn, required a corresponding increase in length. DOD has a number of training initiatives under way that will increase the availability of its reserve component forces to meet immediate needs. Servicemembers are receiving limited training—called “cross-training”—that enables them to perform missions that are outside their area of expertise. In the Army, field artillery and air defense artillery units have been trained to perform some military police duties. Air Force and Navy personnel received additional training and are providing the Army with additional transportation assets. DOD also has plans to permanently convert thousands of positions from low-use career fields to stressed career fields. Because the combatant commander has required Army National Guard units to have modern, capable, and compatible equipment for recent operations, the Army National Guard adapted its units and transferred equipment to deploying units from nondeploying units. However, this has made equipping units for future operations more challenging. National Guard data showed that between September 2002 and June 2004, the Army National Guard had transferred more than 35,000 pieces of equipment to units that were deploying in support of operations in Iraq. The equipment included night vision goggles, machine guns, radios, chemical monitors, and vehicles. As a result, it has become increasingly challenging for the National Guard to ready later deploying units to meet warfighting requirements. While it remains to be seen how the uncertainty resulting from changing mobilization and personnel policies will affect recruiting, retention, and the long-term availability of the reserve components, there are already indications that some portions of the force are being stressed. For example, the Army National Guard achieved only 87 percent of its recruiting goals in both fiscal years 2003 and 2004, and in the first quarter of fiscal year 2005 it achieved only 80 percent of its goal. The Secretary of Defense established a force-planning metric to limit involuntary mobilizations to “reasonable and sustainable rates” and has set the metric for such mobilizations at 1 year out of every 6. However, on the basis of current and projected usage, it appears that DOD may face difficulties achieving its goal within the Army’s reserve components in the near term. Since February 2003, the Army has continuously had between 20 and 29 percent of its Selected Reserve members mobilized. To illustrate, even if the Army were to maintain the lower 20 percent mobilization rate for Selected Reserve members, it would need to mobilize one-fifth of its Selected Reserve members each year, meaning that members would average 1 year of mobilization out of every 5, which exceeds the Secretary’s 1-in-6 metric.
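This steady-state relationship can be made explicit with a short calculation. The sketch below is illustrative only, not DOD’s actual force-planning model; it assumes 1-year mobilizations rotated evenly across the force, in which case each member is mobilized 1 year out of every 1/fraction years.

    # Illustrative steady-state arithmetic, assuming 1-year mobilizations spread
    # evenly across the force; not DOD's actual force-planning model.
    def years_per_mobilization_cycle(fraction_mobilized: float) -> float:
        """With 1-year tours, a member serves 1 year out of every 1/f years."""
        return 1.0 / fraction_mobilized

    GOAL_YEARS = 6  # Secretary of Defense metric: mobilized 1 year out of every 6
    for f in (0.20, 0.29):  # the Army's continuous mobilization range since February 2003
        cycle = years_per_mobilization_cycle(f)
        print(f"{f:.0%} mobilized -> 1 year in {cycle:.1f}; meets goal: {cycle >= GOAL_YEARS}")

Run against the Army’s observed range, the calculation yields mobilization cycles of 1 year in 5.0 and 1 year in 3.4, both short of the 1-in-6 goal.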
DOD is aware that certain portions of the force are used at much higher rates than others, and it plans to address some of the imbalances by converting thousands of positions from lower-demand specialties into higher-demand specialties. However, these conversions will take place over several years, and even when the positions are converted, it may take some time to recruit and train people for the new positions. It is unclear how DOD plans to address its longer-term personnel requirements for the Global War on Terrorism, given its current implementation of the partial mobilization authority. Requirements for reserve component forces increased dramatically after September 11, 2001, and are expected to remain high for the foreseeable future. In the initial months following September 11, 2001, the Air Force used the partial mobilization authority more than the other services, and it reached its peak with almost 38,000 reserve component members mobilized in April 2002. However, by July 2002, Army mobilizations surpassed those of the Air Force, and since December 2002, the Army has had more reserve component members mobilized than all the other services combined. According to data from the Office of the Assistant Secretary of Defense for Reserve Affairs (OASD/RA), about 42 percent of DOD’s Selected Reserve forces had been mobilized from September 14, 2001, to November 30, 2004. Although many of the members who have been called to active duty under the partial mobilization authority have been demobilized, as of January 19, 2005, more than 192,000 of DOD’s reserve component members were still mobilized and serving on active duty, and DOD has projected that for the next 3 to 5 years it will have more than 100,000 reserve component members mobilized, with most of these personnel continuing to come from the Army National Guard or Army Reserve. While Army forces may face the greatest levels of involuntary mobilizations over the next few years, all the reserve components have career fields that have been highly stressed. For example, across the services, 82 percent of enlisted security forces have been called up since September 11, 2001. Our September 2004 report detailed Navy, Marine Corps, and Air Force career fields that have been stressed. In June 2004, DOD noted that about 30,000 reserve members had already been mobilized for 24 months. Under DOD’s cumulative approach, these personnel will not be available to meet future requirements under the current partial mobilization. The shrinking pool of available personnel, along with the lack of a strategic plan to clarify goals regarding the reserve component force’s availability, will present the department with additional short- and long-term challenges as it tries to fill requirements for mobilized reserve component forces. As the Global War on Terrorism stretches into its fourth year, DOD officials have made it clear that they do not expect the war to end soon. Furthermore, indications exist that certain components and occupational specialties are being stressed, and the long-term impact of this stress on recruiting and retention is unknown. Moreover, although DOD has a number of rebalancing efforts under way, these efforts will take years to implement.
Because this war is expected to last a long time and requires far greater reserve component personnel resources than any of the smaller operations of the previous two decades, DOD can no longer afford individual policies that are developed to maximize short-term benefits and must have an integrated set of policies that address both the long-term requirements for reserve component forces and individual reserve component members’ needs for predictability. For example, service rotation policies are directly tied to other personnel policies, such as policies concerning the use of the IRR and the extent of cross training. Policies to fully utilize the IRR would increase the pool of available servicemembers and would thus decrease the length of time each member would need to be deployed, based on a static requirement. Policies that encourage the use of cross-training for lesser-utilized units could also increase the pool of available servicemembers and decrease the length of rotations. Until DOD addresses its personnel policies within the context of an overall strategic framework, it will not have clear visibility over the forces that are available to meet future requirements. In addition, it will be unable to provide reserve component members with clear expectations of their military obligations and the increased predictability that DOD has recognized is a key factor in retaining reserve component members who are seeking to successfully balance their military commitments with family and civilian employment obligations. In our previously published reports, we made several recommendations aimed at increasing the long-term availability of reserve component forces. In particular, we recommended that DOD develop a strategic framework that sets human capital goals concerning the availability of its reserve force to meet the longer-term requirements of the Global War on Terrorism, and we recommended that DOD identify policies that should be linked within the context of the strategic framework. DOD generally agreed with our recommendations concerning long-term availability of reserve component forces. For answers to questions about this statement, please contact Derek B. Stewart at (202) 512-5140 or [email protected] or Brenda S. Farrell at (202) 512-3604 or [email protected]. Individuals making key contributions to this statement included Michael J. Ferren, Kenneth E. Patton, and Irene A. Robertson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Department of Defense (DOD) has six reserve components: the Army Reserve, the Army National Guard, the Air Force Reserve, the Air National Guard, the Naval Reserve, and the Marine Corps Reserve. DOD's use of Reserve and National Guard forces increased dramatically following the events of September 11, 2001, and on January 19, 2005, more than 192,000 National Guard and Reserve component members were mobilized. About 85 percent of these personnel were members of the Army National Guard or the Army Reserve. 
Furthermore, the availability of reserve component forces will continue to play an important role in the success of DOD's future missions, and DOD has projected that over the next 3 to 5 years, it will continuously have more than 100,000 reserve component members mobilized. Since September 2001, GAO has issued a number of reports that have dealt with issues related to the increased use of Reserve and National Guard forces. For this hearing, GAO was asked to provide the results of its work on the extent to which DOD has the strategic framework and policies necessary to maximize reserve component force availability for a long-term Global War on Terrorism. DOD does not have a strategic framework with human capital goals concerning the availability of its reserve component forces. The manner in which DOD implements its mobilization authorities affects the number of reserve component members available. The partial mobilization authority limits involuntary mobilizations to not more than 1 million reserve component members at any one time, for not more than 24 consecutive months, during a time of national emergency. Under DOD's current implementation of the authority, members can be involuntarily mobilized more than once, but involuntary mobilizations are limited to a cumulative total of 24 months. Given this implementation, DOD could eventually run out of forces. During GAO's 2004 review, DOD was facing shortages of some reserve component personnel, and officials considered changing their implementation of the partial mobilization authority to expand the pool of available personnel. Under the proposed implementation, DOD could have mobilized personnel for less than 24 consecutive months, sent them home for a period, and remobilized them, repeating this cycle indefinitely and providing an essentially unlimited flow of forces. After GAO's review was complete, DOD said it would retain its current implementation that limits mobilizations to a cumulative total of 24 months. However, DOD did not clarify how it planned to meet its longer-term requirements for the Global War on Terrorism as additional forces reach the 24-month mobilization point. By June 2004, 30,000 reserve component members had already been mobilized for 24 months. DOD's policies also affect the availability of reserve component members. Many of the policies that affect reserve component availability were focused on the services' short-term requirements or the needs of individual service members rather than on long-term requirements and predictability. For example, DOD implemented stop-loss policies, which are short-term measures that increase force availability by retaining active or reserve component members on active duty beyond the end of their obligated service. Because DOD's various policies were not developed within the context of an overall strategic framework, they underwent numerous changes as DOD strove to meet current requirements, and they did not work together to meet the department's long-term Global War on Terrorism requirements. These policy changes created uncertainties for reserve component members concerning the likelihood of their mobilization, the length of service commitments and overseas rotations, and the types of missions they will have to perform. The uncertainties may affect future retention and recruiting efforts, and indications show that some parts of the force may already be stressed. |
According to USPS data, in fiscal year 2013, USPS delivered mail to a network of more than 150 million delivery points, including 133 million delivery points served by more than 230,000 career carriers who comprised almost half of the career workforce. In addition, USPS employed more than 75,000 non-career carriers who were mostly part-time employees, such as substitute carriers on rural routes. Mail delivery is USPS’s largest cost area, comprising 41 percent of total costs in fiscal year 2013. The extent to which USPS uses each mode of delivery and the mode’s associated costs play a substantial role in USPS’s overall financial condition. USPS uses three basic modes for residential and business mail delivery: Door delivery includes residential delivery to mail slots in the door or to mailboxes attached to houses, as well as business delivery to mail slots in the door, mailboxes attached to the business near the door, or locations within office buildings. Door delivery is the most costly because the carrier must walk from door to door, as is often the case on foot routes or park-and-loop routes. Curbline delivery (also referred to as curbside delivery) includes delivery to curbline mailboxes that are commonly used on routes serving residential customers, such as those living in rural and suburban areas. Curbline mailboxes are typically unlocked mail receptacles on a post. USPS regulations require curbline mailboxes to be located at the curb where they can be efficiently, safely, and conveniently served by the carrier from the carrier’s vehicle, and so that customers have reasonable and safe access. Curbline delivery is less costly than door delivery, as it takes less time for the carrier to move between curbline mailboxes, particularly when the carrier can load mail into the mailbox directly from the delivery vehicle. Centralized delivery is provided to centrally-located mail receptacles, such as apartment house mailboxes and cluster box units. Both wall-mounted units and cluster box units can be installed to serve both residential and business delivery points. Cluster boxes are generally pedestal-mounted units located outdoors with individually locked mail receptacles for each delivery point. Cluster boxes and apartment house mailboxes have become more secure over time, as USPS developed and implemented regulations for higher security standards for manufacturers. The newer Cluster Box Units (CBUs) also include locked receptacles for parcels and outgoing mail, unlike older Neighborhood Delivery and Cluster Box Units (NDCBUs). This mode of delivery is also less costly than door delivery, as it takes less time to service a cluster box than to walk from door to door, particularly when the carrier can drive between cluster boxes. Figure 1 shows the types of mail receptacles commonly used for each major mode of delivery. According to USPS data, in fiscal year 2013 about 41 percent of existing delivery points received curbline delivery, about 30 percent received centralized delivery, and about 28 percent had other modes, which primarily consist of door delivery. These percentages have changed little over the past 5 years (see fig. 2). From fiscal years 2008 to 2013, the total number of door delivery points declined by about 308,000—including about 287,000 residential door delivery points and about 21,000 business door delivery points—leaving about 32.2 million residential door delivery points and about 5.6 million business door delivery points (see fig. 3).
According to a USPS official, these changes mostly reflect redevelopment, such as replacement of older homes that had door delivery with new apartment buildings with centralized delivery, and new business developments such as office parks and strip malls. While the number of door delivery points declined by 1.2 percent from fiscal years 2008 to 2013, the number of curbline and centralized delivery points—the primary modes of delivery—increased by 0.1 percent and 1.1 percent respectively. From fiscal years 2008 to 2013, the number of centralized delivery points increased by 2.8 million, while the number of curbline delivery points increased by 1.9 million. See appendix II for details on the number of delivery points for each mode in fiscal years 2008 through 2013. USPS is required to provide prompt, reliable, and efficient services to patrons in all areas. USPS has the flexibility to revise its regulations to convert delivery points from more costly to less costly modes of delivery, and postal statutory provisions provide that USPS is required to fulfill its mission by operating an efficient delivery network. USPS is specifically required to plan, develop, promote, and provide adequate and efficient postal services at fair and reasonable rates and fees. USPS estimates of delivery mode costs and potential savings from converting to less costly modes have limitations because they rely on cost estimates and data from a 1994 USPS study. USPS increased these 1994 cost estimates for each mode of delivery by 55 percent, based on the total percent change in the Consumer Price Index for All Urban Consumers (CPI-U) from fiscal year 1994 to 2012, which may not have been the same as changes in USPS delivery costs. USPS estimates updated through fiscal year 2012 based on the 1994 data show door delivery costs greatly exceed costs for other modes of delivery. USPS officials stated that although many aspects of postal operations have changed over the past 20 years, the manner in which a carrier delivers mail on the street has changed little. USPS’s estimated costs of door delivery were about 160 percent of estimated curbline delivery costs, and estimated door delivery costs were more than double those of centralized delivery. Based on the differences in delivery mode costs, USPS estimates that it could realize large savings from large-scale mandatory conversions of both residential and business delivery points from costly door delivery. We determined that these data, while the only data available, have limitations for estimating delivery costs and potential savings. However, these estimates may not be the best source to inform decisions about conversion approaches, as estimates based on updated data may yield differing results. USPS estimates of delivery mode costs and potential savings from converting to less costly modes have limitations because they rely on cost estimates and data from a 1994 USPS study. The study collected data on the time postal employees used to prepare and deliver mail for each mode of delivery. These data were then combined with postal wage and benefit cost data, as well as other data (e.g., delivery vehicle costs), to estimate the costs for each mode of delivery. In lieu of current data, USPS increased these 1994 cost estimates for each mode of delivery by 55 percent, based on the total percent change in the Consumer Price Index for All Urban Consumers (CPI-U) from fiscal year 1994 to 2012. (See app. III for further details on USPS’s methodology for conducting this study.)
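The update method just described amounts to a single uniform scaling. The sketch below illustrates it; the 1994 base costs shown are hypothetical values backed out from the adjusted fiscal year 2012 figures reported later in this section, since the report does not present the 1994 figures themselves.

    # USPS's update method: scale each 1994 per-delivery-point cost estimate by
    # the cumulative CPI-U change of 55 percent from fiscal year 1994 to 2012.
    CPI_U_GROWTH = 0.55

    def adjust_1994_cost(cost_1994: float) -> float:
        return cost_1994 * (1 + CPI_U_GROWTH)

    # Hypothetical 1994 base costs (illustrative only). Applying one factor to
    # every mode assumes all modes' costs grew at the same rate -- the very
    # assumption questioned below.
    for mode, cost_1994 in {"door": 245, "curbline": 155, "centralized": 110}.items():
        print(f"{mode}: about ${adjust_1994_cost(cost_1994):.0f} per delivery point in FY2012")

Any error in the uniform 55 percent factor, or any mode-specific divergence from it, flows directly into the adjusted figures.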
Because CPI-U is a measure of inflation for the U.S. economy, changes in CPI-U over this period of time may not have been the same as changes in USPS delivery costs, which are affected by factors such as postal wage rates, postal benefit costs, and gasoline prices. In fact, according to USPS officials, key delivery-related costs increased more than inflation from 1994 to 2012. These cost increases may have been offset by gains in postal productivity, such as automated mail sorting by delivery sequence, which reduces the amount of carrier time needed for manual sorting, so it is unclear whether USPS’s estimates are generally understated or overstated. Another potential weakness in the estimates is USPS’s application of the same 55 percent increase in the 1994 data for the cost of each delivery mode—a method that assumes that the cost for each delivery mode increased at the same rate from fiscal years 1994 to 2012. Available evidence suggests this assumption may not be correct. According to USPS, since the original study was conducted, it has adopted work rules that disproportionately increase the cost of door delivery. For example, to comply with current collective bargaining agreement work rules, city postal carriers must manually collate some advertising mail before loading it into satchels to carry on foot routes and park-and-loop routes, which are largely door delivery, according to USPS. This work rule does not apply to motorized routes, such as curbline routes where carriers load mailboxes from the delivery vehicle. In addition, to the extent that some modes of delivery are more labor intensive than others, the actual increase in USPS wage and benefit costs from fiscal years 1994 to 2012 may have affected the costs of some delivery modes more than others. The work rule specifies that city letter carriers on foot routes and park-and-loop routes will not be required to carry more than three bundles of mail, pursuant to USPS’s collective bargaining agreement with the National Association of Letter Carriers, which represents city carriers. Government auditing standards state that managers are responsible for providing reliable, useful, and timely information for transparency and accountability of programs and their operations. Legislators, oversight bodies, and the public need to know whether or not government services are provided effectively, efficiently, and economically. This standard is particularly relevant because pending postal reform legislation in the House and Senate would mandate the conversion of some delivery points from door delivery to centralized or curbline delivery. Furthermore, the administration’s budget for fiscal year 2015 also proposes “allowing the Postal Service to begin shifting to centralized and curbside delivery where appropriate.” Internal controls for federal agencies state that financial information is needed to support operating decisions, monitor performance, and allocate resources. Without such information on costs of modes and on potential savings through delivery conversions, USPS and lawmakers may not have an accurate understanding of the impact of delivery mode changes on which to base their decisions. USPS officials told us that its estimates of delivery mode costs and potential savings “have validity” despite the use of the inflation adjustment in lieu of updated data.
USPS officials stated that although many aspects of postal operations have changed over the past 20 years, the manner in which a carrier delivers mail on the street has changed little. However, USPS officials also acknowledge the weakness of USPS’s delivery mode data. USPS estimates the cost of a new delivery study using ongoing operational data at $75,000 to $100,000, and the cost of a more extensive study collecting new data at $250,000 to $750,000. Mail delivery represents the largest cost area relative to USPS’s annual expenses of approximately $72 billion, and updating the 1994 study would be relatively low cost compared to those expenses. Without a current delivery cost study, USPS may be less able to determine accurate cost savings from various delivery mode conversion scenarios. USPS estimates updated through fiscal year 2012 based on the 1994 data show door delivery costs greatly exceed costs for other modes of delivery. Estimated costs of door delivery were about 160 percent of estimated curbline delivery costs, and estimated door delivery costs were more than double those of centralized delivery. Specifically, USPS estimated that its delivery costs in fiscal year 2012 ranged from about $380 annually for the average door delivery point to about $240 for curbline delivery and about $170 for centralized delivery such as cluster boxes and apartment house mailboxes (see fig. 4). Based on the differences in delivery mode costs, USPS provided us with estimates showing that it could realize large savings from large-scale mandatory conversions of both residential and business delivery points from costly door delivery even as it continues to add new delivery points every year. For example, there were about 770,000 new delivery points added in fiscal year 2013, an increase of about half of 1 percent from fiscal year 2012 levels. Specifically, USPS estimates that potential ongoing annual savings exceeding $2 billion could be achieved by mandatory conversion of 12.2 million door delivery points over the next decade to a mix of centralized and curbline boxes (see fig. 5). This level of conversion is about one-third of the 38 million door delivery points and would still provide future opportunities to realize savings from additional conversions. About 85 percent of these conversions would be residential and 15 percent would be business delivery points. Based on USPS’s schedule for these conversions, the potential savings would be realized in the first full fiscal year after full implementation—fiscal year 2024—and every following fiscal year. USPS also estimated proportionately smaller savings from less extensive mandatory conversions (see fig. 5). We determined that these data, while the only data available, have limitations for estimating delivery costs and potential savings. However, these estimates may not be the best source to inform decisions about conversion approaches, as estimates based on updated data may yield differing results. According to USPS officials, its estimates are based on what they could reasonably accomplish with a deliberate pace of mandatory conversions that would be feasible for postal operations and customers. USPS officials told us that this pace—converting up to 1.5 million delivery points annually—would enable them to realign delivery routes to achieve ongoing savings from modes of delivery that are less labor intensive. Based on this pace, USPS officials said they could achieve sufficient savings to recoup costs to buy and install cluster boxes within the same year.
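The rough arithmetic behind these savings figures can be reproduced from the per-point cost estimates above. In the sketch below, the split between centralized and curbline conversions is our assumption (the report does not specify the mix), so the result is only an order-of-magnitude check against the more-than-$2 billion estimate.

    # Annual savings from converting door delivery points, using USPS's estimated
    # fiscal year 2012 costs per delivery point (see above).
    COST = {"door": 380, "curbline": 240, "centralized": 170}

    def annual_savings(points: float, centralized_share: float) -> float:
        """Savings if centralized_share of converted points move to centralized
        delivery and the rest to curbline; the share is an assumption."""
        per_point = (centralized_share * (COST["door"] - COST["centralized"])
                     + (1 - centralized_share) * (COST["door"] - COST["curbline"]))
        return points * per_point

    # 12.2 million conversions at an assumed 70/30 centralized/curbline mix:
    print(f"${annual_savings(12.2e6, 0.70) / 1e9:.1f} billion per year")  # about $2.3 billion

At any centralized/curbline mix, per-point savings fall between $140 and $210 per year, so 12.2 million conversions imply roughly $1.7 billion to $2.6 billion annually, consistent with the estimate USPS provided.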
To understand how USPS estimates it could realize these savings, it is important to understand how the changes would reduce USPS’s workload and how this would translate into lower USPS costs. Conversion of delivery points from door to curbline and centralized delivery would reduce the time required to organize and deliver the mail. Motorized routes with centralized and curbline delivery require less of a carrier’s time than walking from door to door. Reducing carrier workload through mandatory conversions could enable USPS to reorganize delivery into smaller numbers of routes, with each route including a larger number of delivery points. The resulting decrease in the number of routes could help reduce the number of carriers needed to fulfill delivery needs. USPS has historically reduced its workforce through attrition and has no-layoff provisions in its collective bargaining agreements with its four major postal unions. In this regard, large numbers of career carriers are expected to retire in the coming years. In addition to realigning delivery routes, our prior work has found that USPS can use established work methods for accomplishing delivery to a given geographic area with fewer carriers, reducing overtime as well as the number of hours worked by carriers with flexible schedules. For example, USPS often divides up an unstaffed route among multiple carriers who each cover a part of this route in addition to their regular route—a work method that is used to augment the work of some carriers with less than 8 hours of workload on their route. See GAO-09-696. In this regard, most door delivery is made to city delivery routes served by city carriers. The average hourly wage and benefit costs of all city carriers, including career and non-career employees, exceed $41 per hour, according to USPS data. According to USPS officials, conversions from door delivery would also decrease the time and costs associated with organizing mail for delivery. For example, the organization of mail for delivery would take less time as it would no longer involve strapping bundles of mail to help keep it organized for carriers. The officials also said that, in cases in which delivery mode conversion enabled a foot route to convert to a motorized route, mail would no longer be prepositioned along those routes. Additionally, USPS could avoid manual handling of some advertising mail for routes converted to motorized delivery. In April 2012, USPS updated its policy regarding assigning delivery modes to new addresses. USPS revised its Postal Operations Manual—a regulation of the USPS pursuant to the Code of Federal Regulations—to specify that USPS determines the mode of delivery for new addresses. According to USPS officials, USPS used to provide for customer preferences as a factor in deciding on the mode of delivery. The revised Postal Operations Manual states that new business addresses must receive centralized delivery unless USPS approves an exception, and the modes approved for new residential delivery are curbline delivery, centralized delivery, and sidewalk delivery, unless an exception is granted or the new address area is a continuation of an existing block. The manual continues to provide that customers can request changes to their mode of delivery on a hardship basis, which USPS considers “where service by existing methods would impose an extreme physical hardship on an individual customer.” Finally, the manual does not authorize USPS to implement delivery mode conversions on a mandatory basis.
In 2013, USPS implemented voluntary conversion for businesses as part of its Delivering Results, Innovation, Value and Efficiency (DRIVE) initiative. In this effort, USPS provides conversion goals to its field officials, who then identify specific businesses as candidates for conversion from door delivery to centralized delivery. USPS field officials told us that strip malls and high-rise office buildings receiving door delivery are good candidates for conversion because these delivery routes are labor intensive and the facilities could have suitable space to install a centralized mail receptacle. For example, in Chicago, USPS converted a high-rise office building, with mail delivered individually to all 50 tenant suites and twice-a-day mail pick-up, to centralized delivery and pickup. However, the DRIVE program has not converted as many businesses from door delivery as originally expected, and large savings are not likely due to the low number of conversions that have occurred. According to a USPS official, USPS set an overall goal for the total number of conversions for each fiscal year starting with fiscal year 2013. For the first year, USPS set a nationwide goal of voluntary conversion of 279,718 of the approximately 5.6 million business door delivery points to centralized delivery. USPS achieved 43,333 such conversions, about 15 percent of its goal and about 0.8 percent of the approximately 5.6 million business door delivery points. USPS officials explained that based on the fiscal year 2013 results, they set a lower goal of 34,652 voluntary business conversions for fiscal year 2014. They reported 11,488 conversions in the first quarter, about 33 percent of their goal. This initiative is solely focused on business conversions, and USPS has not set any goals for converting residential door delivery points to different modes of delivery. USPS did achieve some voluntary residential conversions in fiscal year 2013: USPS reported that 36,302 of about 32.2 million residential door delivery points—or about 0.1 percent—were converted to centralized delivery. As stated above, USPS’s potential savings estimates show that achieving large savings would require large-scale door delivery conversions. However, according to USPS officials, USPS has been reluctant to mandate conversions. Under the voluntary conversion process, customers on a route may choose to maintain door delivery, or a high number of customers may request and receive hardship exemptions, for elderly persons or those with special needs, that would allow them to keep door delivery service, thus reducing the number of conversions and lowering potential savings. Furthermore, field officials we spoke with said that voluntary conversions are time consuming and labor intensive due to the amount of direct outreach and follow-up required, and a short-term cost increase may result from undertaking these efforts to generate long-term savings. Large-scale mandatory conversions have the potential to achieve large savings, but USPS faces impediments, such as customer inconvenience and safety and security concerns. USPS officials and several mailing industry stakeholders we spoke with told us that many postal customers are resistant to service changes, especially changes that might inconvenience them.
USPS officials told us that some of these concerns could be addressed through hardship exceptions to continue door delivery. Stakeholders also said that service changes would particularly affect city letter carriers by reducing the total number of carrier work hours and associated routes. However, USPS officials stated that some customer concerns could diminish as customers become accustomed to the new service. Among the impediments to increased use of less costly modes were concerns raised about personal safety and mail security. Several mailing industry stakeholders we met with identified the placement of CBUs in convenient, well-lit, and secure areas as a means to ensure customer safety when accessing mail, especially in higher-crime areas. USPS officials said they take placement of centralized delivery locations into consideration, as both carriers and customers are affected when a CBU is placed in an improper or unsafe location. For example, in Chicago, USPS officials said they converted existing door delivery in some areas of the city to centralized delivery to address the personal safety concerns of residents and mail carriers generated from increased crime rates in those areas of the city. Some stakeholders also noted mail security issues, such as the potential for increases in theft of mail from people picking up their mail or break-ins to centralized mail receptacles. Others indicated that there are opportunities to increase mail security by converting the mode of delivery from unlocked mailboxes, like those affixed near doors or many curbline mail boxes, to CBUs with locked receptacles for mail and parcels. USPS officials told us that the new model CBUs are more secure than the old model NDCBUs, which are considered legacy equipment. The materials used to construct CBUs, combined with design changes, have improved the security of mail delivered to them. Although USPS engineering officials told us they test the security of each type of cluster box and provide some guidance on door and curbline mail receptacles, no data are available on the relative security of each receptacle type once they are put in place and in use. Data are collected on mail theft and convictions for mail theft and other mail security violations, but these data do not generally specify the type of mail receptacle involved in a mail theft or related incident. We met with officials of the U.S. Postal Inspection Service, which collects complaint data on lost or stolen mail, and they told us that while there are fields on the online form for mail receptacle type, these fields usually are not completed by the mail customer reporting the theft. Customers can also call a USPS help line or go to a post office in person to report mail loss or theft, and often in those cases the mail receptacle data are not collected. U.S. Postal Inspection Service officials said that for their investigative purposes, the complaint does not need to include the receptacle type in order to pursue cases of lost or stolen mail. See appendix IV for available data on convictions for mail theft and other mail security violations. Finally, according to USPS officials, there are places, such as densely populated urban areas, where it is difficult to find suitable locations for CBUs. This could make converting existing door delivery to centralized or curbline boxes that meet both customer needs and USPS requirements difficult. Field officials we spoke with in Washington, D.C., and Los Angeles, California, noted this concern.
For example, a field official in Washington, D.C., noted that with extensive street parking and limited common space, it can be very difficult to find a suitable location for a CBU. A USPS official noted that if public space was not available for a CBU on a delivery route, gaining access rights to install one requires working with local governments and can be cumbersome. He said that easements or other property access rights vary across localities and regions, and some neighborhoods would not have sufficient or convenient space available for installation of CBUs. However, as shown in fig. 1, CBUs and other centralized delivery methods have a range of types and sizes that could be used to accommodate the available footprint and placement options. USPS cost estimates for each mode of delivery and potential savings from converting door delivery points to other delivery modes are based on a 20-year-old study and are of questionable accuracy. Although USPS extrapolated fiscal year 1994 delivery mode costs to fiscal year 2012 by applying inflationary increases to the cost of each delivery mode, it is unclear whether this extrapolation reflects the actual change in postal costs, such as changes in carrier wages and benefits, which increased faster than inflation over this period. Because USPS productivity also increased, the accuracy of USPS’s fiscal year 2012 cost estimates is unknown. USPS officials acknowledge the weakness of USPS’s delivery mode data. We believe that accurate data are needed for Congress, USPS, and other stakeholders to understand the cost-saving potential of delivery mode conversions as compared with other savings options to improve USPS’s financial viability. This is particularly relevant because pending postal reform legislation would mandate the conversion of some delivery points from costly door delivery to other modes. USPS’s estimates of the cost of a new delivery study, at $75,000 to $100,000 using ongoing operational data or $250,000 to $750,000 for a more extensive study collecting new data, do not seem cost prohibitive given USPS’s annual expenses of about $72 billion. Thus, we conclude that the benefits of obtaining current, reliable data on delivery mode costs would make the cost of doing so a worthwhile investment. To improve information needed for USPS and congressional decision making as well as transparency for all stakeholders, we recommend that the Postmaster General and USPS’s executive leaders collect and analyze updated data on delivery mode costs and the potential savings from converting delivery points to less costly modes of delivery and establish a time frame for publicly reporting the results. We provided a draft of this report to USPS for review and comment. USPS provided comments, which are reprinted in appendix V. USPS concurred with our recommendation and agreed that steps should be taken to improve the accuracy and reliability of delivery mode cost estimates and potential savings from converting delivery points to less costly modes of delivery. Further, USPS plans to initiate efforts to determine how to most efficiently capture cost data, create a project plan, and determine a timeline to produce results. If you or your staff have any questions about this report, please contact me at 202-512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix VI.
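To make the extrapolation concern above concrete, the following is a minimal sketch with hypothetical values throughout: it compounds a 1994 per-point cost forward at a CPI-style rate and at a faster, wage-driven rate. Neither the base cost nor the growth rates are USPS figures; the point is only that a CPI-only update understates costs whenever the underlying cost drivers grow faster than inflation.

    # Hypothetical illustration of the CPI-only extrapolation at issue.
    # All values are invented for illustration; none are USPS figures.
    YEARS = 2012 - 1994  # 18 years of adjustment

    def extrapolate(base_cost, annual_rate, years):
        # Compound a base-year cost forward at a constant annual rate.
        return base_cost * (1 + annual_rate) ** years

    base_1994 = 230.0                                    # hypothetical 1994 cost per point
    cpi_only = extrapolate(base_1994, 0.025, YEARS)      # CPI-U-style adjustment
    wage_driven = extrapolate(base_1994, 0.040, YEARS)   # wages rising faster than CPI
    print(f"CPI-only 2012 estimate:    ${cpi_only:,.0f}")     # about $359
    print(f"Wage-driven 2012 estimate: ${wage_driven:,.0f}")  # about $466

The roughly $100-per-point gap between the two paths is the kind of error a CPI-only update can conceal, which is why updated cost data matter for the conversion decisions discussed above.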
To discuss the costs and potential savings associated with converting to less costly delivery modes, we obtained available U.S. Postal Service (USPS) data on the cost of each delivery mode from fiscal years 1994 through 2012, the most recent year for which data are available. We obtained USPS documentation on the assumptions and methodology used in developing these cost data. This documentation included a USPS report of the 1994 USPS study that measured the costs of each mode of delivery and the USPS methodology for increasing these costs by inflation to estimate delivery mode costs from fiscal years 1995 through 2012. We requested that USPS officials generate estimates, similar to those they created for Congress and USPS internal use, of the potential savings from increasing its reliance on less costly modes of delivery. We reviewed documentation of the assumptions, methodology, calculations, and underlying data for these estimates. We had extensive discussions with USPS officials regarding delivery mode cost and savings data and obtained detailed written responses to follow-up questions regarding their reliability. We identified concerns with the quality of available USPS delivery mode cost and savings data. We interviewed USPS officials about the methodology used to develop the estimates, determined that these were the only data available, and found the data to have limitations for estimating delivery costs and potential savings, which we discuss in the report. Further, we reviewed whether there were any opportunities to improve the quality of these data. USPS officials provided detailed written responses that identified such opportunities, including the scope, methodology, and estimated costs of a new USPS study to measure delivery mode costs, and USPS’s views on the merits of such a study. To discuss USPS’s actions to convert delivery modes to less costly modes and any impediments to conversion, we reviewed pertinent USPS documentation, such as information on the Delivering Results, Innovation, Value and Efficiency (DRIVE) initiative, and interviewed USPS delivery operations officials and field officials on USPS’s efforts to promote voluntary conversion of some business addresses to less costly modes of delivery. We also obtained written responses from USPS officials and interviewed them on policies, regulations, and procedures that govern decisions when adding new delivery points and converting existing delivery points to a different mode of delivery. These interviews included USPS headquarters officials in the Washington, D.C., area and USPS field officials in Washington, D.C.; Chicago, IL; Seattle, WA; Los Angeles, CA; and Dallas and Coppell, TX. We selected these field locations to reflect different population densities and climates, proximity to GAO headquarters and field offices, and geographic dispersion across five of the seven USPS areas. To assess USPS’s use of less costly delivery modes, we obtained USPS data on conversions of delivery points to less costly modes of delivery for fiscal year 2013 and the first quarter of fiscal year 2014, the only periods for which national data were available. We also obtained and reviewed USPS data on the number of delivery points for each mode of delivery for fiscal years 2008 through 2013. We obtained written responses regarding the reliability of these data, analyzed their consistency with other USPS data, and determined that they were sufficiently reliable for the purposes of this report.
To identify any impediments to delivery mode conversions, we obtained written USPS responses, including some made in recent public proceedings, and interviewed USPS headquarters and field officials. In addition, we interviewed postal stakeholders, including postal unions representing city and rural letter carriers. We also interviewed 11 organizations representing different groups that use the mail in their business: the Alliance of Nonprofit Mailers, American Catalog Mailers Association, Association for Magazine Media, Association of Postal Commerce, Direct Marketing Association, Greeting Card Association, Major Mailers Association, National Newspaper Association, Newspaper Association of America, Parcel Shippers Association, and Saturation Mailers Coalition. We also met with U.S. Postal Inspection Service officials to obtain information on mail security issues and discussed mail and identity theft generally, including complaints made to the Inspection Service in these areas. We obtained data on convictions for offenses related to mail theft maintained by the U.S. Postal Inspection Service and the USPS Office of Inspector General, obtained written and oral responses regarding data reliability, and determined the data were sufficiently reliable for the purposes of this report. Cluster boxes include Cluster Box Units and Neighborhood Delivery and Cluster Box Units. Other centralized delivery includes delivery to centralized locations such as apartment house mailboxes. Door delivery/other includes door delivery to mail slots, mailboxes near the door, and locations within office buildings, as well as other locations, such as along sidewalks. Most door delivery/other points receive door delivery, according to USPS officials. All other delivery points include various types of Post Office Boxes, delivery points where a university or other entity then provides final delivery to the recipient, and delivery points where the recipient picks up mail in bulk quantities such as remittance mail. USPS estimates of delivery mode costs rely on a 1994 study of delivery mode costs, which is the most recent study that USPS has conducted in this area. USPS continues to rely on this study to estimate the potential for savings through the option of converting addresses to less costly modes of delivery. According to a USPS report on the 1994 study, USPS employees and retired postal managers served as data collectors who, through direct observation, recorded the actual time to deliver mail for each mode of delivery from July 5, 1994, to September 9, 1994, on 735 randomly selected city delivery routes. USPS also estimated the time required for carriers to organize their mail for each mode of delivery.
The cost of delivery was computed by factoring in the compensation costs associated with delivery times and delivery vehicle costs, as well as the costs to purchase, install, and maintain cluster boxes. USPS subsequently applied an inflation adjustment to the 1994 data to increase the cost for each delivery mode according to the annual change in the Consumer Price Index for All Urban Consumers (CPI-U), which produced cost estimates for fiscal years 1995 through 2012 (see fig. 5 in report). This inflation adjustment increased all delivery mode costs by the same percentage. According to the USPS Office of Inspector General, where federal prosecution (e.g., under 18 U.S.C. § 1705) is declined and state or local prosecution is sought, charges can include variations of larceny/theft, embezzlement, and misuse of public position/public trust, based on the jurisdiction. In addition to the individuals named above, Amelia Shachoy, Assistant Director; Derrick Collins; Geoffrey Hamilton; Kenneth John; Joshua Ormond; Amy Rosewarne; Kelly Rubin; and Betsey Ward-Jenks made key contributions to this report. | USPS is expected to provide prompt, reliable and efficient nationwide service while remaining self-supporting, but it is facing serious fiscal challenges with insufficient revenues to cover its expenses. Mail delivery is USPS’s largest cost area, totaling about $30 billion annually. Although USPS lacks authority to make certain changes that could reduce costs, it does have the authority to convert from more expensive to less expensive delivery modes. GAO was asked to examine potential cost savings and issues related to delivery conversion. This report discusses: (1) the estimated costs of each delivery mode and potential savings associated with converting to less costly modes and (2) USPS actions to convert to less costly delivery modes and any impediments to conversions. GAO obtained and analyzed USPS estimates from fiscal years 1994 through 2012 on delivery mode costs as well as potential savings from conversions to less costly modes and determined that the estimates have limitations, which are discussed in the report. GAO also interviewed officials from USPS and mailing industry stakeholders. The U.S. Postal Service (USPS) estimates of delivery mode costs and potential savings from converting to less costly modes show that door-to-door delivery is much more costly than delivery to a curbside or centralized mailbox and that USPS could achieve large savings by mandating large-scale conversions from door delivery to other modes. For fiscal year 2012, USPS estimated average annual costs of about $380 per delivery point for door delivery, compared with about $240 for delivery to the curb and about $170 for delivery to a central location. USPS also estimated potential ongoing savings of over $2 billion annually from mandating conversion of about one-third of door deliveries to other modes. However, USPS’s estimates of these specific costs and savings have limitations, in part because they rely on data from a 1994 USPS study. In lieu of current data, USPS adjusted the 1994 data according to increases in the Consumer Price Index—an adjustment that may not have tracked actual changes in USPS delivery costs, which are affected by factors such as increases in postal wage rates, postal benefit costs, and gasoline prices. USPS officials estimate a new study could be conducted to replace the 1994 study for a total of about $100,000 to $750,000, depending on the extent of the study.
Without current information on costs of delivery modes and on potential savings through delivery conversions, USPS and lawmakers may not have an accurate understanding of the impact of delivery mode changes on which to base their decisions. USPS has taken some actions to shift door deliveries to less costly delivery modes on a voluntary basis, but it faces stakeholder resistance and other impediments to mandatory conversions. USPS revised its regulations in April 2012 specifying that USPS determines the mode of delivery for new addresses and that new addresses must receive less costly modes, such as centralized delivery, unless USPS approves an exception. Additionally, USPS implemented voluntary business conversions in fiscal year 2013. USPS reported that 43,333 out of about 5.6 million business door delivery points—or about 0.8 percent—were voluntarily converted in fiscal year 2013. USPS has set a modest goal of about 35,000 additional voluntary business conversions for fiscal year 2014. USPS also converted 36,302 out of about 32.2 million residential door delivery points—or about 0.1 percent—to centralized delivery on a voluntary basis in fiscal year 2013. Through the voluntary conversion process, customers on a route may choose to maintain door delivery, reducing the number of conversions and lowering potential savings. Large-scale mandatory conversions have the potential to achieve large savings. However, USPS is reluctant to mandate conversions. There is some evidence that USPS would face resistance from customers, USPS employees, and mailing industry stakeholders if it were to implement mandatory conversion of delivery to less costly modes. Stakeholder concerns include personal safety, mail security, and difficulty finding suitable urban locations for boxes to deliver mail to a curbside or centralized location. GAO recommends that USPS collect updated data on delivery mode costs and the potential savings of converting to less costly modes of delivery and establish a time frame for publicly reporting the results. USPS agreed with the recommendation. |
FAA currently employs almost 20,000 employees to operate and manage the nation’s air traffic control system. Most of these employees (about 15,250) are air traffic control specialists, or controllers, who are responsible for controlling the takeoff, landing, and ground movement of planes and are assigned to field facilities. (NATCA represents these controllers.) In addition, about 4,500 managers, supervisors, and staff specialists within FAA’s Air Traffic Services work to oversee and administer the air traffic control program. (About 3,900 of these 4,500 managers, supervisors, and specialists work in the various field facilities around the country, and the other 600 provide management, direction, and oversight, as well as overall support, of the air traffic control system at headquarters and regional locations.) For this report, we focused our analysis on these two groups in FAA’s occupational job series 2152, which we refer to as controllers and managers, respectively. In 1994, Congress directed the Secretary of Transportation to undertake a study of management, regulatory, and legislative reforms that would enable FAA to provide better air traffic control services. FAA’s resulting 1995 report to Congress stated that existing federal personnel rules and procedures limited FAA’s ability to attract and retain qualified staff at key facilities or to reassign employees in response to changing needs. The report also stated that exemption from federal personnel regulations would provide FAA with the flexibility to hire, reward, and relocate employees to better manage the air traffic control system. On November 15, 1995, Congress directed the FAA Administrator to develop and implement a new personnel management system to provide greater flexibility in the hiring, training, compensation, and location of personnel. The 1996 Department of Transportation Appropriations Act exempted FAA from most provisions of title 5 of the United States Code and other federal personnel laws. On April 1, 1996, FAA introduced a set of new personnel policies and procedures that included, among other things, personnel reforms for locating its workforce more effectively. Controllers and managers may make PCS moves for promotions, downgrades, or lateral transfers. To be eligible for promotion within the controller or manager ranks or from controller to manager, individuals may be required to make a PCS move. For example, promotion for a controller may require making a PCS move to a higher-level facility (i.e., one with higher levels of operational complexity). Promotion for a manager may require gaining greater experience with more complex and diverse air traffic operations. This may involve a PCS move to a regional office or FAA headquarters for policy and management experience. To be eligible for promotion from controller to manager, an individual may have to move to a lower-level facility where supervisory positions are available, to a regional office, or to FAA headquarters. Downgrades and lateral transfers are generally made for personal reasons but may also benefit the government. Under title 5 rules, federal agencies may elect to pay the expenses of transporting immediate family members and household goods and personal effects to and from the assignment location for a PCS move when the move is in the interest of the federal government.
According to FAA Air Traffic Services and Human Resources officials, FAA historically interpreted title 5 rules as a requirement to fully reimburse all PCS moves, since FAA considered all such moves to be in the interest of the government. As part of its personnel reform, FAA delegated the authority to determine eligibility for and the amount of PCS benefits to each line of business and provided three PCS funding options: (1) full PCS reimbursement, (2) fixed relocation payments, and (3) unfunded moves. If a move is determined to be in the interest of the government, FAA will fully reimburse the individual for costs associated with the move. According to FAA, the average agencywide cost for fully reimbursed PCS moves in fiscal year 2001 was about $54,000 (based on a sample of 100 fully funded PCS moves in that fiscal year). Under its personnel reform, FAA may offer a fixed relocation payment if it determines that the agency will derive some benefit from a move, even though the move is not in the interest of the government. For example, Air Traffic Services may offer a fixed relocation payment as a recruitment tool, when necessary, to attract enough qualified candidates for a position. If a move is not in the interest of the government and FAA does not determine that it will derive some benefit from the move, there is no basis for offering PCS funding. However, as a result of FAA’s personnel reforms, employees may choose to make unfunded moves at their own expense for personal reasons, to gain experience needed for professional advancement, or for promotion. Before 1996, when FAA’s policy did not allow unfunded moves, many vacancies went unfilled for lack of PCS funds, according to FAA’s Personnel Reform Executive Committee Task Force Report. The intent of the change in policy was to (1) improve employee morale by allowing willing employees to relocate and (2) allow FAA to relocate more employees without increasing the PCS budget. In February 2000, FAA signed a memorandum of understanding with NATCA that allowed FAA to offer controllers unfunded PCS moves to higher-level facilities. These moves to higher-level facilities are considered promotions because controllers’ pay increases with the level of the facility. FAA’s policies on eligibility for PCS reimbursement, created as a result of FAA’s 1996 personnel reform and implemented for air traffic controllers in the agency’s February 2000 memorandum of understanding with NATCA, do not differentiate between air traffic controllers and managers. However, the amount of the fixed relocation payment that Air Traffic Services may offer controllers and managers for PCS moves does differ. The February 2000 memorandum of understanding established a fixed relocation payment of $27,000 for controllers as a result of negotiations between FAA management and NATCA. This amount is set for all fixed relocation payments provided to controllers. Conversely, the amounts of fixed relocation payments for air traffic control managers are determined on a case-by-case basis, up to a maximum of $25,000. The average PCS fixed relocation payment for managers’ moves between field offices during fiscal years 1999 through 2001 (based on FAA estimates) was about $19,500. Air traffic controllers were less likely than air traffic managers to receive funding for their moving expenses when moving between facilities.
According to Air Traffic Services data, controllers and managers made 1,466 and 173 PCS moves, respectively, between field facilities from fiscal year 1999 through fiscal year 2001; these moves comprise 78 percent of all 2,107 Air Traffic PCS moves. About half of those moves (864) were for promotions. As shown in figure 1, 84 percent of controllers’ PCS moves between field facilities for promotions (651 of 774) were unfunded during fiscal years 1999 through 2001, while 62 percent of managers’ PCS moves for promotions (56 of 90) were unfunded. Similarly, controllers were less likely than managers to receive funding for lateral moves. From fiscal year 1999 through fiscal year 2001, controllers and managers made 291 PCS moves for lateral assignment between field facilities. As shown in figure 2, 94 percent of controllers’ lateral moves (236 of 250) were unfunded, compared with 66 percent of managers’ lateral moves (27 of 41). Data were not available on the type of funding alternatives used for other PCS moves (from headquarters to the field, for example, and from regional offices to headquarters). However, data on whether any type of funding was provided for these other moves indicated that 91 percent of those by controllers were unfunded during fiscal years 1999 through 2001 (250 of 275), compared with 53 percent of those by managers (102 of 193). According to the February 2000 memorandum of understanding between FAA and NATCA, 65 percent of PCS funding is to be allocated to controllers and 35 percent to the rest of the air traffic staff. Thus, while they account for 77 percent of the combined workforce, controllers get a smaller proportion—65 percent—of air traffic PCS funding. FAA officials said that this allocation resulted in a higher percentage of managers receiving funding for PCS moves. Although managers were more likely than controllers to receive funding for PCS moves for promotion in the field, they were less likely to make PCS moves between field locations for promotions. From fiscal year 1999 through fiscal year 2001, about 2 percent of the total population of managers (4,490) made promotional moves between field facilities, compared with about 5 percent of the controller workforce (15,248). Lateral and downgrade moves between field facilities during the same period accounted for less than 3 percent of managers’ and controllers’ respective workforces. For other PCS moves (between headquarters, regional offices, and field facilities), managers (4 percent) were more likely to make moves than controllers (2 percent). Although FAA officials said that PCS costs have decreased and FAA’s ability to quickly fill vacant controller positions has improved since the new PCS policies took effect, they did not have the data to determine to what extent the annual decreases or the improvement in the agency’s ability to fill vacancies in field facilities are attributable to the new PCS policies implemented in 1998. For example, Air Traffic Services’ PCS costs decreased from $31.8 million in fiscal year 1997 to $17.5 million in fiscal year 1998 (see fig. 3). FAA has attributed these decreases to reductions in its budget rather than to the new PCS policies providing fixed relocation payments for PCS moves and allowing staff to pay for their own moves. However, officials noted that they lacked data to support this determination. FAA officials also said that the new PCS policies have improved their ability to fill controller vacancies in field facilities, but again, they lacked data to support their views.
Officials from FAA’s Office of Human Resources said they had agencywide plans to begin collecting information on the time needed to fill positions and to survey new recruits on, among other things, the reasons they applied for the position into which they were hired. This information should help FAA determine the impacts of its PCS policies. FAA also lacks data to respond to questions raised by the FAA Conference Managers Association about the potential impacts of FAA’s new PCS policies. In the Association’s view, the change from the determination that a promotional opportunity is in the best interest of the government (under title 5 rules) to a determination, based on general criteria set by each of the lines of business, that only some promotional opportunities are in the best interest of the government (under rules revised as a part of personnel reform) made the decision-making process too subjective. In March 2002, Association representatives expressed concern about the potential for unintended effects of the change in FAA’s PCS policy, including a reduction in the number of qualified applicants, which could weaken FAA’s leadership, and a reduction in the diversity of potential applicant pools, which could result in discrimination in filling positions. The Association also said that a disparate provision of PCS benefits due to funding concerns could have a negative impact on morale. According to the managers association, some qualified managers may be reluctant to bid on opportunities for promotion because of the cost of partially or fully funding their own PCS moves. (As was shown in fig. 1, almost two-thirds of these moves for managers are unfunded.) The Association was concerned that, because not all qualified potential applicants may apply for promotions, less qualified managers may bid on and be selected for promotion opportunities because they are willing to make the financial commitment to pay for some or all of the costs associated with a PCS move. The Association believes this outcome could weaken the quality of FAA’s leadership. Another Association concern is that selecting officials may be unable to determine whether the pool of candidates who bid on unfunded PCS or fixed-funded PCS positions is representative of FAA managers. Specifically, the Association has suggested that this pool of candidates may not be as diverse as the pool of candidates who would bid on a position with a fully reimbursed PCS move. As a result, the Association believes the new PCS policies may inadvertently lead to discrimination. Finally, Association officials expressed concern that FAA’s implementation of the variable PCS policy would be affected by fluctuations in FAA’s budget. In their view, the effect of using PCS funding to create an incentive for filling hard-to-staff positions (as is done under the new policies) rather than to fully reimburse all PCS moves (as was done under title 5 rules) was to reduce the funding for PCS moves. With less PCS funding available, the officials said, managers’ decisions to fund PCS moves could be more sensitive to current funding issues than to operational staffing needs. As a result, the Association said, comparable positions could be filled in different budget years at the same location using different levels of PCS benefits. Thus, two managers could receive disparate PCS benefits for essentially the same type of move. The Association acknowledged that there were no data showing that these unintended effects had occurred.
Likewise, without information such as the qualifications of employees and managers who applied for promotions before and after the change in policies, the qualifications of those who did not apply, and the funding for comparable positions over time, we could not determine whether the potential unintended effects identified by the Association had occurred. Air Traffic Services officials said they were still reviewing the concerns and planned to comment in the near future. We provided a copy of the draft report to Department of Transportation and FAA officials, who agreed with the contents of the report and provided a technical clarification regarding our description of the allocation of PCS funding under the 2000 Memorandum of Agreement between FAA and NATCA. They did not provide written comments on the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the report date. At that time, we will send copies of this report to interested congressional committees and to the Honorable Norman Y. Mineta, Secretary of Transportation; the Honorable Marion Blakey, Administrator, FAA; and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or would like to discuss it further, I can be reached at (202) 512-2834. Key contributors to this report are acknowledged in appendix II. We obtained and analyzed data on trends in funding for permanent change of station (PCS) moves in the Federal Aviation Administration’s (FAA’s) Air Traffic Services line of business (the FAA line of business for air traffic controllers and air traffic managers) since fiscal year 1996 and analyzed data on the type of funding (fully funded, fixed payments, or unfunded) and purpose (promotion, lateral transfer, or downgrade) of controllers’ and managers’ PCS moves between field offices from 1999 through 2001, the only years for which these data were available. The PCS moves between field offices account for about 80 percent of all Air Traffic PCS moves. The only information available for other moves (for example, between headquarters and field offices or between regional offices and headquarters) was the total number of moves and whether they were funded or unfunded. To assess the reliability of the data, we (1) discussed the data collection methods with responsible agency staff and (2) reviewed the information for reasonableness. We did not independently verify these data. In addition to the individual named above, Elizabeth Eisenstadt, Michele Fejfar, David Hooper, Chris Keisling, and E. Jerry Seigler made key contributions to this report. | In fiscal year 2001, the Federal Aviation Administration (FAA) spent more than $15 million to move air traffic controllers and their managers to new permanent duty locations. FAA classifies the funds that it spends for these moves as permanent change of station (PCS) benefits. In 1998, as part of a broader effort to reform its personnel policies, FAA changed its policies on PCS benefits. Instead of fully reimbursing the costs of all PCS moves and prohibiting unfunded PCS moves, as it once did, FAA now determines the amount of PCS benefits to be offered on a position-by-position basis and allows employees and managers to move at their own expense.
Under its new policies, FAA can fully reimburse the costs of a move if it determines that the move is in the interest of the government, or it can offer partial fixed relocation benefits if it determines that the agency will derive some benefit from the move. FAA's policies on eligibility for PCS benefits are the same for air traffic controllers and their managers, but the amounts of the benefits vary. According to these policies, eligibility depends on a determining official's decision about how critical a position is and/or whether FAA will benefit from the move. Air traffic controllers have been less likely than air traffic managers to be offered PCS benefits when they move between facilities. Between fiscal years 1999 and 2001, Air Traffic Services funded 16 percent of moves involving a promotion and 6 percent of lateral moves between field facilities for controllers, compared with 38 percent of promotional moves and 34 percent of lateral moves for managers. According to FAA officials, PCS costs have decreased and FAA's ability to quickly fill vacant controller positions has improved since the new PCS policies took effect. |
Created by the Deputy Secretary of Defense in January 2006, JIEDDO is responsible for leading, advocating, and coordinating all DOD actions in support of the combatant commanders’ and their respective joint task forces’ efforts to defeat IEDs as weapons of strategic influence. Prior DOD efforts to defeat IEDs included various process teams and task forces. For example, DOD established the Joint IED Defeat Task Force in June 2005, for which the Army provided primary administrative support. This task force replaced the Army IED Task Force, the Joint IED Task Force, and the Under Secretary of Defense, Force Protection Working Group. To focus all of DOD’s efforts and minimize duplication, DOD published a new counter-IED policy in February 2006 through DOD Directive 2000.19E, which changed the name of the Joint IED Defeat Task Force to JIEDDO and established it as a joint entity and jointly staffed organization within DOD, reporting directly to the Deputy Secretary of Defense. The directive states that JIEDDO shall “focus” (i.e., lead, advocate, and coordinate) all DOD actions in support of the Combatant Commanders’ and their respective Joint Task Forces’ efforts to defeat IEDs as “weapons of strategic influence.” In a prior report (GAO, Defense Management: More Transparency Needed over the Financial and Human Capital Operations of the Joint Improvised Explosive Device Defeat Organization, GAO-08-342 (Washington, D.C.: Mar. 6, 2008)), we made recommendations to improve the transparency of JIEDDO’s financial and human capital operations, including its internal controls. DOD and JIEDDO agreed with our recommendations and have taken actions in response. Since February 2006, JIEDDO has been responsible for developing DOD’s strategic plan for countering the IED threat, but its strategic-planning actions have not followed leading strategic management practices or have since been discontinued. In March 2007, we found that JIEDDO had not developed a strategic plan and, as a result, could not assess whether it was making the right investment decisions or whether it had effectively organized itself to meet its mission. We recommended that the Secretary of Defense require the Director of JIEDDO, in developing DOD’s IED defeat strategic plan, to clearly articulate JIEDDO’s mission and specify goals, objectives, and measures of effectiveness. JIEDDO fully concurred with our recommendations and was working to complete a strategic plan when we issued that report, and in September 2007, JIEDDO completed its DOD-wide counter-IED strategic plan. However, JIEDDO’s 2007 strategic plan did not include a means of measuring performance outcomes, a leading strategic management practice. Subsequent JIEDDO strategic-planning efforts also did not follow leading strategic management practices or have been discontinued. For example, JIEDDO’s 2009–2010 strategic plan contained performance measures, but JIEDDO discontinued using these measures after determining that the data from these measures were not relevant to the organization’s goals. We have previously reported that good strategic planning helps organizations (1) make the key decisions that will drive their actions, (2) measure the effectiveness of their actions to achieve intended results, and (3) if they are not achieving intended results, have the data to determine the modifications needed to achieve them—all attributes of a plan that helps maximize organizational resources.
Many of JIEDDO’s plans contained output measures, such as the percentage of initiatives for which JIEDDO completes operational assessments or the percentage of counter-IED initiatives that were adopted by one of the military services. While collecting outputs is an important initial step in measuring progress, outputs do not provide information about progress toward achieving JIEDDO’s mission as outcome measures would. Since 2006, JIEDDO has made several attempts to develop a counter-IED strategic plan, including its 2007 and 2009–2010 strategic plans, which, in the case of the 2007 plan, included elements for guiding DOD subordinate organizations and the military services involved with countering IEDs in developing their own counter-IED planning. However, those plans did not have outcome-related goals specific enough for JIEDDO and these organizations to be able to develop enduring measures of effectiveness that inform DOD whether its counter-IED mission is being met. As shown in figure 1, we identified 17 key actions or triggering events applicable to DOD that were to either produce counter-IED strategic plans for the department or further develop the strategic plans. However, the 17 actions have either been discontinued or did not satisfy key strategic-management-planning practices, including developing results-oriented strategic goals, developing performance measures, and adjusting plans or intended actions based on the results of these measures. For some of the 17 actions and events, we found that while JIEDDO had made efforts to satisfy leading strategic management practices, these efforts fell short of developing results-oriented goals and performance measures that link with DOD’s counter-IED mission. We assessed some efforts as partially fulfilling strategic management practices because developing output measures is a step toward developing outcome measures, and measuring individual initiatives contributes toward the overall counter-IED effort. However, JIEDDO has not expanded its assessments beyond these individual efforts and determined how these efforts, overall, help to achieve DOD’s counter-IED mission. In early January 2012, JIEDDO issued its counter-IED strategic plan for 2012–2016, which established five principal goals for JIEDDO, with three to six supporting objectives for each goal. This plan did not specify what actions JIEDDO planned to take to achieve these goals. On January 19, 2012, JIEDDO augmented its strategic plan by issuing an annex detailing numerous actions to achieve these objectives and establishing 230 separate metrics that JIEDDO expects will provide the means of assessing its progress. In addition, JIEDDO is planning to begin, in March 2012, quarterly internal reviews to assess progress and make adjustments to its counter-IED efforts accordingly. Such a step has not been part of JIEDDO’s past efforts. We see good potential in JIEDDO’s strategic plan; however, because the portion of the plan relevant to our recommendations was issued on January 19, 2012—shortly before issuance of this report—we did not evaluate the plan and therefore have not assessed the extent to which this new plan will follow leading strategic management practices and provide results-oriented strategic goals and sufficient performance metrics for JIEDDO. Further, according to JIEDDO officials, the strategic plan applies only to counter-IED efforts managed by JIEDDO and does not apply to all other counter-IED efforts departmentwide.
Consequently, successful implementation of JIEDDO’s strategic plan alone will not provide the means necessary for determining the effectiveness of all counter-IED efforts across DOD. According to JIEDDO officials, DOD will produce a departmentwide counter-IED strategic plan in the future, but there is no specified timeline for issuance of this plan. As JIEDDO moves forward to implement its counter-IED strategic plan, and DOD develops a departmentwide counter-IED strategic plan, DOD will continue to face difficulty in developing measures of effectiveness if it does not have results-oriented strategic goals to accompany DOD’s general counter-IED mission statement. The department has identified eliminating IEDs as a weapon of strategic influence as the overarching mission of its counter-IED programs but has not translated this mission into actionable goals and objectives. Without actionable goals and objectives established by DOD, JIEDDO and other DOD components cannot tie individual performance measures to DOD’s desired outcomes. As a result, DOD and external stakeholders are left without a comprehensive, data-driven assessment as to whether DOD’s counter-IED efforts are achieving DOD’s mission. Furthermore, without a means to measure the success of JIEDDO’s efforts in achieving DOD’s counter-IED mission, JIEDDO’s basis for determining how to invest its resources among its three lines of organizational effort—to attack the network, defeat the device, and train the force—is limited. While JIEDDO has established procedures to assess counter-IED gaps and prioritize and manage its requirements and individual investments—including coordinating and collaborating with various DOD entities—to rapidly pursue these critical lines of effort, JIEDDO and DOD are not informed about the overall effectiveness of their counter-IED efforts and use of resources as they relate to DOD’s mission. Lastly, until recently JIEDDO has not had a completed, fully developed strategic plan with long-term strategic goals that would inform incoming directors about which actions have taken place and which must be continued in order to maintain continuous progress toward achieving long-term goals. Having such a strategic plan would have benefitted JIEDDO leadership, as JIEDDO’s directors changed four times over the 6 years JIEDDO has existed (see fig. 1). Without this framework, new strategic-planning efforts were initiated under each of these directors to improve the organization and manage counter-IED support—efforts that contributed in varying degrees to strategic management but, as discussed above, were not implemented or were discontinued in many instances. Now that JIEDDO has completed its strategic plan, it should work to ensure that implementation helps provide continuity for the organization as JIEDDO leadership changes in the future. A provision in the National Defense Authorization Act for Fiscal Year 2012 repealed certain quarterly reporting requirements (see Pub. L. No. 112-81, § 1062(d)(5) (2011)); however, reporting requirements for other information related to counter-IED efforts may remain. In addition, DOD still lacks a comprehensive list of counter-IED efforts that would provide a better basis for determining key efforts to report to Congress. Technology Matrix Database: In 2009, DOD developed this database through JIEDDO in response to our recommendation to establish a comprehensive counter-IED database, expending a total of $225,000.
JIEDDO requested sponsorship from the DOD Deputy Director for Research and Engineering to make this database an official repository of DOD technology information for counter-IED efforts and to require full participation of all DOD entities. However, the database was not fully developed in its concept, structure, and procedures. Thus, Research and Engineering officials did not require all organizations involved in developing counter-IED solutions to use this database until these shortcomings were addressed. Without this requirement from Research and Engineering, JIEDDO concluded that the database could not provide comprehensive counter-IED information as intended; JIEDDO discontinued using this database for this purpose in early 2010 and looked to other ongoing alternatives to provide this capability. Tripwire Analytical Capability (TAC): JIEDDO acquired and further developed this system in 2009 for intelligence querying purposes but also explored this system for possible use in collecting comprehensive data on DOD’s counter-IED initiatives managed by the military services and other DOD agencies outside of JIEDDO, automatically through programmed computer interfaces. JIEDDO considered using these data to populate a JIEDDO counter-IED database. However, according to JIEDDO officials, JIEDDO subsequently determined that less expensive commercially available alternatives existed and, in May 2011, discontinued its exploration of TAC for collecting DOD counter-IED data. At the time JIEDDO ceased considering TAC for use in collecting data on DOD’s counter-IED initiatives, JIEDDO had not expended any additional funds on TAC specifically for this purpose. JIEDDO is currently developing a new JIEDDO-wide information technology architecture and plans to develop a database for counter-IED efforts across DOD as part of this new architecture. This effort is in the conceptualization stage, and JIEDDO officials do not anticipate completion before the end of fiscal year 2012. Further, JIEDDO does not have an implementation plan that includes a detailed timeline with milestones, a key management practice, to help track its progress in achieving this goal. JIEDDO’s expenditure tracking system does not differentiate between expenditures that constitute overhead and infrastructure and expenditures that JIEDDO considers to be separate, stand-alone counter-IED initiatives. Therefore, when JIEDDO began implementation of this process in May 2011, it had to review one by one its 887 funds-tracking-system numbers to separate its stand-alone counter-IED initiatives from overhead. JIEDDO completed this review on June 17, 2011, and concluded that 223, or approximately 25 percent, of JIEDDO’s 887 expenditure-tracking-system control numbers were currently active stand-alone counter-IED initiatives. While this list could provide JIEDDO and external stakeholders with a comprehensive inventory of active JIEDDO-funded counter-IED initiatives, it is incomplete because it does not identify or separate out inactive stand-alone counter-IED initiatives from administrative overhead expenditures. According to JIEDDO officials, JIEDDO could produce a comprehensive list of its counter-IED initiatives in a matter of 2 to 4 days, but it had not done so as of December 15, 2011.
Further, if JIEDDO did produce such a list, it would represent one point in time and would not provide a comprehensive DOD-wide database of counter-IED efforts because it would not include counter-IED efforts funded and managed by other DOD components independently of JIEDDO (see also GAO, Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue, GAO-11-318SP (Washington, D.C.: Mar. 1, 2011)). Although many of the details available on the six directed energy initiatives we identified are classified, the efforts exhibit a range of different approaches regarding physical size, weight, and cost, and data show that the various DOD components involved have spent about $104 million collectively on these efforts to date. However, given the lack of a DOD-wide counter-IED database, there could be more directed energy efforts that we have not identified. Moreover, concerns regarding the fragmentation and duplication in DOD’s directed energy counter-IED efforts have risen to the highest levels within the warfighter community. Specifically, the commander of U.S. Central Command, in August 2011, conveyed concern regarding issues including apparent “duplicity of effort” in directed energy technology, with organizations in DOD working different solutions. The correspondence called for coordination and cooperation by DOD on its directed energy efforts to develop a directed energy system that works in theater as quickly as possible, given that the development has been under way since 2008. In response, in August 2011, JIEDDO, as DOD’s coordinating agency for these efforts, developed a plan and, in September 2011, brought various service program offices together to develop a solution as soon as possible. According to JIEDDO officials, the six systems will continue in development through fiscal year 2012, at which point JIEDDO will determine which of the systems best satisfies U.S. Central Command’s requirement. While this new approach may eliminate future unnecessary duplication of effort, earlier coordination and better visibility could have prevented duplication that may have occurred up to this point. According to JIEDDO officials, the level of concern expressed, and the fact that the concern was expressed in writing, enabled JIEDDO to secure the cooperation needed from the various organizations working different directed energy solutions to coordinate in this instance. However, this is a unique occurrence because, according to JIEDDO officials, JIEDDO does not have the authority to direct—i.e., compel—various DOD organizations that may be working on overlapping technologies or efforts to reach consensus regarding selection among competing alternatives. Therefore, JIEDDO has not always been successful in securing the cooperation of the services to coordinate on counter-IED efforts. Radio Frequency Jamming Systems: The Army and Navy continue to pursue separate developments of counter-IED jamming systems, which provide a limited radius of protection to prevent IEDs from being triggered by an enemy’s radio signals. In 2007, DOD established the Navy as single manager and executive agent for ground-based jamming.
Under DOD Directive 5101.14, military services may conduct ground-based jammer research and development to satisfy military service-unique requirements if the requirements are coordinated before initiation with DOD’s single manager for jammers and if, for any system or system modifications resulting from such efforts, the operational technical characteristics and logistics plans are approved by the single manager. The Navy has developed a standard technology and system for ground-based jamming called JCREW I1B1, which DOD has designated as the ground-based jamming program for the entire department. However, the Army has continued to develop its own ground-based jamming system, called Duke. According to Navy officials, in 2010 the Army continued to develop new technology for insertion into its Duke system—expected to cost about $1.062 billion when completed and installed—without notifying and coordinating with the Navy as DOD’s single manager for ground-based jammer technology. According to Army officials, the Army is pursuing development of its own system because it intends to expand the use of this technology for purposes other than countering IEDs, such as jamming enemy command, control, and communication systems. However, according to Navy officials, the CREW system’s technology has the flexibility and capacity to expand and provide the same additional functions as the Army plans for its Duke system. Moreover, according to Navy officials, the Navy’s system is further along in its development. Because the Navy and Army are pursuing separate jamming systems, it is not clear whether DOD is taking the most cost-effective approach. While, according to JIEDDO officials, the Office of the Secretary of Defense was considering how to resolve this issue, a decision had not been made before this report was completed. Regardless of the final outcome, however, a more coordinated approach early in the process when initiating programs of this magnitude could prevent unnecessary duplication in costs and effort. Electronic Data Collection Systems: According to JIEDDO officials, JIEDDO has funded the development and support of approximately 70 electronic data collection and analysis tools that overlap to some degree because they include capabilities to collect, analyze, and store data to help the warfighter combat the IED threat. Although JIEDDO recently reported that it could not verify total funding for its information technology investments, GAO determined through a review of DOD financial records that DOD has expended at least $184 million collectively on information technology development for its data collection and analysis tools. According to JIEDDO officials, JIEDDO is aware of the redundancy within these electronic tools. In April 2011, the JIEDDO Deputy Director for Information Management raised the issue of redundancy in JIEDDO’s information technology systems, including its counter-IED data collection and analysis systems and tools. Consequently, since April 2011, JIEDDO has worked to eliminate overlapping information-technology capabilities where feasible, including among the approximately 70 analytical tools JIEDDO has funded and developed for use in countering IED networks. For example, on July 1, 2011, JIEDDO discontinued funding for one of these initiatives—Tripwire Analytical Capability (TAC)—citing as reasons TAC’s limited purpose, high cost, and duplicative capabilities.
However, in making the decision to discontinue TAC yet continue operating the other data collection and analysis tools, JIEDDO had not compared and quantified all of the potential options to streamline or consolidate these tools into a single collective system, including one capable of extracting data on counter-IED efforts across DOD. As a result, JIEDDO cannot be certain it is pursuing the most advantageous approach for collecting, analyzing, storing, and using available data for combating the IED threat. Further, although JIEDDO has discontinued funding TAC, the Defense Intelligence Agency is continuing to develop the tool for its own use, resulting in the potential for DOD-wide duplication between TAC and JIEDDO’s other data collection and analysis tools. Six years after DOD established JIEDDO as its coordinating agency to lead, advocate, and coordinate responses to the IED threat across the department, DOD continues to lack comprehensive visibility of its counter-IED expenditures and investments, including those from JIEDDO, the military services, and relevant DOD agencies. The absence of a strategic plan with outcome-oriented goals and of visibility over DOD’s counter-IED efforts are recurring themes that we have identified in prior reports as affecting JIEDDO’s ability to manage DOD’s efforts effectively and efficiently. JIEDDO has demonstrated progress in addressing previously raised issues—by developing a formal, more rigorous internal control system and by issuing a 2012–2016 strategic plan for the management of JIEDDO’s counter-IED efforts—but these actions have not fully addressed the issues we have raised in this report. Specifically, DOD has not implemented adequate actions to (1) provide a comprehensive plan to ensure that all DOD counter-IED efforts are strategically managed in order to achieve its goal to defeat IEDs as a weapon of strategic influence, and (2) comprehensively list all DOD-wide counter-IED initiatives in a database that provides internal and external parties with visibility into the department’s counter-IED efforts. Without a comprehensive plan and listing of its counter-IED initiatives, DOD continues to risk fragmentation, overlap, and duplication in its counter-IED efforts, such as those identified in this report, and will lack the ability to prioritize projects within future budget levels. Given the limited applicability of JIEDDO’s recently issued strategic plan and the limited progress JIEDDO has made in implementing our prior recommendation regarding developing a comprehensive listing of DOD-wide efforts, it is critical that DOD place greater focus and emphasis on the actions it takes in addressing these issues. As the nation addresses fiscal challenges and DOD is directed to identify efficiencies, it will need to reduce and eliminate unnecessarily duplicative counter-IED initiatives. We therefore reiterate our prior recommendation that the Secretary of Defense direct the military services to work with JIEDDO to develop a database for all DOD’s counter-IED initiatives. In addition to the prior recommendation reiterated above that remains open, we recommend that the Secretary of Defense direct the Deputy Secretary of Defense, who is responsible for direction and control of JIEDDO, to take the following four actions: Define outcome-related strategic goals associated with DOD’s counter-IED mission to enable the development of measures of effectiveness that will help to determine progress of DOD’s counter-IED efforts.
Assess JIEDDO’s recently completed strategic plan and its implementation to ensure that it incorporates outcome-related strategic goals, includes sufficient measures of effectiveness to gauge progress, and uses the data collected from these metrics to adjust its counter-IED efforts, as needed. Develop an implementation plan for the establishment of DOD’s counter-IED database, including a detailed timeline with milestones to help achieve this goal. Develop a process to use DOD’s counter-IED database, once it is established, to identify and compare all counter-IED initiatives and activities, to enable program monitoring, and to reduce any duplication, overlap, and fragmentation among counter-IED initiatives. In written comments on a draft of this report, DOD concurred with the third of our four recommendations—to develop an implementation plan for the establishment of DOD’s counter-IED database—and did not concur with the other three. DOD’s written comments are included in appendix II. DOD also provided technical comments that we have incorporated into this report where appropriate. In disagreeing with our first recommendation for the Deputy Secretary of Defense to define outcome-related strategic goals associated with DOD’s counter-IED mission to enable the development of measures of effectiveness that will help to determine progress of DOD’s counter-IED efforts, the department stated that the JIEDDO Director has accomplished this task by issuing its 2012–2016 counter-IED strategic plan in January 2012. While we agree that the recent issuance of JIEDDO’s plan is a positive development, it does not fully address our recommendation because the plan does not apply to all counter-IED efforts departmentwide. According to JIEDDO officials, the plan applies to the management of JIEDDO counter-IED efforts only and has not been adopted as DOD’s strategic plan for managing all of its counter-IED expenditures and investments from the military services and relevant DOD agencies. Therefore, JIEDDO’s strategic goals do not satisfy the need for the Deputy Secretary of Defense to define outcome-related strategic goals for the department taken as a whole, and we believe our recommendation remains valid. In disagreeing with our second recommendation for the Deputy Secretary of Defense to document and assess JIEDDO’s strategic plan to ensure that it incorporates outcome-related strategic goals, includes sufficient measures of effectiveness to gauge progress, and uses the data collected from these metrics to adjust its counter-IED efforts, as needed, the department stated that JIEDDO has established outcome-related strategic goals and measures of effectiveness in its January 2012 strategic plan and related implementation plan. DOD further stated that in March 2012 JIEDDO will begin quarterly internal reviews to assess progress against its goals and make adjustments to its counter-IED efforts. Completion of JIEDDO’s strategic plan is a positive step; however, because the portion of the plan relevant to our prior recommendations—the annex containing measures of effectiveness, timelines, and goals—was issued on January 19, 2012, we were unable to evaluate the plan before issuance of this report and therefore cannot comment on its adequacy relative to our recommendations. However, JIEDDO’s numerous prior strategic-planning actions have not followed leading strategic management practices or have been discontinued.
Therefore, JIEDDO’s recently completed counter-IED strategic plan and plans for internal quarterly reviews alone do not negate the need for the Deputy Secretary of Defense to assess the adequacy of JIEDDO’s strategic plan and its implementation, and we believe our recommendation remains valid. However, we modified the language in our recommendation to reflect the fact that JIEDDO has now issued a strategic plan and to clarify that the remaining action needed by the Deputy Secretary of Defense is to assess its adequacy and implementation. In concurring with our third recommendation for the Deputy Secretary of Defense to develop an implementation plan for the establishment of DOD’s counter-IED database, DOD stated that DOD Directive 2000.19E is currently being revised to create a requirement for Combatant Commands, military services, and DOD agencies to report counter-IED initiatives to JIEDDO. According to DOD, this step will enable JIEDDO to develop a database for all DOD counter-IED initiatives. We agree that establishing this requirement should help JIEDDO’s counter-IED database development; however, according to JIEDDO officials, DOD Directive 2000.19E has been under revision for 2 years without DOD issuing a new directive. Therefore, it is critical that DOD complete this task as soon as possible to enable JIEDDO to develop its planned counter-IED database as described in DOD’s comments. In disagreeing with our fourth recommendation for the Deputy Secretary of Defense to develop a means to identify and reduce any duplication, overlap, and fragmentation among counter-IED initiatives, DOD stated that it had existing processes and organizations, including JIEDDO and its Senior Integration Group, to facilitate coordination and collaboration with the military services and across DOD, which would address this recommendation. We agree that existing DOD processes such as JIEDDO’s Capabilities Development Process and DOD’s Senior Integration Group prioritization process can be helpful in coordinating DOD’s counter-IED efforts. However, the effectiveness of DOD’s existing coordination and collaboration processes has been limited, given that these processes did not prevent the issues of potential duplication we identified in this report. For example, in the case of DOD’s directed energy counter-IED efforts, on which DOD has collectively expended $104 million, the processes cited by DOD in its response did not identify and resolve the fragmentation and potential duplication present in these efforts. As a result, the commander of U.S. Central Command, as mentioned previously, protested in writing to DOD officials about potential duplication of efforts. Without an adequate process to use DOD’s counter-IED database once it is developed, DOD will continue to lack assurance that it is identifying and addressing instances of potential duplication before making significant investments. In finalizing our report, we modified the wording of our recommendation to clarify our intent that DOD establish a process (rather than a means) to use its counter-IED database once it is established. In addition to comments on our recommendations, DOD questioned the accuracy of our statements regarding the soundness of JIEDDO’s prioritization and resource allocation determinations. Specifically, DOD stated that our report was inaccurate in stating that JIEDDO does not have a sound basis to determine how to invest DOD’s resources among the lines of operation: attack the network, defeat the device, and train the force.
DOD further stated that JIEDDO has established procedures to assess counter-IED gaps and prioritize requirements in coordination with warfighting commanders and that JIEDDO coordinates counter-IED initiatives with numerous DOD offices, which DOD concluded ensures that warfighting priorities, the effectiveness of fielded counter-IED efforts, and cost reasonableness are addressed and evaluated. DOD also asserted that JIEDDO’s existing programming and prioritization processes align JIEDDO’s investment resources with Combatant Commander priorities. We recognize that JIEDDO has resource allocation and prioritization processes in place and have modified the language of this report to acknowledge these processes where applicable. However, we maintain our position that JIEDDO’s basis for determining resource allocations and prioritizations is limited because DOD has not been able to identify all of its counter-IED efforts, as stated above, and lacks the actionable goals and objectives needed to tie JIEDDO’s and the department’s performance measures to outcomes that would assess its counter-IED efforts. Therefore, DOD does not have full assurance that its investments are achieving its strategic goal in the counter-IED fight. We are sending copies of this report to other interested congressional committees and the Secretary of Defense. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (404) 679-1808 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. We considered counter-IED strategic planning efforts from February 2006 through 2012. To analyze the extent to which DOD has provided a comprehensive counter-IED strategic plan, including strategic results-oriented goals and metrics that determine the effectiveness of efforts across DOD to combat IEDs, we collected and reviewed DOD’s counter-IED strategic-planning documents from JIEDDO. We also reviewed prior GAO reports and work papers involving strategic planning and management, both for JIEDDO and for the government in general. We used these GAO reports to identify leading strategic management practices, derived from principles demonstrated by successful results-oriented organizations, for use as evaluation criteria in this review. In addition, we interviewed JIEDDO officials involved in strategic planning and assessment to learn about the implementation of the actions detailed in the counter-IED strategic planning documents collected. Furthermore, we attended a counter-IED conference sponsored by JIEDDO in March 2011 that focused on a key element of strategic planning and management—measuring outcomes and performance—to observe and collect additional information relevant to DOD’s counter-IED strategic management. From the documents collected and interviews conducted, we identified several triggering actions that either provided the impetus for, or resulted in, counter-IED strategic management efforts in JIEDDO or elsewhere in DOD. We compared these actions against the leading strategic management criteria described above and rated each according to its fulfillment of these leading practices.
We considered counter-IED efforts from fiscal years 2006 through 2011 managed by DOD components with involvement in counter-IED efforts: JIEDDO, the military services, the combatant commands, and defense agencies. To determine the extent to which DOD has identified counter-IED initiatives and activities and coordinated these efforts, we reviewed JIEDDO databases on counter-IED efforts and interviewed OSD, military service, and JIEDDO officials to discuss the availability of data about additional counter-IED efforts and initiatives. Through our interactions with JIEDDO officials, we determined that the best, most comprehensive repository of counter-IED information that currently existed was the Technology Matrix. We analyzed the Technology Matrix to obtain a list of persons, for each organization, who had entered information regarding counter-IED efforts into the database. Additionally, we reviewed and analyzed prior GAO counter-IED work to obtain relevant contact information, obtained current contact information for relevant organizations through our Inspector General liaison, and reviewed and analyzed other external sources of information that identified relevant organizations. We interviewed OSD, service, and JIEDDO officials to discuss and determine awareness of DOD’s counter-IED efforts. To determine the effects of the absence of a comprehensive DOD listing of counter-IED initiatives within the department, we assessed whether DOD components continue to independently pursue counter-IED efforts that may be redundant or overlapping. We updated counter-IED initiative case studies that we previously reported as having redundancy of effort and developed additional case studies of overlapping counter-IED efforts within DOD. We purposefully selected the additional case studies based on information in interviews with DOD officials or in data or documentation collected during this review that evidenced similar capabilities and objectives among two or more counter-IED efforts. In each case study, we compared the overlapping counter-IED efforts to determine and describe the degree of redundancy and potential duplication among the efforts, given the overlap in the capabilities and functions of these systems. The following table identifies 17 key actions or triggering events applicable to DOD that were to either produce counter-IED strategic plans for the department or further develop those strategic plans. This table is similar to figure 1 but shows the interactive text without needing the interactive computer capability. In addition to the contact named above, key contributors to this report were Grace Coleman, Rajiv D’Cruz, Emily Norman, Michael Shaughnessy, Rebecca Shea, Michael Silver, Amie Steele, William M. Solis, John Strong, and Tristan T. To. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Warfighter Support: DOD’s Urgent Needs Processes Need a More Comprehensive Approach and Evaluation for Potential Consolidation. GAO-11-273. Washington, D.C.: March 1, 2011. Warfighter Support: Actions Needed to Improve the Joint Improvised Explosive Device Defeat Organization’s System of Internal Control. GAO-10-660. Washington, D.C.: July 1, 2010. Warfighter Support: Improvements to DOD’s Urgent Needs Processes Would Enhance Oversight and Expedite Efforts to Meet Critical Warfighter Needs. GAO-10-460. Washington, D.C.: April 30, 2010.
Unmanned Aircraft Systems: Comprehensive Planning and a Results-Oriented Training Strategy Are Needed to Support Growing Inventories. GAO-10-331. Washington, D.C.: March 26, 2010. Warfighter Support: Challenges Confronting DOD’s Ability to Coordinate and Oversee Its Counter-Improvised Explosive Devices Efforts. GAO-10-186T. Washington, D.C.: October 29, 2009. Warfighter Support: Actions Needed to Improve Visibility and Coordination of DOD’s Counter-Improvised Explosive Device Efforts. GAO-10-95. Washington, D.C.: October 29, 2009. Unmanned Aircraft Systems: Additional Actions Needed to Improve Management and Integration of DOD Efforts to Support Warfighter Needs. GAO-09-175. Washington, D.C.: November 14, 2008. Defense Management: More Transparency Needed over the Financial and Human Capital Operations of the Joint Improvised Explosive Device Defeat Organization. GAO-08-342. Washington, D.C.: March 6, 2008. Defense Business Transformation: Achieving Success Requires a Chief Management Officer to Provide Focus and Sustained Leadership. GAO-07-1072. Washington, D.C.: September 5, 2007. Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003. Transportation Security Administration: Actions and Plans to Build a Results-Oriented Culture. GAO-03-190. Washington, D.C.: January 17, 2003. Highlights of a GAO Forum: Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies. GAO-03-293SP. Washington, D.C.: November 14, 2002. Executive Guide: Effectively Implementing the Government Performance and Results Act. GGD-96-118. Washington, D.C.: June 1, 1996. | Over $18 billion has been appropriated to the Joint Improvised Explosive Device (IED) Defeat Organization (JIEDDO) to address the IED threat, and there is widespread consensus that this threat will continue to be influential in future conflicts. DOD established JIEDDO in 2006 to lead, advocate, and coordinate all DOD actions in support of the combatant commanders and their respective joint task forces’ efforts to defeat IEDs. This report, one in a series on JIEDDO’s management and operations, addresses the extent to which DOD (1) has provided a comprehensive counter-IED strategic plan, including measurable objectives that determine the effectiveness of efforts across DOD to combat IEDs, and (2) has identified counter-IED initiatives and activities and coordinated these efforts. To address these objectives, GAO reviewed counter-IED efforts from fiscal years 2006 through 2011, reviewed and analyzed relevant strategic-planning documents, collected and reviewed data identifying DOD counter-IED efforts, and met with DOD and service officials. As the DOD agency responsible for leading, advocating, and coordinating all DOD efforts to defeat IEDs, JIEDDO was directed to develop DOD’s counter-IED strategic plan in February 2006 under DOD Directive 2000.19E. As previously recommended by GAO, JIEDDO has made several attempts to develop such a plan, but its strategic-planning actions have not followed leading strategic-management practices or have since been discontinued. For example, JIEDDO’s 2007 strategic plan did not contain a means of measuring its performance outcomes, a leading strategic-management practice.
In addition, JIEDDO’s 2009–2010 strategic plan contained performance measures, but JIEDDO discontinued using these measures because it later determined that they were not relevant to the organization’s goals. Although DOD tasked JIEDDO to develop its counter-IED strategic plan, DOD has not translated its general counter-IED mission objective of eliminating IEDs as a weapon of strategic influence into actionable goals and objectives. JIEDDO issued a new counter-IED strategic plan in January 2012; however, the new plan does not apply to all counter-IED efforts departmentwide, only to those managed by JIEDDO. Consequently, JIEDDO’s new strategic plan alone will not provide the means necessary for determining the effectiveness of all counter-IED efforts across DOD. Further, as JIEDDO implements its plan, it will continue to face difficulty measuring effectiveness until DOD establishes and provides results-oriented goals to accompany its general mission objective. Without actionable goals and objectives established by DOD, JIEDDO and other DOD components cannot tie individual performance measures to DOD’s desired outcomes. As a result, DOD and external stakeholders will be left without a comprehensive, data-driven assessment of whether counter-IED efforts are achieving DOD’s mission and will not be informed about the overall effectiveness of these efforts or the use of resources as they relate to that mission. DOD has not fully identified its counter-IED initiatives and activities and as a result is not able to effectively coordinate these efforts across DOD. In attempting to develop a comprehensive database, as previously recommended by GAO, JIEDDO has used at least three systems to collect and record complete information on DOD’s counter-IED efforts but discontinued each of them for reasons including lack of timeliness, lack of comprehensiveness, or cost. For example, beginning in 2009, JIEDDO pursued the Technology Matrix as a possible counter-IED database for all efforts within DOD. However, JIEDDO discontinued support for the Technology Matrix as a database because DOD did not require all relevant organizations to provide information to JIEDDO, and the database therefore was not comprehensive. Without an automated means for comprehensively capturing data on all counter-IED efforts, the military services may be unaware of potential overlap, duplication, or fragmentation. For example, GAO identified six systems that DOD components developed to emit energy to neutralize IEDs; DOD spent about $104 million collectively on these efforts, which could be duplicative because the military services did not collaborate on them. Given the lack of a DOD-wide counter-IED database, other efforts may be overlapping. GAO recommends four actions for DOD to develop a comprehensive strategic plan with strategic outcome-related goals and a complete listing of counter-IED efforts to maximize its resources. DOD concurred with one of the recommendations but did not concur with three. GAO continues to believe that its recommendations are warranted, as discussed in the report. |
Mortgage lenders keep the loans they originate in the primary market or sell them in the secondary, or resale, markets. In turn, purchasers of mortgage loans in the secondary markets either hold the loans in their own portfolios or, most often, pool together a group of loans to back MBS that are sold to investors or held in the originator’s portfolio. Secondary loan markets benefit lenders, borrowers, and investors in a number of ways. First, they allow lenders to manage their liquidity needs, reduce interest rate risk, and generate funds for additional lending. Second, they increase the amount of credit available to borrowers and help lower interest rates by fostering competition among lenders. Finally, they allow investors to further diversify their risks and to sell their interests on active secondary markets to other willing investors. Ginnie Mae was created in 1968 through an amendment to the National Housing Act. Organizationally, Ginnie Mae operates as a unit of HUD, and its administrative, staffing, and budgetary decisions are coordinated with HUD’s. Ginnie Mae defines its mission as expanding affordable housing in America by linking capital markets to the nation’s housing markets, largely by serving as the dominant secondary market vehicle for government-backed loan programs. These programs, which insure or guarantee mortgage loans that are originated in the private sector, are administered by a variety of federal agencies, including FHA, VA, RHS, and PIH. The government backing provided by these programs expands opportunities for homeownership to borrowers who may have difficulty obtaining a conventional mortgage. Ginnie Mae does not buy or sell loans or issue mortgage-backed securities. Rather, it provides guarantees backed by the full faith and credit of the U.S. government that investors will receive timely payments of principal and interest on securities supported by pools of government-backed loans, regardless of whether the borrower makes the underlying mortgage payment or the issuer makes timely payments on the MBS. Figure 1 shows the process of Ginnie Mae securitization. All mortgages in the Ginnie Mae pool must be insured or guaranteed by a government agency and have eligible interest rates and maturities. Ginnie Mae has several different products. Ginnie Mae’s original MBS program, Ginnie Mae I, requires that all pools contain similar types of mortgages (e.g., single family) with similar maturities and the same interest rates. The Ginnie Mae II MBS program, which was introduced in 1983, permits pools to contain more heterogeneous loans. For example, the underlying mortgages in a pool can have varying interest rates, and a pool can be created using adjustable rate mortgages (ARMs). Ginnie Mae’s Multiclass Securities Program, introduced in 1994, includes, among other things, Real Estate Mortgage Investment Conduits (REMIC) and Ginnie Mae Platinum Securities. REMICs are designed to tailor the prepayment and interest rate risks associated with MBS to investors with varying investment goals. These products direct principal and interest payments from underlying MBS to classes, or tranches, with different principal balances, interest rates, and other characteristics. Ginnie Mae Platinum Securities allow investors to aggregate MBS with relatively small remaining principal balances and similar characteristics into new, more liquid securities.
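To make the tranching mechanics concrete, the following is a minimal illustrative sketch, not a representation of Ginnie Mae’s actual structuring rules, of how a sequential-pay structure can direct principal from underlying MBS to two hypothetical classes. All balances, coupons, and cash flows are invented for illustration:

```python
# A minimal sketch of a two-class, sequential-pay structure. It illustrates
# how a REMIC-style security can direct principal from underlying MBS to
# classes (tranches) with different balances. All numbers are hypothetical,
# and actual REMIC structures are far more varied.

def sequential_pay(principal_cashflows, balances, annual_coupons):
    """Pay interest on each class's remaining balance, then allocate each
    period's principal to the classes in order until each is retired."""
    balances = list(balances)
    for period, principal in enumerate(principal_cashflows, start=1):
        interest = [round(b * c / 12, 2) for b, c in zip(balances, annual_coupons)]
        paid = []
        for i in range(len(balances)):
            p = min(principal, balances[i])   # senior class absorbs principal first
            balances[i] -= p
            principal -= p
            paid.append(p)
        print(f"period {period}: interest={interest}, "
              f"principal paid={paid}, remaining={balances}")

# $60 of pool principal arrives in each of the first two periods and $30 in
# the third, against a $100 class A and a $50 class B, each with a 6 percent coupon.
sequential_pay([60.0, 60.0, 30.0], [100.0, 50.0], [0.06, 0.06])
```

Because class A absorbs all principal until it is retired, it has a shorter expected life and less prepayment uncertainty than class B, which is the sense in which REMICs tailor prepayment risk to investors with different goals.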
Investors in Ginnie Mae MBS face prepayment risk—that is, the possibility that borrowers will pay off their mortgages early, reducing the amount of interest earned. However, investors do not face credit risk—the possibility of loss from unpaid mortgages—because the underlying mortgages backing the pools are federally insured or guaranteed and Ginnie Mae guarantees timely payment of principal and interest. FHA’s single-family loan program and PIH’s loan guarantee programs insure nearly 100 percent of the loan amount. VA guarantees the lender against losses, subject to a cap equal to 25 percent to 50 percent of the loan amount based on the size of the loan; RHS guarantees up to 90 percent of the loan value. Issuers are responsible for delinquent loans in pools. When a Ginnie Mae issuer defaults in making timely payments of principal and interest to investors, Ginnie Mae makes the payments and takes over the issuer’s entire portfolio of government-backed loans that stand behind the securities that Ginnie Mae has guaranteed. Ginnie Mae charges issuers a guarantee fee for providing its guarantee of timely payment. The fee varies depending on the product and is six basis points for securities backed by single-family loans, which represent the majority of Ginnie Mae MBS. Issuers also pay a commitment fee that gives them the authority to pool mortgages into Ginnie Mae MBS. Issuers of Ginnie Mae securities may also collect a fee to cover the cost of servicing the underlying mortgages (generally 44 basis points for Ginnie Mae I products and 19 to 69 basis points for Ginnie Mae II products). Ginnie Mae does not receive appropriations or borrow money to finance its credit operations. The agency’s revenues exceed its expenses, which reduces the federal budget deficit. Ginnie Mae securities finance the great majority of FHA and VA loans, suggesting that the agency is fulfilling its basic mission, and the agency faces relatively little competition in the market for government-backed mortgage loans. However, Ginnie Mae’s share of the total MBS market has declined over the last 20 years, both in terms of new issuances and volume outstanding, largely because FHA and VA loan origination has not kept pace with growth in the overall mortgage market and because securitization of conventional mortgages has become far more prevalent. Historically, the vast majority of government-backed housing loans have been pooled to back MBS for which Ginnie Mae guarantees the timely payment—a trend that continues today. Ginnie Mae issued its first MBS in 1970, and since that time it has guaranteed a cumulative total of more than $2 trillion of MBS. According to Ginnie Mae, its securities historically have represented roughly 90 percent of the market for FHA and VA loans. For example, between fiscal years 1998 and 2004, Ginnie Mae securities financed between about 84 percent and 96 percent of FHA-insured single-family loans (see fig. 2). In fiscal year 2004, Ginnie Mae issued a total of $149.1 billion in MBS. These MBS financed 91 percent of all eligible loans insured or guaranteed by FHA and VA. Ginnie Mae securities also have financed about half of RHS-guaranteed single-family loans since 1999 and financed roughly 40 percent of PIH-backed loans in fiscal year 2004. In 2004, newly issued Ginnie Mae securities financed $83.8 billion in FHA-insured loans, $31.4 billion in VA-guaranteed loans, and $1.6 billion in loans guaranteed by RHS and PIH.
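These figures lend themselves to a quick arithmetic check. The sketch below recomputes the portfolio shares reported next from the 2004 dollar volumes above and translates the six-basis-point guarantee fee into dollar terms for a hypothetical (invented) $100 million pool balance:

```python
# Arithmetic behind the fiscal year 2004 issuance figures cited above.
fha, va, rhs_pih = 83.8, 31.4, 1.6             # $ billions of loans financed
total = fha + va + rhs_pih                     # 116.8
print(f"FHA share:     {fha / total:.0%}")     # ~72 percent
print(f"VA share:      {va / total:.0%}")      # ~27 percent
print(f"RHS/PIH share: {rhs_pih / total:.0%}") # ~1 percent

# The six-basis-point guarantee fee on a hypothetical $100 million
# single-family MBS balance (a basis point is 0.01 percent of principal):
balance = 100_000_000
annual_guarantee_fee = balance * 6 / 10_000    # $60,000 per year
print(f"annual guarantee fee: ${annual_guarantee_fee:,.0f}")
```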
As shown in figure 3, FHA and VA loans represented 72 percent and 27 percent, respectively, of Ginnie Mae’s portfolio of new issuances that year, with RHS and PIH representing about 1 percent. About 92 percent of the loans backing Ginnie Mae MBS were single-family loans; the remainder were multifamily loans. Because Ginnie Mae’s charter keeps it focused on a discrete portion of the MBS market—specifically, that of loans made under FHA, VA, RHS, and PIH programs—the volume of Ginnie Mae’s new MBS issuance is linked directly to the origination volume of these programs. Changes in Ginnie Mae’s market volume over the years are thus largely a reflection of changes in the volume of FHA and VA loans, which represent 99 percent of Ginnie Mae’s portfolio. Although Ginnie Mae securities finance the great majority of the government-backed loans it is authorized to support, it does face potential competition from other secondary market entities. Federally insured and guaranteed loans can be expected to appeal to conventional securitizers because these loans carry little to no credit risk. However, Ginnie Mae has consistently captured 90 percent or more of the market for FHA and VA loans. Market participants told us that Ginnie Mae captured most of the market because of the difficulty of competing with the government guarantee of timely payment. This guarantee helps Ginnie Mae securities command a higher price and, correspondingly, offer a lower yield than other MBS of government-backed loans. We spoke with a number of secondary market participants that have been or could become active in the market for government-backed loans, including the Federal Home Loan Banks, Fannie Mae, Freddie Mac, state and local government agencies, and private label issuers. In general, they have had limited or no involvement in Ginnie Mae’s market. Moreover, for a variety of reasons, they do not appear to have plans to encroach on Ginnie Mae’s market to any substantial degree, as the following examples illustrate: The Federal Home Loan Banks (FHLBanks) have mortgage programs under which they purchase pools of conventional and federally insured or guaranteed mortgage loans from member banks. First authorized in 1998, the programs go by the names of the Mortgage Partnership Finance® program and the Mortgage Purchase Program. The programs were attractive to lenders in part because lenders could use them to sell their mortgages without paying guarantee fees. In 2000, the FHLBanks took over a significant amount of Ginnie Mae’s market share and purchased $12.7 billion in FHA and VA loans, representing about 11 percent of the combined market for those loans. However, the Federal Housing Finance Board, which oversees the FHLBanks, became concerned because the program was intended to focus on conventional rather than FHA loans. The board took measures to encourage the FHLBanks to limit their purchase of FHA loans to no more than one-third of their mortgage purchase program portfolio. After 2000, the FHLBanks greatly reduced their purchases of FHA loans. From 2001 to 2003, they purchased loans representing about 4 percent to 5 percent of the FHA market, which then declined further to about 2 percent in 2004. In fiscal year 2004, Fannie Mae purchased 4 percent of all FHA and VA originations. Its share of FHA and VA originations has varied over time, ranging from 1 percent to 6 percent between 1990 and 2004, or just 0.3 percent to 3 percent of Fannie Mae’s total purchase activity.
According to Fannie Mae officials, these purchases of government loans consist largely of repurchases of delinquent loans. A Fannie Mae official told us the company did not systematically purchase FHA loans and in its normal course of business did not consider itself a competitor with Ginnie Mae. Fannie Mae does not receive credit from HUD toward its affordable housing goals by purchasing government-backed loans. Freddie Mac has purchased less than 1 percent of the market of FHA and VA loans each year since 1990. Freddie Mac officials said that its competition with Ginnie Mae is largely indirect, by encouraging conventional lending to the most creditworthy low- and moderate-income borrowers who might otherwise receive a mortgage through FHA or VA. Freddie Mac officials also said they do not compete with Ginnie Mae in the secondary market directly because it is hard to compete with Ginnie Mae’s government guarantee. In addition, as with Fannie Mae, government-backed loans do not count toward Freddie Mac’s required affordable housing goals. Freddie Mac does purchase some mortgage revenue bonds that are collateralized by FHA and VA loans and directly purchases some FHA and VA loans that Ginnie Mae does not securitize. State and local government entities, including housing finance agencies, issue mortgage revenue bonds to raise funds in the capital markets for mortgage lending. Because these bonds are tax exempt, investors are willing to accept a lower interest rate for them. This interest savings is passed on through lenders to lower-income families in the form of loans with interest rates below the market average. These bonds often finance government-backed mortgages. As of 2003, 71 percent of the mortgages that revenue bonds financed were insured or guaranteed by a federal program—58 percent by FHA, 10 percent by RHS, and 3 percent by VA. The overall volume of mortgage revenue bonds issued was $10.7 billion in 2003. Private label issuers purchased an estimated 3 percent of FHA and VA loans in 2004. These issuers account for an increasingly large share of the overall MBS market, but most of their market consists of loans not offered by FHA and VA programs, such as jumbo nonconforming loans and home equity lines of credit. According to RHS officials, private label issuers do currently securitize the majority of Section 538 multifamily loans guaranteed by RHS, but these loans account for less than 1 percent of Ginnie Mae’s portfolio. Most of the competition for Ginnie Mae’s market share does not come directly—that is, secondary market participants are not seeking to purchase or securitize significant numbers of government-backed loans. Rather, lenders compete with Ginnie Mae indirectly by seeking greater market share at the origination level, making conventional loans to borrowers who might otherwise use FHA and VA loan programs. Fannie Mae and Freddie Mac have an incentive to serve this market because lower-income borrowers who might otherwise turn to a government-backed loan program can help them meet the housing goals established for them by HUD. In addition, subprime mortgage originations have grown dramatically in recent years, as many lenders market to less creditworthy borrowers who in the past may have received a government-backed loan. Although Ginnie Mae continues to finance the bulk of government-backed loans, its share of the overall MBS market has declined substantially over the past 20 years.
As shown in figure 4, Ginnie Mae securities represented 42 percent of all new MBS issued in 1985, but only 7 percent in 2004. This drop in market share of new issuance is due not to a significant decline in Ginnie Mae’s MBS issuance, but rather to rapid growth in the rest of the market—Fannie Mae, Freddie Mac, and private label issuers, which we refer to as the “conventional” market for MBS. In 1985, Ginnie Mae MBS issuance was $46 billion, while the conventional market issued $64 billion. By 2004, Ginnie Mae issuance had grown to $127 billion, but issuance of conventional MBS had grown to $1.8 trillion. MBS issuance has risen among all segments of the conventional market. The rise in private label MBS issuance has been particularly steep in the last few years, rising from $136 billion in 2000 to $864 billion in 2004. Two factors have spurred the growth of the conventional MBS market: the increasing number of conventional mortgage originations and the growing proportion of these mortgages that are securitized. Mortgage lending in the conventional market has grown much more rapidly over the last 20 years than lending through FHA and VA programs. Conventional mortgage originations rose from an estimated $243 billion in 1985 to an estimated $2.8 trillion in 2004. In contrast, originations of FHA and VA loans rose from $42 billion to $129 billion during that period. In addition, the rate of securitization of conventional mortgages has risen rapidly over the last 20 years; by the end of 2004, almost half of outstanding mortgage debt was financed through securitization, according to the Bond Market Association. Ginnie Mae’s market share of outstanding MBS has also declined significantly over the last 20 years, falling from 54 percent in 1985 to 10 percent in 2004. Since 2000, Ginnie Mae’s volume of MBS outstanding has fallen from $612 billion to $453 billion in 2004, a drop of approximately 26 percent. The primary factor contributing to this decline has been the increase in borrowers who have refinanced out of FHA and VA loan programs into conventional loans. Falling interest rates and rising home prices have led to a boom in refinancing over the last 10 years, particularly from 1997 to 1999 and 2001 to 2004. At the peak of the refinancing boom in 2003, refinancings represented about 65 percent of mortgage originations. As some borrowers with mortgages insured by FHA and guaranteed by VA have built up equity in their homes, they have been able to refinance out of these programs into conventional loans that may offer more favorable and flexible terms and interest rates. This trend may have been facilitated to some extent by the increased availability of loans to borrowers who are less creditworthy. This has allowed some borrowers who would not otherwise have been able to borrow in the conventional market to do so rather than using FHA-insured and VA-guaranteed mortgage programs. The decline in the outstanding volume of FHA and VA loans has led to a corresponding decline in the outstanding volume of Ginnie Mae securities, which are mostly composed of those loans. To a lesser extent, lender repurchases of delinquent FHA-insured and VA-guaranteed loans in Ginnie Mae pools have also contributed to the decline in Ginnie Mae’s volume of outstanding MBS. Ginnie Mae’s policy prior to 2003 allowed lenders and servicers to repurchase loans that were in their Ginnie Mae pools if the borrower missed just one payment that remained unpaid for 4 consecutive months. 
According to Ginnie Mae, these loans often had a low risk of default; a loan may have had only one missed payment followed by a resumption of payments by the borrower. However, lenders were able to profit by repurchasing these loans for the remaining balance because, during an era of falling interest rates, the market value of the loans was more than the remaining balance (the sketch following this discussion illustrates this arithmetic). Data obtained from Ginnie Mae officials show that these repurchases of delinquent loans reached a peak in 2002, when they totaled $22 billion, and that they contributed to the decline in Ginnie Mae’s outstanding volume. To address this problem, Ginnie Mae announced a revision to its loan repurchase policy in November 2002. Under the new policy, for pools issued on or after January 1, 2003, servicers can repurchase delinquent loans only when no payment has been made for 3 consecutive months. Ginnie Mae officials as well as issuers we talked with said that these new policies appear to have curtailed repurchase activity. Ginnie Mae’s share of the government-backed mortgage market has been fairly constant. If other secondary market players substantially increased their market share of government-backed mortgages, borrowers would be unlikely to see higher interest rates or tighter credit immediately, because such players would need to offer products that were competitive with Ginnie Mae’s. However, a decline in the proportion of high-quality mortgages included in Ginnie Mae’s MBS could lower their overall credit quality, potentially raising the cost of servicing the underlying mortgages and thus the interest rates paid by borrowers. In addition, any decline in the volume of Ginnie Mae’s MBS could potentially reduce their liquidity, although it is unclear whether reduced liquidity is likely to be a significant concern in the foreseeable future. Finally, declines in Ginnie Mae’s outstanding volume would reduce its fee revenue from its MBS programs. Because Ginnie Mae’s program income exceeds its expenses, a drop in income could affect its contribution to reducing the federal budget deficit. As noted earlier, Ginnie Mae has consistently guaranteed MBS for the great majority of FHA and VA loans, but its share of the total MBS market has declined significantly since 1985. Borrowers of government-backed loan programs have benefited from the Ginnie Mae guarantee because it helps make such loans more accessible and keeps borrowers’ interest rates down. New issuance of Ginnie Mae MBS has remained fairly constant, generally ranging from $150 billion to $200 billion annually from 1998 to 2004. Ginnie Mae’s share of the MBS market for government-backed loans would likely decline only if other secondary market players such as the Federal Home Loan Banks, Fannie Mae, Freddie Mac, state and local government entities, and private label issuers chose to become more active in the securitization of these loans. In such a scenario, interest rates would probably not rise or credit tighten for borrowers, because such players would need to offer products that were competitive with Ginnie Mae’s, thus benefiting borrowers to a similar degree. As noted earlier, however, such a scenario is unlikely in the near future, as other secondary market participants generally appear to have chosen not to directly compete with Ginnie Mae because of the government guarantee.
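The profitability of the pre-2003 repurchases described above reduces to a present-value comparison: when market rates fall below a loan’s note rate, the loan’s remaining payments are worth more than its unpaid balance, so buying the loan out of the pool at that balance yields a gain. A minimal sketch, using the standard annuity formulas and entirely hypothetical numbers:

```python
# Why repurchasing a delinquent loan at its remaining balance was profitable
# when rates had fallen: the remaining payments, discounted at the lower
# market rate, are worth more than that balance. All numbers are hypothetical.

def monthly_payment(balance, annual_rate, months):
    r = annual_rate / 12
    return balance * r / (1 - (1 + r) ** -months)

def present_value(payment, annual_rate, months):
    r = annual_rate / 12
    return payment * (1 - (1 + r) ** -months) / r

balance = 100_000        # remaining balance, which is also the repurchase price
note_rate = 0.08         # rate on the existing loan
market_rate = 0.06       # prevailing rate after a decline
months_left = 300

pmt = monthly_payment(balance, note_rate, months_left)
value = present_value(pmt, market_rate, months_left)
print(f"monthly payment:                      ${pmt:,.2f}")
print(f"repurchase price (remaining balance): ${balance:,.2f}")
print(f"market value of remaining payments:   ${value:,.2f}")
print(f"lender's gain on repurchase:          ${value - balance:,.2f}")
```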
As we have seen, Ginnie Mae’s outstanding volume of MBS has declined in recent years because the outstanding volume of FHA and VA loans has fallen as growing numbers of borrowers refinance in the conventional market. However, those FHA and VA borrowers who are able to take advantage of refinancing options are generally the most creditworthy of the programs’ borrowers. The result has been a decline in the overall credit quality of FHA and VA loans in recent years, as indicated by increased default and foreclosure rates in government mortgage insurance and guarantee programs. As a result, the quality of the loans underlying Ginnie Mae’s securities has declined. Thus far, investors have not been directly affected by this development because of the government guarantee. However, the cost of servicing the government-backed loans in Ginnie Mae’s pools could rise in such a scenario, since managing delinquencies and the foreclosure process is the most costly component of servicing. According to Ginnie Mae, the servicing fees issuers are allowed to charge are sufficient to cover any significant increase in servicing costs resulting from declines in credit quality. However, increased servicing costs could result in smaller profits for Ginnie Mae issuers, potentially reducing lenders’ willingness to make government-backed loans and increasing borrowers’ interest rates. In addition, any increase in prepayment rates due to borrower defaults could reduce the price investors are willing to pay for Ginnie Mae MBS, which could also act to raise interest rates for borrowers. A market is said to be liquid if the instruments it trades can be bought by investors or sold in the markets quickly and easily with little impact on market prices. Liquid assets have relatively lower yields and higher prices than illiquid assets. One key factor affecting the liquidity of MBS is the size of the market in which they are traded—all other things being equal, larger markets are generally more liquid than smaller markets. In addition, standardized pools—that is, pools of mortgages with similar interest rates and terms—are generally more liquid than pools of mixed mortgage products, which cannot be traded as readily because they are more difficult to value and thus riskier. For this reason, Ginnie Mae I securities are more liquid than Ginnie Mae II securities (whose pools consist of loans with more variability). Market participants we spoke with provided mixed opinions about the current liquidity of Ginnie Mae securities. Some dealers said that Ginnie Mae securities were quite liquid and traded easily, while others noted that they were less liquid than other MBS, such as those issued by Fannie Mae and Freddie Mac. One institutional investor told us that Ginnie Mae securities that are traded in smaller volumes—such as those backed by hybrid ARMs—could face liquidity issues. Another noted that the liquidity of Ginnie Mae securities could be a concern for very large trades, such as those of more than $1 billion. Any reduced liquidity resulting from a continued decline in Ginnie Mae’s market share could have some effect on the costs to borrowers of government-backed loans. However, it is not clear how significant the decline would have to be before liquidity became a significant concern that materially affected the pricing of Ginnie Mae securities and thus interest rates for borrowers of government-backed loans. Ginnie Mae officials told us that their securities had at least adequate liquidity.
They noted, for example, that the bid-ask spread on Ginnie Mae securities was comparable with the spread for Fannie Mae securities, one indication that liquidity is not currently an issue. The officials said that if volume continued to decline, liquidity could become a significant concern in the future, although it is unknown at what levels of volume this would occur. Revenues from Ginnie Mae’s MBS guarantee programs exceed the cost of operating them. Since fiscal year 1985, the agency has not had to borrow from the U.S. government to finance its operations, and its excess funds go into a receipt account held as capital reserves. As shown in table 1, in fiscal year 2004 Ginnie Mae had total revenues of $815.5 million and expenses of $77.8 million. The excess of its revenues over expenses, net of interest income, is invested in U.S. government securities and reduces the amount that the Treasury must borrow from the public to finance government programs—that is, it reduces the deficit. In fiscal year 2004, this amount was $295 million. Most of Ginnie Mae’s revenue comes from MBS program income, which totaled $372.8 million in fiscal year 2004. Ginnie Mae charges issuers a guarantee fee that is based on the aggregate principal balance of an issuer’s outstanding MBS and collects commitment fees for the authority to pool mortgages into Ginnie Mae MBS. Ginnie Mae’s program income allows it to cover the expenses it incurs in carrying out its programs and initiatives, including the cost of hiring contractors, paying staff salaries and benefits, printing, and performing other administrative functions. Ginnie Mae also incurs credit-related expenses—for example, it must maintain reserves against losses and issuer defaults in order to ensure a ready source of funds to meet its guarantee of timely payment. At the end of fiscal year 2004, Ginnie Mae had reserves of about $10.4 billion. Ginnie Mae’s fee income is based on the principal balance of its securities portfolio, so the agency’s revenues largely depend on the volume of its outstanding securities. As we have seen, Ginnie Mae’s share of the MBS market has declined in the last 20 years. In fiscal years 2000 through 2004, Ginnie Mae’s principal balance outstanding also declined, falling from $603.4 billion to $453.4 billion and reducing program income from $408.2 million to $372.8 million (see fig. 5). As a result, during that period, the agency’s excess of revenues over expenses (net of interest), which reduces the federal budget deficit, declined from $347 million to $295 million. Ginnie Mae’s program income continues to exceed its expenses and, according to Ginnie Mae officials, is likely to do so for the foreseeable future. However, if its outstanding volume continued to decline, program income and excess revenues, which reduce the federal budget deficit, could also be expected to continue falling. Ginnie Mae faces challenges in a number of areas. First, it must respond to changes in the marketplace and meet the needs of its stakeholders. To meet this challenge, the agency has expanded its product offerings and taken other initiatives to maintain its viability. Second, Ginnie Mae must adequately disclose loan information that MBS investors need to assess prepayment risk. The agency has recently improved this disclosure, though these improvements are not yet complete. Third, Ginnie Mae must work within the limits of its commitment authority.
In 1999, it instituted procedures to ration its commitment authority when the agency faced the possibility of reaching the limit of its authority by year’s end. To help prevent the problem from recurring, Congress changed Ginnie Mae’s commitment authority cycle from 1 year to 2 years and could consider further steps. Fourth, inconsistencies and inaccuracies exist in some aspects of Ginnie Mae’s data systems, although measures to improve these systems are under way. Finally, given Ginnie Mae’s small staff and reliance on contractors, the agency faces the challenge of ensuring that its capacity to plan, manage, and oversee contractors is adequate. Ginnie Mae has faced and continues to face the challenge of fulfilling its mission of supporting government-backed loan programs in a changing market environment. Among the significant market changes over the last 20 years have been the growing availability of private mortgage insurance and subprime loans, rapid development of the conventional secondary mortgage market, alterations in the volume and characteristics of government-backed loan programs, and the proliferation of new mortgage loan products, such as hybrid ARMs. Ginnie Mae recently completed or has under way several initiatives that are likely to help it respond to the needs of its stakeholders in a changing marketplace, although additional efforts may be needed in some areas. Among the steps Ginnie Mae has taken are the following: As part of its Business Improvement Initiative, in October 2004 Ginnie Mae began a formal process of soliciting recommendations from business partners and other stakeholders to improve its MBS and Multiclass Securities programs. In March 2005, the agency publicly released the suggestions it had received, including, among others, changing technological processes and developing new securitization products. Ginnie Mae officials say they are currently in the process of evaluating the suggestions. Ginnie Mae played a role in developing FHA’s hybrid ARM products. Ginnie Mae and FHA officials say that they worked together to encourage Congress to permit FHA to insure hybrid ARMs, in large part because the agency wanted to remain competitive with conventional markets, in which such products had become increasingly popular. Ginnie Mae developed a securitization program for these products as Ginnie Mae II securities, and in 2004 FHA began offering 3-, 5-, 7-, and 10-year hybrid ARM products in addition to its standard 1-year ARM. In February 2005, Ginnie Mae began guaranteeing securities backed by RHS multifamily loans, which support affordable multifamily housing in rural areas. RHS officials told us that this created the first consistent secondary market for these loans and that Ginnie Mae’s involvement would increase access to these loans and would lower borrower costs by increasing lenders’ liquidity. The officials also noted that Ginnie Mae had actively supported RHS by ensuring that the multifamily loan program could be securitized as Ginnie Mae I securities. The Ginnie Mae II Program was created to provide issuers and investors with more flexibility in pooling different kinds of loans—such as adjustable rate mortgages—into Ginnie Mae securities. By their nature, Ginnie Mae II securities are less homogeneous than Ginnie Mae I securities. As a result, they are considered less predictable, and investors demand a higher yield from these securities. In 2003, the Ginnie Mae II product was restructured to make it more competitive.
Among other changes, the agency narrowed the spread on the note rates that could be included in the pools, so that the loans backing the securities would be more homogeneous. In addition, the range of servicing fees that issuers could charge was widened to provide more flexibility. As a result, Ginnie Mae says there is now a smaller gap in pricing between Ginnie Mae I and Ginnie Mae II securities. But one broker-dealer we spoke with complained that to ensure sufficient loan volume for a Ginnie Mae II pool, issuers sometimes must include mortgages that would otherwise qualify for a Ginnie Mae I. In July 2004, Ginnie Mae expanded its Targeted Lending Initiative, which was created to provide financial incentives for lenders to increase loan volumes and raise homeownership levels in underserved areas. Under the program, which began in 1996, Ginnie Mae reduced its guarantee fee by up to 50 percent for approved issuers that originate or purchase eligible loans in designated communities and place them in Ginnie Mae pools. The expansion brought additional areas into the program, including "colonias" along the Southwest border region and additional Renewal Communities and Urban Enterprise Zones designated by HUD. In September 2005, Ginnie Mae announced it was temporarily expanding the Targeted Lending Initiative further to include counties in the states of Alabama, Louisiana, and Mississippi that were declared federal disaster areas as a result of Hurricane Katrina. Ginnie Mae still faces certain barriers to financing government-backed loan programs. For example, VA and Ginnie Mae officials have expressed concern that recently enacted changes in the law authorizing certain hybrid ARM products in VA's loan guarantee program did not address a limitation that has made these products difficult to securitize. Although the Veterans Benefits Act of 2004 made certain modifications to the program's provisions for adjusting interest rates for VA's 5-, 7-, and 10-year hybrid ARM products, the act continued a restriction on annual rate adjustments (those made after the initial rate adjustment) to a maximum increase or decrease of 1 percentage point. While this restriction may benefit borrowers by limiting interest rate increases, Ginnie Mae and VA officials said that a 1 percentage point annual cap was inadequate to attract interest from investors who purchase such products. Further, the terms of VA's hybrid ARM products are no longer the same as the corresponding hybrid ARMs offered by FHA, bifurcating the market and making securities containing these types of loans less liquid. According to Ginnie Mae, this lack of liquidity results in higher interest rates for veterans and nonveterans alike. VA officials said that the capital markets and Ginnie Mae may not have been sufficiently consulted on this adjustment during the legislative process to ensure that provisions in the VA hybrid ARM program were consistent with the requirements of Ginnie Mae and conventional secondary markets. A similar situation occurred with respect to an FHA single-family insured ARM product. The fiscal year 2002 VA/HUD appropriations bill limited annual interest rate adjustments on FHA's hybrid ARMs to 1 percentage point if the initial interest rate term was fixed for 5 years or less and imposed a lifetime cap of 5 percentage points.
These caps were intended to assist FHA borrowers, but lenders and capital market participants expressed concern that Ginnie Mae securities backed by these ARMs would be unattractive to investors—and thus lenders—since equivalent products in the conventional market typically included annual caps of 2 percentage points and lifetime caps of 6 percentage points. In response, an amendment to the authorizing legislation, enacted in December 2003, made the annual cap applicable only to loans having a fixed term for the first three or fewer years—a change that FHA said was needed to meet the needs of home buyers, lenders, and the secondary mortgage market. Following the 2003 amendment, FHA issued an interim final rule in March 2005 that raised the cap on adjustments to annual interest rates for 5-year ARMs from 1 to 2 percentage points and raised the lifetime cap on interest rate adjustments for those loans to 6 percentage points. Ginnie Mae officials noted that these problems could have been avoided had Congress initially consulted more closely with capital market participants. Investors in Ginnie Mae securities do not face credit risk, since the mortgages underlying these securities are federally insured or guaranteed and because Ginnie Mae guarantees timely payment of principal and interest. However, MBS investors do face prepayment risk, because they are purchasing cash flows that can stop when borrowers pay their loans in full early. Mortgage loans are prepaid for several reasons, most commonly when the house is refinanced, sold, or destroyed, or when the borrower goes into foreclosure. Prepayment rates tend to increase in periods of declining interest rates, when borrowers have the opportunity to lower their interest payments by refinancing. When mortgages are prepaid, voluntarily or involuntarily, investors receive their principal, but not further interest payments. In an environment of declining interest rates, prepayments may force investors to reinvest prematurely at a lower interest rate and to incur transaction costs. Historically, the rate of prepayment for Ginnie Mae securities has been lower than for other MBS because borrowers of government-backed mortgages are generally first-time or low- to moderate-income home buyers who are less likely to be able to incur the cost of refinancing or relocating. According to research by securities trading firms, between 1980 and 1990 Ginnie Mae securities consistently prepaid at lower rates than their conventional counterparts. However, since that time, prepayment rates for conventional MBS have changed relative to those for Ginnie Mae MBS. Since 1990, Ginnie Mae's prepayment rates have been lower than those of conventional MBS in the initial 18 months to 2 years after loan origination. But after this initial period, as the loans seasoned, Ginnie Mae's prepayment rates have generally risen relative to those of conventional MBS. Ginnie Mae securities backed by seasoned loans are currently prepaying at a much faster rate than did similar securities during the 1990s. Three factors in particular seem to have influenced the increase in Ginnie Mae's rate of prepayment—refinancings, delinquencies, and repurchases. As explained earlier, expanded access to credit, rising home prices, and falling interest rates have allowed more FHA and VA borrowers to refinance into conventional loans. With the added equity built up in their homes, borrowers have been able to reduce their monthly costs by refinancing without paying the federal programs' insurance premiums.
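To make the refinancing incentive concrete, the following sketch applies the standard fixed-rate amortization formula to a hypothetical loan; the loan amount and interest rates are illustrative assumptions, not figures drawn from this report:

```python
# Illustrative only: standard fixed-rate amortization arithmetic showing why
# falling rates create an incentive to refinance and thus prepay. The loan
# amount and rates below are hypothetical, not drawn from the report.

def monthly_payment(principal, annual_rate, years):
    """Level monthly payment for a fully amortizing fixed-rate loan."""
    r = annual_rate / 12.0   # periodic (monthly) rate
    n = years * 12           # number of monthly payments
    return principal * r / (1.0 - (1.0 + r) ** -n)

old = monthly_payment(150_000, 0.08, 30)  # hypothetical FHA loan at 8 percent
new = monthly_payment(150_000, 0.06, 30)  # conventional refinance at 6 percent

print(f"Old payment: ${old:,.2f}")             # about $1,100.65
print(f"New payment: ${new:,.2f}")             # about $899.33
print(f"Monthly savings: ${old - new:,.2f}")   # about $201.32
```

On these assumed terms, a 2 percentage point drop in rates reduces the monthly payment by roughly $200—the kind of saving that drives the prepayment behavior described above.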
In addition, delinquency and default rates for FHA and VA loans—which have traditionally been higher than those for conventional loans—have been steadily increasing in recent years. The delinquency rate on all FHA mortgages increased from 6.7 percent in 1990 to 12.2 percent in 2004. By contrast, the delinquency rate for conventional mortgages has remained relatively stable and stood at 1.6 percent in 2003. Finally, as noted earlier, before July 2003 Ginnie Mae’s policy allowed loan servicers to repurchase loans from Ginnie Mae’s pools if a borrower missed only one payment and left it unpaid for 4 months. These repurchases, which peaked in 2002, caused a temporary acceleration in the prepayment rates of Ginnie Mae’s MBS. Market participants we met with expressed concerns about the accelerated rate of prepayment on Ginnie Mae securities in recent years. Institutional investors often employ complex models—which rely in part on detailed information about the underlying loan pools—to forecast prepayment rates and help price MBS. Investors we spoke with noted that predicting prepayment risk on Ginnie Mae securities had become increasingly difficult because of rapid shifts in the marketplace, such as the expansion in the availability of conventional credit and increases in FHA and VA delinquencies, and uncertainty about future developments. In the past, the securities industry has also expressed concerns that developing models to predict prepayment of Ginnie Mae MBS has been particularly difficult because Ginnie Mae has not always provided the same degree of detail on its loans as conventional securitizers. In written comments to Ginnie Mae, the Bond Market Association—a trade association representing securities dealers—said that while Ginnie Mae had begun providing more information than ever before about the mortgages backing its securities, there was still “significant room for improvement.” One broker-dealer noted to us that information was particularly lacking on hybrid ARM products in Ginnie Mae pools. A second broker-dealer said that additional information on geography and occupancy rates for multifamily loans would help better estimate the risk of delinquency—and thus prepayment—of securities backing those loans. Market participants also noted that having information on borrower credit scores would be useful. To address concerns about its disclosures, in January 2004 Ginnie Mae began its MBS Disclosure Initiative, which was designed to provide investors with additional information that would allow them to better forecast prepayment rates. Prior to the initiative, Ginnie Mae’s disclosures on the loans underlying its securities included such things as the weighted average age of the loan, the number of loans in the pool, the unpaid principal balance, and the average original loan size. With the initiative, the agency began providing expanded disclosures—at issuance—of loan data that it was already collecting and began disclosing new data items about FHA and VA single-family loan pools, including original loan-to-value ratios, loan purpose, property type, average original loan size, and year of origination. In addition, in September 2004 Ginnie Mae began updating its MBS disclosures every month instead of quarterly. 
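The "prepayment rates" that such models forecast are conventionally expressed as a single monthly mortality (SMM) and an annualized conditional prepayment rate (CPR). The sketch below shows this standard industry arithmetic with hypothetical pool balances; it illustrates the convention, not Ginnie Mae's own systems or disclosures:

```python
# Illustrative only: the standard single monthly mortality (SMM) and
# conditional prepayment rate (CPR) arithmetic that prepayment models of the
# kind described above build on. Pool balances are hypothetical.

def smm(prepaid_principal, beginning_balance, scheduled_principal):
    """Fraction of the pool balance prepaid in one month, beyond scheduled principal."""
    return prepaid_principal / (beginning_balance - scheduled_principal)

def cpr(smm_value):
    """Annualize a monthly SMM into a conditional prepayment rate."""
    return 1.0 - (1.0 - smm_value) ** 12

monthly = smm(prepaid_principal=1_000_000,
              beginning_balance=100_000_000,
              scheduled_principal=150_000)
print(f"SMM: {monthly:.4%}")       # about 1.00% of the balance prepaid this month
print(f"CPR: {cpr(monthly):.2%}")  # about 11.38% annualized prepayment rate
```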
Ginnie Mae said that in December 2005 it would begin disclosing additional details on the reasons for prepayments of the loans backing Ginnie Mae MBS, including the number of loans that were paid off in full by borrowers, repurchased by issuers because of delinquency, and liquidated due to foreclosure. Ginnie Mae officials told us that the recent changes made disclosures on Ginnie Mae securities comparable with those for Fannie Mae and Freddie Mac securities. In developing its annual budget, Ginnie Mae officials told us they must estimate the amount of the agency's commitment authority—the limit on the total dollar volume of securities that the agency can guarantee. The Office of Management and Budget reviews Ginnie Mae's commitment authority estimates before they are finalized and included in the President's budget request to Congress. Ginnie Mae estimates the amount of the commitment authority it will need for future years based on the actual authority used by the federal guarantee programs it served in the previous year. The agency also considers commitment authority allocations it actually made to issuers in the previous year and includes them as part of the estimate, adding an additional percentage to that estimate to cover unanticipated events in the marketplace. The Secretary of HUD is required by statute to notify Congress when Ginnie Mae has utilized 75 percent of its commitment authority and when HUD estimates that the agency will exhaust this authority before the end of a fiscal year. If Ginnie Mae exhausts the limit placed on its commitment authority, it must suspend issuance of new MBS until Congress provides additional authority. Under these circumstances, an issuer may either have its request returned or leave it with Ginnie Mae to be processed on a first-come, first-served basis after additional commitment authority is restored. In 1999, fearing it would reach the limit before the end of the year, Ginnie Mae instituted procedures to ration its commitment authority. It temporarily limited the approval of commitment requests to the amount estimated to cover issuer needs for no more than a 60-day period. According to industry participants we spoke with, this step was disruptive to lenders and issuers and caused concern that Ginnie Mae would not have the authority it needed to honor commitments it had already made. One trade association told us that this situation had resulted in some loss of credibility for Ginnie Mae. According to Ginnie Mae, the agency had not adequately estimated the demand for its guarantee in 1999, in part because of unexpectedly high levels of new construction and mortgage refinancing activity that year. Since that time, the agency has taken steps to help ensure that it is no longer in danger of reaching the limit of its commitment authority. Since 2002, the commitment authority Ginnie Mae has received as part of HUD's annual appropriations has been available for 2 years, meaning that Ginnie Mae can use "carryover" authority from the prior year to make current-year commitments. According to agency officials, this change from a 1- to a 2-year cycle has given Ginnie Mae more flexibility in planning how to use its commitment authority and should reduce the need to ration it again in the future.
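In simplified form, the estimation and notification mechanics described above can be sketched as follows. The buffer percentage and dollar amounts are hypothetical assumptions; the 75 percent notification threshold is the statutory trigger cited above:

```python
# Illustrative only: a simplified sketch of the commitment authority mechanics
# described above. The buffer percentage and dollar figures are hypothetical;
# the 75 percent utilization threshold is the statutory notification trigger.

NOTIFY_THRESHOLD = 0.75  # HUD must notify Congress at 75 percent utilization

def estimate_next_year_authority(prior_year_used, buffer_pct=0.10):
    """Base the request on prior-year usage plus a cushion for unanticipated demand."""
    return prior_year_used * (1.0 + buffer_pct)

def check_utilization(used, available):
    """Flag when utilization crosses the congressional notification threshold."""
    utilization = used / available
    if utilization >= NOTIFY_THRESHOLD:
        return f"Notify Congress: {utilization:.0%} of authority used"
    return f"OK: {utilization:.0%} of authority used"

request = estimate_next_year_authority(prior_year_used=180e9)
print(f"Estimated request: ${request / 1e9:.0f} billion")  # $198 billion
print(check_utilization(used=155e9, available=200e9))      # crosses the 75% threshold
```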
In addition, the actual commitment authority available to Ginnie Mae at any given time may be above the additional amount authorized annually, because, since fiscal year 2002, the agency has carried over unused authority from the prior year. Thus, as shown in figure 6, although Ginnie Mae's new commitment authority limit has been $200 billion each year since fiscal year 1999, the actual authority available for Ginnie Mae to use has been higher beginning in 2002. In fact, in fiscal year 2003, this additional carryover authority enabled Ginnie Mae to meet program demands. Having the ability to rely on unused authority carried over from prior years has meant that the agency has not had to ration or suspend issuer commitments since 1999. Thus, if Ginnie Mae exceeds its annual commitment limit for a particular year, it has the authority to do so, but only to the extent of its carryover authority. However, given the uncertainty of demand in the marketplace, carryover authority still may not be enough. Federal agencies often face difficulties estimating potential demand for loan guarantees, in part because the budget process requires them to forecast demand nearly 2 years in advance. Our 2005 report on the FHA and RHS loan guarantee programs discussed options that Congress could consider to prevent suspensions of those programs related to exhaustion of their commitment authority. Some of the options discussed in that report could be applicable to Ginnie Mae. For example, Congress could establish a higher limit on Ginnie Mae's commitment authority, although such a step could increase the government's exposure to risk. Congress could also require Ginnie Mae to provide more frequent updates on the amount of commitment authority it has used. This would involve little additional administrative burden and would provide additional and timelier information for determining whether to provide supplemental commitment authority before the end of a fiscal year. Because both of these options could have various implications, their specific impacts would depend on how the changes were structured and implemented. In November 2002, officials of First Beneficial Mortgage Corporation, one of Ginnie Mae's approved issuers, were convicted of engaging in fraudulent pooling practices. According to information from HUD's Office of the Inspector General (OIG), the company used forged documents to pool loans that were collateralized with nonexistent properties and that were not insured or guaranteed by a federal agency, as required of Ginnie Mae securities. Ginnie Mae declared First Beneficial in default and incurred a loss of approximately $20 million. HUD's OIG, among others, investigated the First Beneficial case and subsequently audited Ginnie Mae's internal controls, completing its report in March 2003. The investigation and audit identified inconsistencies and inaccuracies in Ginnie Mae's data systems and other internal control weaknesses. Most notably, the OIG found that Ginnie Mae, its issuers, and the agencies it serves did not all use a single common and unique case number as the primary management control for identifying and tracking loans in the MBS pools. Instead, each entity assigned its own tracking number, making comparisons of loan data difficult and hindering efforts to ensure that the loans in Ginnie Mae's pools were federally insured or guaranteed.
The OIG's report also found that Ginnie Mae did not have adequate controls in place to ensure the reliability of its data—for example, it could not ensure the accuracy of its data entry procedures, had not sufficiently verified all loans to ensure they were federally insured or guaranteed, and did not make sure that all issuers were in fact eligible to issue Ginnie Mae securities. As a result, Ginnie Mae potentially could not identify ineligible loans in its pools. Ginnie Mae has taken several measures to address many of the internal control and data weaknesses identified in the HUD OIG's reports. For example, the agency has developed and implemented policies, controls, and training designed to make data entry more accurate and is working to better integrate its multiple data systems. Further, 99 percent of Ginnie Mae's portfolio is made up of loans backed by FHA and VA, and the agency now matches the loans in its data systems against those in FHA's and VA's databases. However, Ginnie Mae, FHA, and VA still do not use the same case numbers, a change that would eliminate the need for time-consuming matching. Ginnie Mae officials told us that they are evaluating the alignment of case numbers as part of an ongoing Business Process Improvement Initiative. However, such a change would be difficult because it would require systems changes for both Ginnie Mae and its issuers. OIG officials told us that Ginnie Mae had largely addressed the deficiencies they had observed in the loan data and that the OIG was generally satisfied with the agency's efforts to address internal control weaknesses. However, we identified additional data integrity issues during our review. For example, Ginnie Mae was initially unable to provide us with a breakdown of loans in its portfolio—that is, percentages of FHA, VA, RHS, and PIH loans. These basic data could not be provided, the agency said, because a programming error had resulted in the underreporting of FHA loans and the overreporting of VA loans. Ginnie Mae officials acknowledged that their data systems should be improved and that they do not have easy access to as much of their information as they should. Ginnie Mae operates with a small staff—in fiscal year 2004, the agency had about 66 employees—and contracts out most of its transactional and support work. Ginnie Mae has stated that this centralized management model is designed to allow a relatively small group of agency employees to manage a large number of outsourced projects, improving the quality, timeliness, and consistency of their work. In fiscal year 2004, approximately 81 percent of Ginnie Mae's activities were contracted out, including key operations such as accounting and technical support, Ginnie Mae servicing of defaulted loans, internal control reviews, preparation of assessment rating tools, issuer compliance reviews, and information systems management. Concerns about Ginnie Mae's oversight of its contractors have existed for several years. Our 1993 review of Ginnie Mae's staffing found that the agency was not adequately monitoring its contractors' activities. At that time, the largest contractor told us the agency did not have the resources to adequately review its contractors' work, and Ginnie Mae itself acknowledged that it did not. Similarly, in a 1997 review of HUD's contracting activity, HUD's OIG found that Ginnie Mae was not in compliance with contracting and procurement procedures.
The review found that in some instances Ginnie Mae contractors were performing tasks that were inherently governmental functions and that aspects of the bidding process hindered competition. At that time, Ginnie Mae had its own contracting officer; however, as of January 1999, Ginnie Mae began using HUD's contracting officer and its staff to award contracts. Internal control issues continue to be a potential concern at Ginnie Mae, as evidenced by losses due to fraud in the First Beneficial case, the HUD OIG's 2003 report, and our own findings of problems with some aspects of the agency's management information systems. Because Ginnie Mae has a small staff and contracts out most of its operations, appropriate contract management and oversight are inherently key components in improving the agency's data systems and internal controls. Unlike at the time of the 1997 OIG report, Ginnie Mae's contracting staff are now supplemented by assistance from HUD's contracting staff. In addition, the agency has initiatives under way to improve its information technology infrastructure and to streamline its business processes, some of which involve contract management. For example, Ginnie Mae officials told us that in 2002 the agency created the Procurement Management Division to more stringently oversee existing contracting and procurement procedures and to provide additional training for staff in contract planning and development. In addition, Ginnie Mae officials say they have built incentives into their performance rating system to increase staff accountability for contract planning and oversight and to foster effective contract planning and monitoring. Ginnie Mae's staff of about 66 are responsible for performing inherently governmental functions and for overseeing the contractors that perform most of the agency's operations. Citing a 2004 HUD resource management study that found that Ginnie Mae had sufficient staff to perform contract administration functions, Ginnie Mae officials told us they believe that their staffing levels are adequate. But given its reliance on contractors, Ginnie Mae should continue to focus on ensuring that staff have the training, qualifications, and capabilities they need so that contracts are planned, monitored, and executed appropriately. Despite its declining share of the overall MBS market, Ginnie Mae continues to serve its key public policy goal of providing a strong secondary market outlet for federally insured and guaranteed housing programs, helping to improve their access and affordability for low- to moderate-income borrowers. The decline in Ginnie Mae's share of the overall MBS market should not necessarily be a major source of concern, since it is largely a function of the rapid growth in the conventional MBS market. Unlike firms in the conventional market, however, Ginnie Mae has relatively little control over the volume of its securities, which depends on the volume of FHA and VA loan programs. Changes in the volume and market share of government-backed housing loans are largely the result of policies and decisions made by Congress and the agencies themselves. Improvements to Ginnie Mae's product line benefit government-backed loan programs by making them more liquid, but the impact on these programs' volume is relatively marginal. A further decline in Ginnie Mae's volume could have certain implications related to credit quality, liquidity, and the agency's contribution to offsetting the federal budget deficit.
But just how much Ginnie Mae's volume could decline in the near future is unclear, as is the magnitude of any potential effects on the market or federal budget. Ginnie Mae faces the challenge of adjusting its product mix and policies to address changes in the marketplace while continuing to meet the needs of both borrowers who rely on affordable housing programs and industry stakeholders such as issuers and investors. Ginnie Mae has added a number of new products over the years, has made a serious effort to solicit feedback from its business partners, and has expanded its disclosures for investors. The agency has also expanded the types of loans that Ginnie Mae securities can finance, and RHS and PIH officials have commended Ginnie Mae's proactive efforts to assist their loan programs. But some changes remain beyond its control—for instance, terms in FHA and VA hybrid ARM products that have limited investor interest. Closer consultation by lawmakers with Ginnie Mae and capital market participants could help ensure that congressionally mandated provisions of loan programs are consistent with Ginnie Mae and conventional secondary market requirements. Ginnie Mae also faces the challenge of avoiding the need to ration its commitment authority, which can cause disruption among secondary market participants and harm Ginnie Mae's credibility. Beginning in 2002, Congress made the agency's commitment authority available for 2 years rather than 1 year to provide more flexibility, but Ginnie Mae could again reach the limit of its commitment authority in the future. Other options to address this problem include raising Ginnie Mae's commitment authority or requiring the agency to notify Congress when it appears the agency may reach its cap. Each of these measures could have various implications that would need to be considered. Like any agency, Ginnie Mae faces challenges in managing its internal operations in an efficient and cost-effective manner, and in ensuring that appropriate internal controls are in place. This may be especially challenging for Ginnie Mae because it operates with a small staff of about 66 and contracts out most of its operations. Certain weaknesses in Ginnie Mae's data integrity, along with losses resulting from fraudulent activity in the First Beneficial case, indicate the need for continued improvements in data systems and internal controls. Ginnie Mae has taken some important steps on these issues and has ongoing initiatives, such as its Business Process Improvement Plan. However, given certain data integrity issues we identified, the recency of the First Beneficial case, and the fact that Ginnie Mae's business plan was only recently approved, it is too early to assess the results of Ginnie Mae's recent efforts. Finally, given its reliance on contractors to carry out most of its operations, Ginnie Mae will need to pay particular attention to ensuring that its staff have sufficient resources, training, and qualifications so that the agency's contracts are planned, monitored, and executed appropriately. On behalf of HUD, Ginnie Mae provided written comments on a draft of this report, which are reprinted in appendix II. Ginnie Mae agreed with the report's analysis of the challenges it faces and with the report's findings on initiatives Ginnie Mae has taken to address these challenges. It also agreed with our observations related to the importance of improving Ginnie Mae's data systems and maintaining effective contract management.
In addition, Ginnie Mae provided us with technical comments, which we have incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Secretary of Housing and Urban Development. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our report objectives were to evaluate (1) the state of Ginnie Mae's market share and guarantee volume, (2) the potential implications of changes in Ginnie Mae's market share and guarantee volume, and (3) challenges Ginnie Mae faces in fulfilling its mission and the steps that have been or could be taken to address these challenges. To assess the state of Ginnie Mae's market share and guarantee volume, we obtained data on issued and outstanding mortgage-backed securities (MBS) from the agency's Integrated Pool Management System and Portfolio Analysis Display System, which obtains its source data from Ginnie Mae's Mortgage-Backed Securities Information System. We tested the reliability of these data by comparing them within the two data systems and with data from the 2005 Mortgage Market Statistical Annual and the Bond Market Association—sources used widely in the industry to analyze MBS activity. We also compared loan data provided by Ginnie Mae with data maintained by the Department of Veterans Affairs (VA), Rural Housing Service (RHS), and the Office of Public and Indian Housing (PIH) within the Department of Housing and Urban Development (HUD). Our initial comparisons showed significant discrepancies between Ginnie Mae's source data and those of industry sources. Because Ginnie Mae's MBS issuance and agency loan endorsement do not occur simultaneously, a lag exists between the date the loan is endorsed and the date Ginnie Mae is recorded as guaranteeing its securitization. Thus, to provide accurate information on Ginnie Mae's market share and volume for a given point in time, individual loans must be matched to the Ginnie Mae MBS in which they were pooled. When we began our review, no data for VA, RHS, or PIH loans had been matched with their pool, and data for Federal Housing Administration (FHA) loans had been matched only since 2001. At our request, Ginnie Mae completed the matching of FHA data from 1998 to 2004. Our initial comparison of the portion of Ginnie Mae's MBS portfolio collateralized by each loan program—that is, by FHA, VA, RHS, and PIH—showed discrepancies as well. As previously discussed, Ginnie Mae could provide us only with estimated percentages because a programming error in the system resulted in the underreporting of FHA loans and the overreporting of VA loans. As a result of our request, Ginnie Mae noticed the error and corrected it, and we were able to obtain accurate data on the percentage of loans from each program that were used to collateralize Ginnie Mae MBS. With the corrections Ginnie Mae made, we found the data to be reliable for our purposes.
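The matching described above is, in essence, a join of loan-level records to pool-level records on a shared identifier. The sketch below illustrates the general approach with hypothetical field names and records; as noted, the absence of a single common case number across entities is precisely what made the actual reconciliation difficult:

```python
# Illustrative only: matching endorsed loans to the Ginnie Mae MBS pools in
# which they were securitized. Field names and records are hypothetical.

loans = [  # loan-level records, e.g., from an insuring agency such as FHA
    {"case_number": "A-001", "program": "FHA", "endorsed": "2003-04"},
    {"case_number": "A-002", "program": "VA",  "endorsed": "2003-06"},
    {"case_number": "A-003", "program": "FHA", "endorsed": "2003-09"},
]

pools = {  # pool-level records keyed by the loan identifier, from Ginnie Mae
    "A-001": "GNMA-pool-7001",
    "A-003": "GNMA-pool-7042",
}

matched, unmatched = [], []
for loan in loans:
    pool = pools.get(loan["case_number"])
    (matched if pool else unmatched).append((loan["case_number"], pool))

print("Matched:", matched)      # loans whose securitization pool is known
print("Unmatched:", unmatched)  # loans requiring manual reconciliation
```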
To address all of the objectives, we spoke with and gathered relevant documents from secondary market participants, including five Ginnie Mae-approved issuers and five dealers/institutional investors in Ginnie Mae securities. Among other things, we discussed with them their perceptions of Ginnie Mae and its products and their reasons for investing in or issuing Ginnie Mae securities rather than other MBS products. The issuers were judgmentally selected and represented more than 46 percent of the MBS Ginnie Mae issued in 2003. Three of the issuers focused on single-family FHA loans and the remaining two on multifamily and VA loans. Dealers/institutional investors were also judgmentally selected; among them were the largest broker-dealers of Ginnie Mae MBS, Real Estate Mortgage Investment Conduits, and Platinum securities. We also interviewed and obtained documentation from representatives of secondary market participants that may compete with Ginnie Mae, including Fannie Mae, Freddie Mac, the National Council for State Housing Finance Agencies, and the Federal Home Loan Banks of Chicago and Seattle. We also interviewed representatives of and reviewed documents from Ginnie Mae, HUD's FHA and PIH programs and its Office of the Inspector General (OIG), VA, RHS, and the Federal Housing Finance Board. In addition, we spoke with relevant trade associations, including the Bond Market Association, National Association of Home Builders, Mortgage Bankers Association, and National Association of Realtors. We conducted a literature search and reviewed Ginnie Mae's legislative history; relevant laws and regulations; budget documents; performance and annual reports; guidance; and studies and reports by HUD's OIG and others. We conducted our work in Washington, D.C., and Boston from October 2004 through September 2005 in accordance with generally accepted government auditing standards. In addition to the contact named above, Jason Bromberg, Assistant Director; Heather Atkins; Daniel Blair; Christine Bonham; Diane Brooks; Emily Chalmers; William Chatlos; Carlos Diz; Austin J. Kelly; Marc Molino; Mitchell B. Rachlis; Paul Thompson; and Franklyn Yao made key contributions to this report.
Ginnie Mae's total volume has declined in recent years, however, and its share of the overall MBS market has fallen from 42 percent of new securities in 1985 to 7 percent in 2004. This drop is largely the result of the decline in the market share of the FHA and VA loan programs and the concurrent rise in the securitization of non-government-backed mortgages. Further declines in Ginnie Mae's volume could potentially have implications for borrowers, the liquidity of its securities, and federal revenues. For example, Ginnie Mae's securities could become less liquid, although it is unclear at what levels of volume this would occur. In addition, Ginnie Mae's program revenues could decline if its volume decreased. In fiscal year 2004, program revenues exceeded expenses by $295 million, which helped reduce the federal budget deficit. Ginnie Mae faces a number of challenges in responding to changes in the marketplace, meeting stakeholders' needs, and managing its operations, and the agency has been taking steps to address these challenges. For example, it has expanded its product mix to reach more borrowers and has begun disclosing more information on loans underlying its securities to help investors better predict risk. GAO and others have identified opportunities for improvement in Ginnie Mae's data integrity and internal controls. The agency has begun addressing these issues, but it contracts out most of its operations, so ensuring that it has sufficient staff capabilities to plan, monitor, and manage its contracts is essential. |
If done correctly, investments in IT have the potential to make organizations more efficient in fulfilling their missions. For example, we recently reported that Defense officials stated that an IT system supporting military logistics has improved the organization's performance by providing real-time information about road conditions, construction, incidents, and weather to facilitate rapid deployment of military assets. However, as we have previously reported, investments in federal IT too frequently result in failed projects that incur cost overruns and schedule slippages while contributing little to mission-related outcomes. For example: In January 2011, the Secretary of Homeland Security ended the Secure Border Initiative Network program after obligating more than $1 billion for the program because it did not meet cost-effectiveness and viability standards. Since 2007, we have identified a range of issues and made several recommendations to improve this program. For example, in May 2010, we reported that the final acceptance of the first two deployments had slipped from November 2009 and March 2010 to September 2010 and November 2010, respectively, and that the cost-effectiveness of the system had not been justified. We concluded that DHS had not demonstrated that the considerable time and money being invested to acquire and deploy the program were a wise and prudent use of limited resources. As a result, we recommended that the department (1) limit near-term investment in the program, (2) economically justify any longer-term investment in it, and (3) improve key program management disciplines. This work contributed to the department's decision to cancel the program. In February 2011, the Office of Personnel Management canceled its Retirement Systems Modernization program after several years of trying to improve the implementation of this investment. According to the Office of Personnel Management, it spent approximately $231 million on this investment. We issued a series of reports on the agency's efforts to modernize its retirement system and found that the Office of Personnel Management was hindered by weaknesses in several important management disciplines that are essential to successful IT modernization efforts. Accordingly, we made recommendations in areas such as project management, organizational change management, testing, and cost estimating. In May 2008, an Office of Personnel Management official cited the issues that we identified as justification for issuing a stop work order to the system contractor, and the agency subsequently terminated the contract. In December 2012, Defense canceled the Air Force's Expeditionary Combat Support System after having spent more than a billion dollars and missing multiple milestones, including failure to achieve deployment within 5 years of obligating funds. We issued several reports on this system and found that, among other things, the program was not fully following best practices for developing reliable schedules and cost estimates. Agencies have reported that poor-performing projects have often used a "big bang" approach—that is, projects that are broadly scoped and aim to deliver functionality several years after initiation.
For example, in 2009 the Defense Science Board reported that Defense's acquisition process for IT systems—which was rooted in the "waterfall" development model—was too long, ineffective, and did not accommodate the rapid evolution of IT. The board reported that the average time to deliver an initial program capability for a major IT system acquisition at Defense was over 7 years. Also in 2009, VA's former chief information officer (CIO) reported that many of its projects exceeded cost estimates by more than 50 percent and missed scheduled completion dates by more than a year. That official concluded that VA needed to make substantial changes to its acquisition process in order to eliminate project failures associated with the "big bang" approach. One approach to reducing the risks from broadly scoped, multiyear projects is to divide investments into smaller parts—a technique long advocated by Congress and OMB. (See Clinger-Cohen Act of 1996, Pub. L. No. 104-106 § 5202, 110 Stat. 186, 690 (1996), codified at 41 U.S.C. § 2308; see also 48 C.F.R. § 39.103 (Federal Acquisition Regulation); OMB, Management of Federal Information Resources, Circular No. A-130 Revised.) By following this approach, agencies can potentially deliver capabilities to their users more rapidly, giving them more flexibility to respond to changing agency priorities; increase the likelihood that each project will achieve its cost, schedule, and performance goals; obtain additional feedback from users, increasing the probability that each successive increment and project will meet user needs; more easily incorporate emerging technologies; and terminate poorly performing investments with fewer sunk costs. More recently, in its 2010 IT Reform Plan, OMB called for IT programs to deliver functionality at least every 12 months and complete initial deployment to end users no later than 18 months after the start of the program. In 2011, as part of its budget guidance, OMB first recommended that projects associated with major IT investments deliver functionality every 6 months. OMB's latest guidance now makes this mandatory; specifically, in 2012, OMB began requiring that functionality be delivered at least every 6 months. Over the last three decades, Congress has enacted several laws to assist agencies and the federal government in managing IT investments. For example, the Paperwork Reduction Act of 1995 requires that OMB develop and oversee policies, principles, standards, and guidelines for federal agency IT functions, including periodic evaluations of major information systems. In addition, to assist agencies in managing their investments, Congress enacted the Clinger-Cohen Act of 1996. Among other things, the act requires agency heads to appoint CIOs and specifies many of their responsibilities. With regard to IT management, CIOs are responsible for implementing and enforcing applicable governmentwide and agency IT management principles, standards, and guidelines; assuming responsibility and accountability for IT investments; and monitoring the performance of IT programs and advising the agency head whether to continue, modify, or terminate such programs. Additionally, with regard to incremental development, Clinger-Cohen calls for provisions in the Federal Acquisition Regulation that encourage agencies to structure their IT contracts such that the capabilities are delivered in smaller increments.
The Federal Acquisition Regulation provisions are also to provide, to the maximum extent practicable, that the increment should be delivered within 18 months of the contract solicitation. As set out in these laws, OMB is to play a key role in helping federal agencies manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. Within OMB, the Office of E-Government and Information Technology, headed by the Federal CIO, directs the policy and strategic planning of federal IT investments and is responsible for oversight of federal technology spending. In carrying out its responsibilities, OMB uses several data collection mechanisms to oversee federal IT spending during the annual budget formulation process. Specifically, OMB requires federal departments and agencies to provide information related to their IT investments (called exhibit 53s) and capital asset plans and business cases (called exhibit 300s). Exhibit 53. The purpose of the exhibit 53 is to identify all IT investments—both major and nonmajor—and their associated costs within a federal organization. Information included in agency exhibit 53s is designed, in part, to help OMB better understand agencies' spending on IT investments. Exhibit 300. The purpose of the exhibit 300 is to provide a business case for each major IT investment and to allow OMB to monitor IT investments once they are funded. An IT investment may include one or more projects that are to develop, modernize, enhance, or maintain a single IT asset or group of IT assets with related functionality. Agencies are required to provide information on each major investment's projects, including cost, schedule, and performance information. For example, in order to measure compliance with its requirement that projects deliver functionality in 6-month cycles, OMB requires agencies to break their projects into activities and describe when the activities are to deliver functionality. OMB has implemented a series of initiatives to improve the oversight of underperforming investments and more effectively manage IT. These efforts include the following: IT Dashboard. In June 2009, to further improve the transparency into and oversight of agencies' IT investments, OMB publicly deployed the IT Dashboard. As part of this effort, OMB issued guidance directing federal agencies to report, via the Dashboard, the performance of their IT investments. Currently, the Dashboard publicly displays information on the cost, schedule, and performance of over 700 major federal IT investments at 26 federal agencies. Further, the public display of these data is intended to allow OMB, other oversight bodies, and the general public to hold government agencies accountable for results and progress. TechStat reviews. In January 2010, the Federal CIO began leading TechStat sessions—face-to-face meetings to terminate or turn around IT investments that are failing or are not producing results. These meetings involve OMB and agency leadership and are intended to increase accountability and transparency and improve performance. For example, the Federal CIO testified in June 2013 that he holds TechStat meetings on large investments that are not being acquired incrementally. More recently, OMB empowered agency CIOs to hold their own TechStat sessions within their respective agencies.
In doing so, OMB has called for agencies to use their TechStat processes to identify investments that are not being acquired incrementally and undertake corrective actions. IT Reform Plan. In December 2010, OMB released its 25-point plan to reform federal IT. This document established an ambitious plan for achieving operational efficiencies and effectively managing large-scale IT programs. In particular, as part of its effort to effectively manage IT acquisitions, the plan calls for federal IT programs to deploy functionality in release cycles no longer than 12 months, and, ideally, less than 6 months. The plan also identifies key actions that can help agencies implement this incremental development guidance, such as working with Congress to develop IT budget models that align with incremental development, and issuing contracting guidance and templates to support incremental development. In April 2012, we reported on OMB's efforts to implement the actions called for in its IT Reform Plan and found that it had partially completed work on two key action items relating to incremental development—issuing contracting guidance and templates to support incremental development and working with Congress to create IT budget models that align with incremental development. With respect to the contracting guidance and templates, we found that, although OMB worked with the IT and acquisition community to develop guidance, it had not yet issued this guidance or the templates. Regarding the IT budget models, we found that, although OMB worked to promote ideas for IT budget flexibility (such as multiyear budgets or revolving funds) with congressional committees, there has not yet been any new legislation to create budget models, and OMB has not identified options to increase transparency for programs that would fall under these budgetary flexibilities. We recommended that the Director of OMB ensure that all action items called for in the IT Reform Plan are completed. OMB agreed with this recommendation. OMB has since issued contracting guidance for incremental development, but, as of January 2014, a staff member from the OMB Office of E-Government and Information Technology stated that activities to address the development of new IT budget models are still ongoing. Additionally, in 2011, we identified seven successful investment acquisitions and nine common factors critical to their success. Specifically, we reported that department officials identified seven successful investment acquisitions, in that they best achieved their respective cost, schedule, scope, and performance goals. Notably, all of these were smaller increments, phases, or releases of larger projects. For example, the Defense investment in our sample was the seventh increment of an ongoing investment; the Department of Energy system was the first of two phases; the DHS investment was rolled out to two locations prior to deployment to 37 additional locations; and the Transportation investment had been part of a prototype deployed to four airports.
In addition, common factors critical to the success of three or more of the seven investments were: (1) program officials were actively engaged with stakeholders, (2) program staff had the necessary knowledge and skills, (3) senior department and agency executives supported the programs, (4) end users and stakeholders were involved in the development of requirements, (5) end users participated in testing of system functionality prior to formal end user acceptance testing, (6) government and contractor staff were stable and consistent, (7) program staff prioritized requirements, (8) program officials maintained regular communication with the prime contractor, and (9) programs received sufficient funding. These critical factors support OMB's objective of improving the management of large-scale IT acquisitions across the federal government, and wide dissemination of these factors could complement OMB's efforts. Further, in 2012, we identified 32 practices and approaches as effective for applying Agile software development methods to IT projects. Officials from five agencies who had used Agile methods on federal projects cited beneficial practices, such as obtaining stakeholder and customer feedback frequently, managing requirements, and ensuring staff had the proper knowledge and experience. We also identified 14 challenges with adapting and applying Agile in the federal environment, including agencies having difficulty with committing staff to projects, procurement practices that did not support Agile projects, and compliance reviews that were difficult to execute within an iterative time frame. We noted that the effective practices and approaches identified in the report, as well as input from others with broad Agile experience, could help agencies in the initial stages of adopting Agile. Since 2000, OMB Circular A-130 has required agencies to (1) develop policies that require their major investments to deliver functionality incrementally and (2) ensure that investments comply with their policies. In addition, as part of its recent budget guidance, OMB has defined how often investments must deliver functionality. Specifically, each project associated with major IT investments is to deliver functionality at least once every 6 months. Further, through the President's Budget, OMB has provided additional guidance on how incremental development is to be enforced by requiring agencies to use their TechStat processes to identify investments that are not being acquired incrementally and undertake corrective actions. Although OMB's guidance requires agencies to develop incremental development policies, it does not specify what those policies are to include. Absent this detail, and in reviewing the previously mentioned guidance and leading practices on institutionalizing processes throughout an organization (SEI, CMMI-ACQ, Version 1.3 (November 2010)), we identified three components that agencies should include in their policies in order to effectively carry out OMB's incremental development guidance: (1) require that all projects associated with major IT investments deliver functionality in cycles that are not more than 6 months long; (2) define functionality—that is, what the projects are to deliver at the end of a 6-month cycle; and (3) define a process for ensuring that major IT investments and their projects deliver functionality every 6 months. This should include identifying investments that are not being acquired incrementally through agency TechStat processes and undertaking corrective actions.
Although all five selected agencies developed policies that address incremental development, the majority of the agencies' policies did not fully address all three components. Specifically, only VA fully addressed the three components; Defense partially addressed the majority of them; and HHS, DHS, and Transportation did not address the majority of components. Table 1 provides a detailed assessment of each agency's policies against the three key components of an incremental development policy. In addition, a discussion of each policy component follows the table. ●=Fully met—the agency provided evidence that addressed the component. ◐=Partially met—the agency provided evidence that addressed about half or a large portion of the component. ○=Not met—the agency did not provide evidence that addressed the component or provided evidence that minimally addressed the component. Require delivery of functionality every 6 months. Only one of the five agencies—VA—fully addressed this policy component by clearly requiring that its projects be completed in increments that must not exceed 6 months. The other four agencies did not address this policy component. Three of these agencies—Defense, HHS, and DHS—developed policies that promote the use of incremental development, but these policies do not require functionality to be delivered every 6 months. Specifically, with regard to Defense, although the department's acquisition framework calls for investments to use incremental development, its policy on IT budget submissions encourages investments to deliver functionality every 12-18 months—not every 6 months. According to officials of the Defense Office of the CIO, 12-18 month incremental development is better aligned with the acquisition framework that many of its IT acquisitions have used. For HHS, although its policy requires incremental development, it recommends—but does not require—that all projects deliver functionality every 3-6 months. According to an HHS Office of the CIO official, HHS does not require its projects to deliver functionality every 6 months because it wants to provide projects with flexibility. For DHS, in June 2012, the former DHS CIO issued a draft policy encouraging IT projects to move towards Agile development approaches. Additionally, with respect to financial systems modernization programs, DHS's policy calls for providing financial capabilities to the customer in small increments of 6-12 months. According to DHS officials representing the Office of the CIO, the department is currently developing a departmentwide policy on incremental development; however, they said that the draft currently encourages investments to deliver the first release 18 months after program initiation and thereafter deploy functionality in cycles no longer than 12 months, but ideally less than 6 months. Lastly, Transportation also did not address the component because, although Transportation has a policy that calls for projects to deliver functionality every 6 months, officials from the Office of the CIO explained that this policy does not apply to the Federal Aviation Administration (FAA). These officials explained that how often FAA projects deliver functionality depends on their size, scope, risk, visibility, and interdependencies with other programs. Define functionality. Only one of the five agencies—VA—fully addressed this policy component.
VA has a policy that defines what it means to deliver functionality—both in terms of what constitutes an increment and what should be delivered at the end of an increment. For example, VA defines an increment as the segment of the project that produces, in a cycle of 6 months or less, a deliverable that can be used by customers in an operational environment. For the agency that partially addressed the component—Defense—although it has defined functionality for purposes of its acquisition framework, it has not defined the functionality that its IT budget submission policy encourages projects to deliver every 12-18 months. The department stated that it is working with OMB to define this term. Lastly, three agencies—HHS, DHS, and Transportation—had not defined functionality in terms of what they expected projects to deliver at the end of a development cycle. Officials representing these agencies' respective Office of the CIO acknowledged that they have not defined functionality. These officials told us that they would update their policies to define the term, but officials from HHS and Transportation did not provide a time frame for doing so. DHS officials stated that, although they did not have a definitive time frame, they hoped to finalize the policy in 2014. Until the agencies define this term, investments may create definitions that are inconsistent with the intent of OMB's policy. Define a process for enforcing compliance. Only one of the five agencies—VA—fully addressed this policy component by defining processes for ensuring that increments are structured to deliver functionality every 6 months or less and for reviewing projects that fall behind schedule. In particular, VA's policy requires the agency to hold a TechStat session when any increment delivery date has been or will be missed. Two agencies partially addressed this component—Defense and HHS—because, although they established processes for ensuring that IT is acquired incrementally, these processes do not (1) require enforcement of incremental development within the specific time frames consistent with OMB guidance (12-18 months for Defense and 3-6 months for HHS) or (2) include using TechStat processes to identify investments that are not being acquired incrementally. Finally, two agencies—DHS and Transportation—have not established processes for enforcing compliance with their incremental development policies. Officials from their respective Office of the CIO told us that they are updating their policies to address this issue. Transportation officials representing the Office of the CIO stated that the department would update its policy later this year; DHS Office of the CIO officials stated that, although they did not have a definitive time frame, they hoped to finalize their policy in 2014. Agencies cited several underlying reasons that contributed to these weaknesses: (1) they were not always aware of OMB guidance, (2) they did not believe that the guidance was realistic, and (3) they said the guidance was not always clear. Regarding agency awareness of the guidance, since the 2010 IT Reform Plan, OMB has communicated changes to incremental development requirements, such as the change from 12 to 6 months, through budget guidance. However, selected agency officials said they were not always aware of this guidance. For example, DHS Office of the CIO officials told us that they did not know about OMB's requirement to deliver functionality every 6 months.
Additionally, Transportation officials representing the Office of the CIO were not aware that in 2012 OMB had changed its guidance from recommending to requiring that projects deliver in 6-month cycles.

With respect to whether OMB's guidance is realistic, officials from Defense, HHS, DHS, and Transportation explained that they do not want to require all of their projects to deliver functionality every 6 months because it may not be reasonable for all investments to do so. Defense, DHS, and Transportation officials said that delivering every 12 months, as advocated in OMB's IT Reform Plan, is more reasonable. According to Defense officials from the Office of the CIO, 12-18 month incremental development is better aligned with the acquisition framework that many of its IT investments have used. DHS and Transportation Office of the CIO officials stated that, depending on program size, scope, complexity, budget, schedule, and expertise, it may be more reasonable to deliver functionality every 12 months. As discussed later in this report, we agree that OMB's requirement to deliver functionality every 6 months is unrealistic. OMB staff members from the Office of E-Government and Information Technology noted that it will take time for agencies to embrace delivering functionality every 6 months because of the perceived risk of adopting such approaches. Those staff members explained that agencies, such as Defense, have been using the waterfall development method for many years, and they perceive less risk in continuing with that method than in changing to a method that produces functionality more rapidly.

Lastly, two key aspects of OMB's guidance are not clear. First, in revising Circular A-130 in 2000, OMB did not identify the minimum requirements of what the agencies' policies are to include and did not specify when the policies are to be completed. Although OMB issued later guidance on incremental development, it has not yet specified what agencies' incremental development policies are to include. Second, OMB's guidance did not provide a complete definition of the functionality it expects to be delivered every 6 months. According to staff from the Office of E-Government, OMB intends for agencies to deliver functionality that can be tested by business users; nevertheless, they noted that they left this definition out of their guidance so that agencies could develop a definition that would be flexible enough to meet their needs. However, in the absence of further guidance from OMB, agencies may continue to leave this term undefined or may create definitions that are inconsistent with the intent of OMB's policy. For example, HHS officials from the Office of the CIO told us that the completion of requirements documents could meet OMB's definition of delivering functionality. Additionally, an FAA Office of the CIO official explained that some investments have classified the delivery of requirements documentation as functionality. These two examples are not consistent with OMB's intent since they do not deliver functionality that can be tested, but instead only plan to deliver project documentation. Until OMB issues explicit, realistic, and clear guidance and Defense, HHS, DHS, and Transportation address the identified weaknesses in their incremental development policies, it will be difficult to deliver project functionality more rapidly, measure how often projects are delivering functionality, and enforce compliance with the delivery time frames called for in their policies.
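To illustrate why a precise definition matters, the sketch below shows one way an agency could operationalize OMB's stated intent—that delivered functionality be something business users can test—as an automated screen over planned-activity descriptions. This is a minimal, hypothetical example: the keyword list, function name, and sample activities are our own assumptions for illustration, not OMB criteria or any agency's actual tooling.

    # Hypothetical screen over planned-activity descriptions. The keyword
    # list is illustrative only; a real policy would define functionality
    # positively (e.g., testable by business users), not by keywords alone.
    DOCUMENTATION_TERMS = (
        "requirements document",
        "design document",
        "project plan",
        "contract award",
    )

    def delivers_testable_functionality(description: str) -> bool:
        """Return False for activities that deliver only paperwork."""
        text = description.lower()
        return not any(term in text for term in DOCUMENTATION_TERMS)

    sample_activities = [
        "Release 2: eligibility module available for user acceptance testing",
        "Completion of requirements documentation",
        "Contract award for system integrator",
    ]
    for activity in sample_activities:
        label = ("functionality" if delivers_testable_functionality(activity)
                 else "documentation only")
        print(f"{label:>18}: {activity}")

Under a screen of this kind, the HHS and FAA examples above—requirements documents classified as functionality—would be flagged rather than counted as deliveries.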
In its 2010 IT Reform Plan, OMB called for IT programs to deliver functionality at least every 12 months. Subsequently, OMB made this requirement more stringent: it now requires projects associated with major IT investments to deliver functionality every 6 months. The majority of the selected investments we reviewed did not plan to deliver functionality every 6 months. Specifically, only 23 of the selected 89 investments had one or more projects that, when taken collectively, planned to deliver functionality every 6 months. To VA's credit, all six of the department's selected investments planned to deliver functionality every 6 months. The other agencies varied in the extent to which they met the standards established by OMB's guidance. Table 2 shows how many of the selected investments at each agency planned on delivering functionality every 6 months during fiscal years 2013 and 2014.

The variety of life-cycle cost estimates for these investments shows that incremental development can be applied to a wide variety of investment scopes. Specifically, of the 23 investments that planned to deliver functionality in 6-month cycles, 9 had cost estimates that were less than $250 million, 4 had estimates between $250 million and $575 million, 5 had estimates between $575 million and $2 billion, and 5 had estimates greater than $2 billion.

Twenty-seven of the 89 investments in our review (30 percent) reported using an Agile development methodology for one or more of their projects. Of those 27 investments, 14 (52 percent) planned to deliver functionality every 6 months. The other 13 investments using Agile did not plan on delivering functionality as frequently as OMB guidance requires. We have previously found that Agile projects typically produce working functionality every 1 to 8 weeks; as such, it appears that these investments may not be properly implementing an Agile development methodology.

Agency officials cited three types of investments for which it may not always be practical or necessary to expect functionality to be delivered in 6-month cycles: (1) investments in life-cycle phases other than acquisition (i.e., planning and budgeting, management in-use, and disposition); (2) investments intended to develop IT infrastructure; and (3) research and development investments.

Life-cycle phases other than acquisition. Officials from Defense, HHS, DHS, and Transportation stated that it is not reasonable to expect investments to deliver functionality every 6 months when their investments' projects are not in the acquisition phase. Specifically, 24 investments did not have projects in the acquisition stage. Of those 24 investments, 22 did not plan to deliver functionality in 6-month cycles (10 from Defense, 3 from HHS, 1 from DHS, and 8 from Transportation), and 2 investments did plan to do so (2 from HHS). For the 2 investments that planned to deliver functionality every 6 months, both had at least one project in the management in-use phase, meaning that at least one of each investment's projects was beyond the planning and development stages and was being used to support agency operations.

Infrastructure investments. Officials from Defense, DHS, and Transportation explained that not all infrastructure investments can be expected to deliver functionality every 6 months. Specifically, 21 investments provide infrastructure, such as IT security, office automation, and telecommunications.
For example, officials representing two of the DHS investments explained that, prior to deploying functionality, they need to acquire real estate, conduct environmental assessments, and perform construction work, such as digging trenches, burying cables, and building facilities. Of the 21 investments, 20 did not plan to deliver functionality every 6 months (17 from Defense, 1 from HHS, and 2 from DHS); however, 1 investment did plan to do so (1 from Defense).

Research and development investments. Officials from FAA's Office of the CIO explained that FAA's research and development investments are not intended to deliver functionality. Those officials stated that, before the agency approves a technology for further development, it performs research and development to ensure that the technology meets safety standards. If those standards are met, FAA creates a new investment aimed at deploying that technology. Consistent with this, none of FAA's six research and development investments planned to deliver any functionality during fiscal years 2013 and 2014.

These concerns have merit. For example, with respect to investments in life-cycle phases other than acquisition, at the outset of a new investment, an agency may need more than 6 months to, among other things, define high-level requirements and find a contractor to help develop the system. In addition, it may not be necessary for an investment to continue delivering new functionality when all planned functionality has been fully deployed. Regarding infrastructure investments, it may not be practical or cost-effective for an investment to refresh fully functioning hardware every 6 months. Additionally, it may not be feasible for an agency to build a physical facility (e.g., data center) within 6 months. Further, for research and development investments, industry practices and our work on best practices support FAA's efforts to thoroughly validate the feasibility and cost-effectiveness of new technology prior to making significant investments.

Although OMB requires all investments to deliver functionality every 6 months, an OMB staff member from the Office of E-Government explained that not all investments will be able to meet this goal. Instead, that staff member said that about half of the federal government's major IT investments will deliver functionality in 6 months or less, and the other half will have longer development cycles. However, OMB's guidance does not make this distinction. As a result, agencies may be confused about whether OMB's incremental development guidance applies to all investments.

If these three types of investments, which account for 40 of the selected 89 investments, are not considered, 29 of the remaining 49 investments did not plan to deliver functionality in 6-month cycles. Table 3 shows, after removing the three types of investments discussed above, how many of the selected investments at each agency planned to deliver functionality every 6 months during fiscal years 2013 and 2014.

Considering agencies' concerns about delivering functionality every 6 months for the three types of investments discussed above and OMB's own expectations that many investments will not meet this goal, it is unclear whether this is the most appropriate governmentwide goal, and it raises the question of whether OMB should consider a longer time frame, such as 12 months, as called for in OMB's IT Reform Plan.
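The filtered tallies above can be cross-checked with simple arithmetic. (Note that the three category counts—24, 21, and 6—sum to 51 rather than 40, which implies that some investments fall into more than one category.) The sketch below is a minimal consistency check using only figures stated in this report.

    # Consistency check using only figures reported above.
    total_investments = 89
    excluded = 40            # investments in the three excepted categories
    met_6_months_overall = 23
    met_within_excluded = 3  # 2 HHS (management in-use) + 1 Defense (infrastructure)

    remaining = total_investments - excluded                    # 49
    met_remaining = met_6_months_overall - met_within_excluded  # 20
    not_met_remaining = remaining - met_remaining               # 29, as stated

    print(remaining, met_remaining, not_met_remaining)  # 49 20 29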
However, even using the time frame of 12 months as the target, less than half of the selected investments planned to deliver functionality in 12-month cycles. Specifically, 41 of the 89 selected investments planned to deliver functionality every 12 months during fiscal years 2013 and 2014, and 48 did not. Most notably, the preponderance of Defense and Transportation investments (70 percent for Defense and 65 percent for Transportation) did not plan to deliver functionality every 12 months. Table 4 shows how many of the selected investments at each agency planned on delivering functionality every 12 months during fiscal years 2013 and 2014.

The previously discussed weaknesses in agency policies have permitted the inconsistent implementation of incremental development approaches. OMB staff members from the Office of E-Government acknowledged that inconsistent implementation of OMB's guidance can be at least partially attributed to challenges in ensuring that agencies develop consistent definitions of and approaches to incremental development.

Although OMB has led the government's recent effort to improve incremental development, it has not completed a key commitment aimed at improving incremental development—namely, OMB has rarely used the TechStat process to turn around or cancel investments that are not using incremental development. As previously mentioned, TechStat sessions are intended to terminate or turn around IT investments that are failing or are not producing results. Additionally, many failed IT investments have not used incremental development approaches. Therefore, OMB could use TechStat sessions as a powerful tool to turn around investments that are not being acquired incrementally. However, OMB staff members from the Office of E-Government said that OMB has held only one such TechStat.

OMB staff from the Office of E-Government explained that, in order to select investments in need of a TechStat session, agency exhibit 300 data are reviewed for evidence of poor performance. However, the usefulness of the exhibit 300 data for the purpose of identifying whether investments are using incremental approaches is limited. For example, OMB does not require agencies to explicitly identify whether their investments are using incremental approaches. Additionally, of the 89 selected investments, 34 had activities in their exhibit 300 submissions that were inaccurately classified as delivering functionality (9 from Defense, 8 from HHS, 5 from DHS, 9 from Transportation, and 3 from VA). For example, one Defense investment indicated that awarding a contract constituted a delivery of functionality. OMB staff from the Office of E-Government acknowledged that the exhibit 300 data are not as helpful as possible in addressing incremental development. Consequently, for its TechStat reviews, OMB's insight into investments' use of incremental development approaches is limited. Officials from Defense, HHS, DHS, and Transportation attributed the problem to a lack of guidance from OMB on what is to be delivered every 6 months. Nevertheless, officials from Defense, DHS, and VA stated that they would properly classify activities in future exhibit 300 submissions. However, without additional guidance from OMB, agencies may continue to improperly classify activities. Additionally, as previously mentioned, four of the five selected agencies—Defense, HHS, DHS, and Transportation—have not updated their TechStat policies to include identifying investments that are not being acquired incrementally.
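A screen of the kind OMB could run—if exhibit 300 submissions captured planned delivery dates explicitly—might look like the sketch below. It applies the four half-year windows used in this review to an investment's project-level delivery dates and flags the investment for a TechStat review when any window lacks a delivery. The data structure and the investment shown are hypothetical assumptions for illustration; as noted above, current exhibit 300 data do not reliably support this check.

    # Hypothetical TechStat screen: flag an investment when its projects,
    # taken collectively, do not deliver functionality in every 6-month
    # window of fiscal years 2013-2014. Data below are invented.
    from datetime import date

    WINDOWS = [  # first/second halves of fiscal years 2013 and 2014
        (date(2012, 10, 1), date(2013, 3, 31)),
        (date(2013, 4, 1), date(2013, 9, 30)),
        (date(2013, 10, 1), date(2014, 3, 31)),
        (date(2014, 4, 1), date(2014, 9, 30)),
    ]

    planned_deliveries = {  # project name -> planned functionality dates
        "Project A": [date(2012, 12, 15), date(2013, 11, 1)],
        "Project B": [date(2014, 5, 15)],
    }

    def delivers_every_window(deliveries) -> bool:
        """True if the projects collectively deliver in every window."""
        dates = [d for ds in deliveries.values() for d in ds]
        return all(any(lo <= d <= hi for d in dates) for lo, hi in WINDOWS)

    if not delivers_every_window(planned_deliveries):
        print("Flag for TechStat review")  # no delivery in some window
    else:
        print("Meets the 6-month delivery goal")

In this invented example, no project plans a delivery in the second half of fiscal year 2013, so the investment would be flagged.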
Without better implementation of incremental development approaches and identification of investments using incremental development, IT expenditures will likely continue to produce disappointing results—including large cost overruns, long schedule delays, and questionable mission-related achievements. Further, without useful information on projects associated with major IT investments—including whether investments are following an incremental approach—OMB does not have the necessary information to oversee the extent to which projects and investments are implementing its guidance.

The five agencies identified multiple factors as enabling and inhibiting incremental development during a 6-month period. Specifically, eight factors were identified by three or more of the five agencies in our review as enabling incremental development of IT systems, and seven factors were identified by three or more agencies as inhibiting incremental development. The enabling factor identified by all five of the agencies was active engagement of program officials with stakeholders. The inhibiting factors identified by all five agencies were (1) the lack of sufficient, timely funding; (2) program characteristics that made rapid incremental development infeasible; and (3) the lack of stable, prioritized requirements.

Eight factors were identified by three or more of the five agencies in our review as contributing to the successful development of functionality in 6-month cycles. All five of the agencies in our review cited active program engagement with stakeholders. Table 5 shows the distribution of the eight factors among the agencies, and examples of how the agencies implemented them are discussed following the table.

Program officials actively engaged with stakeholders. Officials from all five agencies explained that active engagement with program stakeholders—individuals or groups with an interest in the success of the investment—was a factor that enabled the development of functionality in 6-month cycles. For example, officials from one of the HHS investments that we reviewed stated that having strong communication between the business program, the IT program, and contractors enabled projects to move forward on schedule in a cohesive manner with clear goals and objectives.

Programs used an Agile development methodology. Officials from four of the five agencies indicated that the use of an Agile development methodology helped them to deliver functionality every 6 months. For example, Defense officials explained that the use of Agile development processes allowed software to be broken into smaller releases that were easily achieved, tested, and fielded. Additionally, VA officials stated that the agency has embraced Agile development, which has helped the department deliver functionality more quickly. Those officials also noted that merely forcing investments to use 6-month waterfall iterations would not have resulted in the same success.

Programs successfully prioritized, managed, and tested requirements. Officials from four of the five agencies identified implementation of requirements management practices—including requirements prioritization, management, and testing—as a factor that enabled the agencies to deliver functionality every 6 months. For example, Transportation officials explained that the primary factor that enabled incremental development has been obtaining clear requirements.
Further, HHS officials stated that the use of a prioritized product backlog has helped teams make decisions about which requirements should be allocated to future releases.

Staff had the necessary skills and experience. Officials from four of the five agencies stated that consistent and stable program staff with experience and expertise allowed them to deliver functionality frequently. For example, DHS officials representing one of the selected investments indicated that having skilled program managers has a large impact on the success of a program. In addition, VA officials reported that their ability to retain key personnel with both functional and technical expertise has enabled them to deliver high-quality functionality rapidly.

Programs successfully implemented cost and schedule estimating best practices. Officials from four of the five agencies cited the implementation of cost and schedule estimating practices as enabling the frequent delivery of functionality. For example, Defense officials told us that development of realistic cost estimates and comprehensive program schedules helped programs to deliver functionality while meeting established performance goals. Further, Transportation officials stated that improved guidance and training on cost estimating and scheduling helped them to deliver functionality in 6-month cycles.

Officials used various contracting strategies to increase flexibility and improve contractor oversight. Officials from four of the five agencies cited the use of specific contracting strategies. For example, DHS officials indicated that they worked with contractors to modify their contracts from the traditional waterfall approach to a structure that allows for Agile development. In addition, officials from VA stated that the use of performance-based acquisitions assisted them in monitoring contractor progress towards achieving actual results against planned objectives.

Staff used key technologies to accelerate development work. Officials from three of the five agencies indicated that the use of key technologies enabled them to deliver functionality more quickly. For example, HHS officials explained that having an established cloud environment has enabled them to reduce deployment time for releases, while also providing the needed flexibility to meet their customers' changing needs. Additionally, DHS officials explained that projects, especially those using an Agile development methodology, have saved time using automated testing tools.

Programs successfully implemented risk management best practices. Officials from three agencies explained that the successful implementation of risk management practices helped them to deliver functionality more rapidly. For example, officials from one of the selected VA investments told us that the organization's risk management process allowed teams to quickly escalate risks to senior leadership so that they could be managed before their impacts were fully realized.

Many of the factors that the five agencies cited are consistent with our 2011 work on factors critical to successful IT acquisitions and our 2012 work on effective practices in implementing Agile software development. In particular, our work in both areas discussed the importance of active engagement with program stakeholders, such as actively engaging with stakeholders to obtain customer feedback, and effectively managing requirements.
Additionally, the eight commonly identified factors that enable the delivery of functionality in 6 months are consistent with OMB's IT Reform Plan. In particular, as previously mentioned, one high-level objective of the plan—effectively managing large-scale IT programs—aims to improve areas that impact the success rates of large IT programs by, among other things, enabling programs to use incremental development approaches. As part of this high-level objective, the plan addresses the importance of actively engaging with stakeholders, ensuring that program management professionals have proper skills and experience, and developing new contracting strategies that more effectively support incremental development.

Seven factors were identified by three or more of the five agencies as inhibiting the development of IT functionality every 6 months. The factors most commonly cited were that (1) programs did not receive sufficient funding or received funding later than needed, (2) program characteristics made rapid delivery of functionality infeasible or impractical, and (3) programs did not have stable and prioritized requirements. Table 6 shows the distribution of the seven factors among the agencies, and examples of how the factors impacted the agencies are discussed following the table.

Programs did not receive sufficient funding or received funding later than needed. Officials from all five departments cited insufficient funding, such as reductions caused by the fiscal year 2013 sequester, or receiving funding later than needed because of continuing resolutions. For example, Defense officials representing one of the investments we reviewed stated that furloughs brought on by the 2013 sequester significantly impacted program schedules, both in terms of lost work days and the inability to coordinate integration between software providers, resulting in overall inefficiencies. In addition, several FAA officials explained that the delivery of planned functionality was adversely affected by the uncertainty brought about by the 2013 sequester, and that future funding instability has impacted the agency's ability to plan when functionality is to be delivered.

Program characteristics made rapid delivery of functionality infeasible or impractical. Officials from all five agencies indicated that some of their programs have certain characteristics—such as the deployment of new physical infrastructure, human health and safety concerns, or external dependencies—that make the delivery of functionality in 6-month time frames infeasible or impractical. For example, DHS officials explained that their infrastructure projects cannot deliver functionality until all key activities (e.g., land acquisition, environmental assessments, site preparation, and construction) have been completed and the new infrastructure has been fully deployed, tested, and accepted. Additionally, Transportation officials reported that air traffic control systems require years of development and testing in order to ensure that the systems do not compromise airspace security.

Programs did not have stable, prioritized requirements. Officials from all five agencies stated that not having complete requirements at the beginning of their investments' development or having changes made to requirements and their relative priorities during development was a factor that negatively affected their programs' delivery of incremental functionality in 6-month periods.
For example, HHS officials representing one of the selected investments explained that final rules detailing eligibility for a medical program were not completed until late in the development cycle; consequently, the program could not define the corresponding business requirements in time to meet the development schedule. Further, VA officials stated that schedules are disrupted when stakeholders identify new program requirements that must be delivered before others.

Development work was slowed by inefficient governance and oversight processes. Officials from three of the five agencies cited inefficient governance and oversight processes. For example, DHS officials representing the Office of the CIO stated that the current DHS governance model was not conducive to frequent delivery of functionality. To illustrate, those officials noted that it can take up to 2 months to schedule a meeting with DHS review boards prior to releasing functionality. However, a DHS official from Program Accountability and Risk Management disagreed with this statement, explaining that DHS's acquisition review boards perform reviews very quickly, and that any delays in completing these reviews are attributable to investments being unprepared. Further, DHS Office of the CIO officials suggested that governance over programs using an Agile development methodology should be performed at the lowest practicable level of the organization. In addition, VA officials explained that, although the periodic approvals required of various department groups are useful, these approvals can slow the testing and release processes and could use further streamlining.

Development schedules were impeded by procurement delays. Officials from three of the five agencies explained that procurement delays—such as delays in getting contracts awarded or approving contract modifications—contributed to difficulties in delivering incremental functionality. For example, DHS officials explained that the process of planning for an acquisition, developing solicitations, selecting contractors, and, in some cases, addressing protests is not conducive to delivering functionality in 6-month cycles.

Program staff were overutilized or lacked the necessary skills and experience. Officials from three of the five agencies indicated that they did not have enough program staff with the expertise and experience necessary to deliver functionality every 6 months. For example, officials representing one of the HHS investments stated that the loss of key personnel and the lack of staff with knowledge and skill in key disciplines, such as incremental development, have negatively impacted the agency's ability to deliver functionality.

Incremental development was impeded by selected technologies. Officials from three of the five agencies in our review explained that the software and tools they selected for their programs either introduced delays in development or were ultimately not usable by the program. Transportation officials representing one of the selected investments reported that their chosen technology added a new level of complexity to storage and data processing that required all development and testing to be completed before implementation could occur. As a result, those officials told us that all functionality had to be deployed in one release, instead of being spread out in stages.
Many of the factors that the five agencies identified as inhibiting delivery of functionality in 6-month increments were consistent with our work on the challenges in applying Agile software development methods. For example, we noted that one challenge in deploying software developed using an Agile method is that traditional governance and oversight activities were difficult to execute within an iterative time frame. In particular, one agency official stated that completing the necessary reviews within the short, fixed time frame of an Agile iteration was difficult because reviewers followed a slower waterfall schedule with reviews that could take months to perform after the completion of an iteration. This caused delays for iterations that needed such reviews within the few weeks of the iteration. We also noted that procurement practices sometimes delay Agile development.

Further, OMB's IT Reform Plan identified some of the same problems that inhibit incremental development and described solutions to address them. In particular, as previously mentioned, one item of the plan was to improve funding of programs by working with Congress to create IT budget models that promote IT budget flexibility (such as multiyear budgets or revolving funds). In addition, the plan addresses the importance of streamlining IT governance by strengthening the quality and timing of oversight activities. The factors identified in this report as enabling and inhibiting incremental development could help agencies address the challenges they face in incrementally acquiring IT.

Given the enormous size of the federal government's investment in IT and the often disappointing results from IT development efforts—many of which are attributable to the use of a "big bang" approach—finding ways to improve the quality and timeliness of agencies' investments is important. Congress, OMB, and our work support the use of incremental development practices. However, although the selected agencies have developed policies that address incremental development, most of the agencies' policies have significant weaknesses. With the exception of VA, these policies do not fully define functionality and do not have a complete process for ensuring that the agencies' investments are developed incrementally, including the use of TechStat sessions to enforce compliance with incremental development policies. In the absence of such policies, agencies continue to run the risk of failing to deliver major investments in a cost-effective and efficient manner.

Regarding implementation of incremental development, slightly more than one-fourth of selected investments planned to deliver functionality every 6 months—and less than one-half planned to do so every 12 months. Thus, delivering functionality every 6 months is not an appropriate requirement for all agencies given current performance. Requiring the delivery of functionality every 12 months, consistent with OMB's IT Reform Plan, would be an appropriate starting point and a substantial improvement. Further, since there are three types of investments for which it may not always be practical or necessary to expect functionality to be delivered in 6-month cycles, questions arise about whether OMB's requirement for shorter delivery cycles should be applied to all investments or whether investments should be allowed the latitude to determine a more appropriate time frame.
The lack of progress in updating policies and implementing OMB's guidance stems in part from weaknesses in that guidance. In the absence of agency and OMB use of TechStat sessions to ensure compliance with incremental development policy, investments will continue to be at risk of not delivering promised capabilities on time and within budget. Until OMB clearly and explicitly disseminates guidance with realistic goals and clear expectations, and agencies update their policies to reflect this guidance, agencies may not consistently adopt incremental development approaches, and IT expenditures will continue to produce disappointing results—including sizable cost overruns and schedule slippages, and questionable progress in meeting mission goals and outcomes. Additionally, without useful information on how often projects are delivering functionality, it will be difficult for OMB to identify investments that are not implementing its guidance. Further, dissemination of the factors identified in this report as enabling and inhibiting incremental development may help federal agencies address the challenges they face in acquiring IT investments faster and more efficiently.

We recommend that the Director of the Office of Management and Budget direct the Federal Chief Information Officer to take the following two actions. First, update, and clearly and explicitly issue, incremental development guidance that (1) requires projects associated with major IT investments to deliver incremental functionality at least every 12 months, with the exception of the three types of investments identified in this report; (2) specifies how agencies are to define the project functionality that is to be delivered; and (3) requires agencies to define a process for enforcing compliance with incremental functionality delivery, such as the use of TechStat sessions. Second, require agencies to clearly identify on exhibit 300 submissions whether, for each project, functionality will be delivered within the time frames called for by this incremental development guidance, and to provide justification for projects that do not plan to do so.

We further recommend that the Secretaries of Defense, Health and Human Services, Homeland Security, and Transportation take the following two actions: modify, finalize, and implement their agencies' policies governing incremental development to ensure that those policies comply with OMB's guidance, once that guidance is made available; and, when updating their policies, consider the factors identified in this report as enabling and inhibiting incremental development. We also recommend that the Secretary of Veterans Affairs consider incorporating the factors identified in this report as enabling and inhibiting incremental development in the department's related policy.

We received comments on a draft of this report from OMB and the five agencies in our review. OMB agreed with one recommendation and partially disagreed with the other; Defense generally concurred with the report; HHS neither agreed nor disagreed with the report's recommendations; DHS agreed with our recommendations; Transportation did not agree with the recommendations in that it did not believe the department should be dependent on OMB first taking action; and VA generally agreed with the report's conclusions and concurred with our recommendation. Each agency's comments that we received are discussed in more detail below.
In comments provided via e-mail on April 15, 2014, staff in OMB's Office of General Counsel, on behalf of OMB, stated that the agency agreed with one of our recommendations and partially disagreed with the other. Specifically, OMB agreed with our recommendation to require agencies to clearly identify on exhibit 300 submissions whether functionality will be delivered within the time frames called for by this incremental development guidance. OMB stated that it agreed with our recommendation to update and issue incremental development guidance, but did not agree that this guidance should require major IT investments to deliver incremental functionality at least every 12 months. OMB explained that changing the requirement from 6 to 12 months would reduce the emphasis on incremental development that it has been advocating. OMB also noted that it believes requiring investments to deliver functionality every 6 months is an appropriate governmentwide goal and said that updating and clarifying its guidance on incremental development will make it easier for agencies to meet this target. However, as we state in this report, slightly more than one-fourth of selected investments planned to deliver functionality every 6 months—and less than one-half planned to do so every 12 months. Additionally, there are three types of investments for which it may not always be practical or necessary to expect functionality to be delivered in 6-month cycles. Thus, we continue to believe that delivering functionality every 6 months is not an appropriate requirement for all agencies and that requiring the delivery of functionality every 12 months, consistent with OMB's IT Reform Plan, is a more appropriate starting point. We therefore maintain that OMB should require projects associated with major IT investments to deliver functionality at least every 12 months.

In written comments, Defense stated that it generally concurred with the report and outlined planned actions to address the recommendations. However, Defense explained that many of its IT investments are consistent with the three types of investments for which it may not always be practical or necessary to expect functionality to be delivered in 6-month cycles or have other exceptional circumstances. Given these issues, Defense stated that requiring the delivery of functionality every 12 months is not a useful management constraint. However, as we state in our report, many failed projects have been broadly scoped in that they aim to deliver their capabilities several years after initiation. Additionally, the Defense Science Board reported that Defense's acquisition process for IT systems was too long, was ineffective, and did not accommodate the rapid evolution of IT. In order to resolve these issues, we continue to believe that investments and their projects should deliver functionality more frequently. In addition, the existence of many investments that are consistent with the three types of investments identified in this report for which it may not be practical to deliver functionality quickly does not excuse Defense from the requirement of delivering functionality every 12 months. Consistent with our recommendation to OMB, investments and projects that meet these three exceptions should be exempt from the requirement, but all other investments and projects should aim to meet this requirement.
Further, we expect that Defense’s actions to implement our recommendation to consider the factors identified in this report that enable and inhibit incremental development will help its investments and projects deliver functionality more rapidly. As such, we continue to believe that requiring the delivery of functionality every 12 months is an appropriate goal for all federal agencies, including Defense. Defense’s comments are reprinted in appendix III. The department also provided technical comments, which we have incorporated in the report as appropriate. In comments provided via e-mail on April 15, 2014, an official from HHS’s Office of the Assistant Secretary for Legislation, on behalf of HHS, stated that the department had no comments. In written comments, DHS stated that it concurred with our recommendations and outlined planned actions to address the recommendations. DHS’s comments are reprinted in appendix IV. The department also provided technical comments, which we have incorporated in the report as appropriate. In comments provided via e-mail on April 15, 2014, Transportation’s Deputy Director of Audit Relations, on behalf of Transportation, stated that the department would prefer to have specific recommendations and deliverables so that it can achieve success in closing them. Specifically, the department explained that relying on another agency to concur with one of our recommendations before Transportation can take action leaves the department with the potential challenge of a recommendation that cannot be implemented. However, as previously stated, OMB agrees with our recommendation to update and issue incremental guidance, meaning that OMB has committed to taking the actions necessary to enable Transportation to begin addressing our recommendation. Additionally, our recommendation to consider the factors identified in this report as enabling and inhibiting incremental development when updating incremental development policies does not require another agency to take action before Transportation can implement it. Accordingly, we continue to believe that our recommendations are warranted and can be implemented. In written comments, VA stated that it generally agreed with the report’s conclusions, that it concurred with our recommendation to the department, and that it will review its existing policy for incremental development and consider incorporating the factors identified as enabling and inhibiting incremental development. VA’s comments are reprinted in appendix V. The department also provided technical comments, which we have incorporated in the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Director of the Office of Management and Budget; and the Secretaries of the Departments of Defense, Health and Human Services, Homeland Security, Transportation, and Veterans Affairs. In addition, the report will also be available at no charge on our website at http://www.gao.gov/. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. 
Our objectives for this review were to (1) assess whether selected agencies have established policies for incremental information technology (IT) development, (2) determine whether selected agencies are using incremental development approaches to manage their IT investments, and (3) identify the key factors that enabled and inhibited the selected agencies' abilities to effectively use incremental development approaches to manage their IT investments.

In conducting our review, we selected five agencies and a total of 89 investments from those agencies. To choose the agencies, we identified the five agencies with the largest IT budgets for development, modernization, and enhancement on major IT investments in fiscal years 2013 and 2014 as reported in the Office of Management and Budget's (OMB) fiscal year 2014 exhibit 53. Those agencies are the Departments of Defense (Defense), Health and Human Services (HHS), Homeland Security (DHS), Transportation (Transportation), and Veterans Affairs (VA). To choose the agencies' investments, we identified the 98 major IT investments for which the selected agencies planned to spend more than 50 percent of the investments' fiscal year 2013 and 2014 budgets on development, modernization, and enhancement as reported in OMB's fiscal year 2014 exhibit 53. We removed 9 investments because, after reporting their planned budgets for OMB's fiscal year 2014 exhibit 53, these investments changed their budgets so that they no longer planned to spend more than 50 percent of their fiscal year 2013 and 2014 budgets on development, modernization, and enhancement. The final 89 investments are identified in appendix II. The investments selected for the review account for about 58 percent of the development, modernization, and enhancement spending on all federal agencies' major IT investments for fiscal years 2013 and 2014 reported in OMB's exhibit 53 for fiscal year 2014.

To address our first objective, we reviewed OMB guidance related to the use of incremental development, as well as industry guidance, and identified three key components of incremental development that agencies should include in their policies. In order to identify these components, we reviewed OMB's guidance and leading industry guidance on institutionalizing processes throughout an organization. This analysis identified three key policy components that will help agencies effectively implement OMB's requirement for incremental development.

Require delivery of functionality every 6 months. According to the Software Engineering Institute's (SEI) Capability Maturity Model® Integration for Acquisition (CMMI-ACQ), as part of institutionalizing a process, organizations should document their processes, to include defining standards and requirements. According to OMB budget guidance, projects associated with major IT investments must deliver functionality every 6 months.

Define functionality. As previously stated, according to SEI's CMMI-ACQ, as part of institutionalizing a process, organizations should document that process, to include defining standards and requirements.

Define a process for enforcing compliance. According to SEI's CMMI-ACQ, as part of institutionalizing a process, organizations should document that process, including management review activities (e.g., taking corrective action when requirements and objectives are not being satisfied). Additionally, OMB Circular A-130 requires agencies to ensure that investments comply with their policies.
Further, according to the President’s Fiscal Year 2014 Budget, agencies are to use their TechStat processes to identify investments that are not being acquired incrementally and undertake corrective actions. At each selected agency, we then analyzed agency policies for incremental development and compared these policies to the three components identified above. For each agency, each policy component was assessed as either being not met—the agency did not provide evidence that addressed the component or provided evidence that minimally addressed the component; partially met—the agency provided evidence that addressed about half or a large portion of the component; or fully met—the agency provided evidence that addressed the component. We also interviewed officials from OMB and the five selected agencies to obtain information about their current and future incremental development policies. To address our second objective, we administered a data collection instrument to each of the selected investments about how often each investment planned to deliver functionality during fiscal years 2013 and 2014. We then analyzed information obtained from data collection instruments describing how often the selected investments planned to deliver functionality. We prepopulated these instruments with data obtained from OMB’s fiscal year 2014 exhibit 53, as well as each investment’s exhibit 300 data, which describe the investments’ projects and how often each project plans to deliver functionality. We asked officials from each investment to verify the accuracy and completeness of the data and to make corrections where needed. Because the exhibit 300 data did not always describe the investments’ plans for delivering functionality in fiscal year 2014, we asked the officials to indicate for each project whether they planned to deliver functionality in the first half of fiscal year 2014 (i.e., from October 1, 2013, to March 31, 2014) and in the second half of fiscal year 2014 (i.e., from April 1, 2014, to September 30, 2014). We also asked the investments to provide their life-cycle cost estimates and, for each project, the development methodology used and phase of the acquisition life cycle. Using the information obtained through the data collection instruments, we determined the extent to which the selected investments and their projects planned to meet OMB’s guidance on incremental development. To assess whether investments had planned to deliver functionality every 6 months, we determined whether the selected investments planned to deliver functionality in each of the following four time frames: (1) the first half of fiscal year 2013 (i.e., from October 1, 2012, to March 31, 2013), (2) the second half of fiscal year 2013 (i.e., from April 1, 2013, to September 30, 2013), (3) the first half of fiscal year 2014, and (4) the second half of fiscal year 2014. To determine whether investments had planned to deliver functionality every 12 months, we analyzed whether the selected investments planned to deliver functionality in each of the following two time frames: (1) fiscal year 2013 and (2) fiscal year 2014. We presented our results to the five selected agencies and OMB and solicited their input and explanations for the results. To determine the reliability of the exhibit 300 data, we performed three steps. First, as previously mentioned, we asked officials from each investment to verify the accuracy and completeness of these data and provide the correct information where needed. 
Second, we removed projects that were not intended to deliver functionality and activities that were inaccurately classified as resulting in the delivery of functionality. To do so, we compared the descriptions of the projects and activities with a definition of functionality that we developed. To develop this definition, we reviewed OMB and agency guidance, as well as leading practices. We defined functionality as follows: the implementation of IT requirements that is intended to either (1) ultimately be used by one or more customers in production (actual deployment may occur at a later date than when the functionality is reported as being delivered) or (2) be a delivery of a prototype or pilot. We then assessed the descriptions of the activities and projects agencies reported in their exhibit 300 submissions against our definition. We presented the activities and projects that did not meet our definition to the selected agencies and solicited their input and explanations. Third, where there was a conflict between the exhibit 300 data and agencies' answers to our questions regarding plans for delivering functionality in fiscal year 2014, we presented the conflict to the agencies and obtained clarification. We determined that the data were sufficiently reliable for the purpose of this report, which is to determine the extent to which the selected investments planned to deliver functionality every 6 and 12 months during fiscal years 2013 and 2014.

To address our third objective, we compared the factors that the agencies identified with those in our prior work on critical success factors for major IT acquisitions (GAO, Information Technology: Critical Factors Underlying Successful Major Acquisitions, GAO-12-7 (Washington, D.C.: Oct. 21, 2011)) and on effective practices in implementing Agile software development methods. Additionally, we compared the factors to OMB's 25 Point Implementation Plan to Reform Federal Information Technology Management. Further, because Defense guidance encourages investments to deliver functionality every 12-18 months where possible, we asked the selected investments and officials from Defense's Office of the CIO to identify the key factors that have both enabled and inhibited their efforts to deliver functionality for their major IT investments every 12-18 months. We compared the information we received to the eight factors the five selected agencies commonly identified as enabling incremental development during a 6-month period and the seven factors commonly identified by the five agencies as inhibiting incremental development during a 6-month period. We also performed a content analysis of the information we received in order to identify additional factors.

We conducted this performance audit from May 2013 to May 2014, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Below is the list of investments that are included in this review, as well as whether each planned to deliver functionality every 6 and 12 months for fiscal years 2013 and 2014.

In addition to the contact name above, individuals making contributions to this report included Dave Hinchman (Assistant Director), Deborah A. Davis (Assistant Director), Rebecca Eyler, Kaelin Kuhn, Jamelyn Payan, Meredith Raymond, Kevin Smith, Andrew Stavisky, and Kevin Walsh.

Federal agencies plan to spend at least $82 billion on IT in fiscal year 2014.
However, prior IT expenditures have often produced disappointing results. Thus, OMB has called for agencies to deliver investments in smaller parts, or increments. In 2010, it called for IT investments to deliver capabilities every 12 months and now requires investments to deliver capabilities every 6 months. GAO was asked to review agencies' incremental development approaches. Among other things, this report (1) assesses whether selected agencies have established policies for incremental IT development; and (2) determines whether selected agencies are using incremental development approaches to manage their IT investments. To do so, GAO selected five agencies—Defense, HHS, DHS, Transportation, and VA—and 89 total investments at these agencies. GAO then reviewed the agencies' incremental development policies and plans.

All five agencies in GAO's review—the Departments of Defense (Defense), Health and Human Services (HHS), Homeland Security (DHS), Transportation (Transportation), and Veterans Affairs (VA)—have established policies that address incremental development; however, the policies usually did not fully address three key components for implementing the Office of Management and Budget's (OMB) guidance (see table). Specifically, only VA fully addressed the three components. Among other things, agencies cited the following reasons that contributed to these weaknesses: (1) the guidance was not feasible because not all types of investments should deliver functionality in 6 months, and (2) the guidance did not identify what agencies' policies are to include or time frames for completion. GAO agrees these concerns have merit. Until OMB issues realistic and clear guidance and agencies address the weaknesses in their incremental development policies, it will be difficult to deliver project capability more rapidly.

[Table: agencies' policies assessed against the three key components. Key: ●=Fully met; ◐=Partially met; ○=Not met. Source: GAO analysis of agency documentation.]

The weaknesses in agency policies have enabled inconsistent implementation of incremental development approaches: almost three-quarters of the selected investments did not plan to deliver functionality every 6 months, and less than half planned to deliver functionality in 12-month cycles (see table). Without consistent use of incremental development approaches, information technology (IT) expenditures are more likely to continue producing disappointing results.

[Table: selected investments planning to deliver functionality every 6 and 12 months, by agency. Source: GAO analysis of agency data.]

Among other things, GAO recommends that OMB develop and issue realistic and clear guidance on incremental development and that the selected agencies update and implement their incremental development policies to reflect OMB's guidance. OMB partially disagreed, believing its guidance is realistic. Four agencies generally agreed with the report or had no comments, and one agency did not agree that its recommendations should be dependent on OMB first taking action. GAO continues to believe that its recommendations are valid, as discussed in this report.
In 1968, in recognition of the increasing amount of flood damage, the lack of readily available insurance for property owners, and the cost to the taxpayer for flood-related disaster relief, the Congress enacted the National Flood Insurance Act (P.L. 90-448), which created the National Flood Insurance Program. Since its inception, the program has sought to minimize flood-related property losses by making flood insurance available on reasonable terms and encouraging its purchase by people who need flood insurance protection—particularly those living in flood-prone areas known as special flood hazard areas. The program identifies flood-prone areas in the country, makes flood insurance available to property owners in communities that participate in the program, and encourages floodplain management efforts to mitigate flood hazards. The program has paid about $12 billion in insurance claims, funded primarily by policyholder premiums—losses that otherwise would, to some extent, have been covered by taxpayer-funded disaster relief.

Under the program, flood insurance rate maps (FIRM) have been prepared to identify special flood hazard areas—also known as 100-year floodplains—that have a 1-percent or greater chance of experiencing flooding in any given year. For a community to participate in the program, any structures built within a special flood hazard area after the FIRM was completed must be built according to the program's building standards, which are aimed at minimizing flood losses. A key component of the program's building standards that must be followed by participating communities is a requirement that the lowest floor of the structure be elevated to or above the base flood level—the highest elevation at which there is a 1-percent chance of flooding in a given year. The administration has estimated that the program's standards for new construction are saving about $1 billion annually in avoided flood damage.

When the program was created, the purchase of flood insurance was voluntary. To increase the impact of the program, however, the Congress amended the original law in 1973 and again in 1994 to require the purchase of flood insurance in certain circumstances. Flood insurance was required for structures in special flood hazard areas of communities participating in the program if (1) any federal loans or grants were used to acquire or build the structures or (2) the structures are secured by mortgage loans made by lending institutions that are regulated by the federal government. Owners of properties with no mortgages or properties with mortgages held by unregulated lenders were not, and still are not, required to purchase flood insurance, even if the properties are in special flood hazard areas.

The National Flood Insurance Reform Act of 1994 that amended the program also reinforced the objective of using insurance as the preferred mechanism for disaster assistance. The act expanded the role of federal agency lenders and regulators in enforcing the mandatory flood insurance purchase requirements. It prohibited further flood disaster assistance for any property where flood insurance was not maintained even though it was mandated as a condition for receiving prior disaster assistance. Specifically, the act prohibits borrowers who have received certain disaster assistance, and then failed to obtain flood insurance coverage, from receiving future disaster aid.
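The "100-year floodplain" label can understate the risk a property owner actually faces over time. Assuming, as a simplification, that each year's 1-percent flood chance is independent of the others, the cumulative probability of at least one flood over a 30-year mortgage term is about 26 percent. This is a standard illustrative computation, sketched below; it is not drawn from the program's own materials.

    # Cumulative chance of at least one flood, assuming an independent
    # 1-percent probability each year (a common simplification).
    annual_chance = 0.01
    years = 30
    cumulative = 1 - (1 - annual_chance) ** years
    print(f"{cumulative:.1%}")  # about 26.0%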
FEMA’s Federal Insurance and Mitigation Administration has been responsible for managing the flood insurance program. However, the Homeland Security Act of 2002 transferred this responsibility to the Department of Homeland Security (DHS). As part of the largest reorganization of the federal government in over 50 years, the legislation combined about 170,000 federal employees, 22 agencies, and various missions—some that have not traditionally been considered security related—into the new department. FEMA’s responsibilities, including the flood insurance program, were placed in their entirety into DHS, effective March 1, 2003. Responsibility for the flood insurance program now resides in DHS’s Emergency Preparedness and Response Directorate. Historically, federal government programs, including the National Flood Insurance Program, report income and expenditures on a cash basis—income is recorded when received, and expenditures are recorded when paid. Over the years, the annual reporting of the program’s premium revenues and its claims losses and expenses has shown wide fluctuations in cash-based operating net income or losses. For example, for fiscal year 2002, the program had a net income of $755 million, but in the previous year it had a net loss of $518 million. Over the life of the program, it has shown a cumulative net loss of $531 million. The program has, on numerous occasions, borrowed from the U.S. Treasury to fund claims losses. This “cash-based” budgeting, although useful for many government programs, may present misleading financial information on the flood insurance program. In 1997 and again in 1998, we reported that cash-based budgeting has shortcomings for federal insurance programs. Specifically, its focus on single-period cash flows can obscure the program’s cost to the government and thus may (1) distort the information presented to policymakers, (2) skew the recognition of the program’s economic impact, and (3) cause fluctuations in the deficit unrelated to long-term fiscal balance. The focus on annual cash flows—the amounts of funds into and out of a program during a fiscal year—may not reflect the government’s cost because the time between the extension of the insurance, the receipt of premiums, the occurrence of an insured event, and the payment of claims may extend over several fiscal years. For the flood insurance program, cash-based budgeting may not provide the information necessary to signal emerging problems, make adequate cost comparisons, or control costs. For example, under its current practices, the program provides subsidized policies without explicitly recognizing the potential cost to the government. Under current policy, the Congress has authorized subsidies to be provided to a significant portion of the total policies in force, without providing annual appropriations to cover the potential cost of these subsidies. The program, as designed, does not charge a premium sufficient to cover its multiyear risk exposure. As a result, not only is the program actuarially unsound, but also the size of the shortfall is unknown. This is a concern that the administration has recognized and identified as a financial challenge to the flood insurance program. The use of accrual-based budgeting for the flood insurance program has the potential to overcome a number of the deficiencies in cash-based budgeting.
Accrual-based budgeting (1) recognizes transactions or events when they occur, regardless of cash flows; (2) matches revenues and expenses whenever it is reasonable and practicable to do so; (3) recognizes the cost for future insurance claim payments when the insurance is extended; and (4) provides a mechanism for establishing reserves to pay those costs. In short, because of the time lag between the extension of an insurance commitment, the collection of premiums, and the payment of claims, measuring the financial condition of the flood insurance program by comparing annual premium income and losses creates a budgetary distortion. That distortion, together with the misinformation it conveys, could be reduced or eliminated by accrual-based budgeting. In our 1997 report, we pointed out that developing accrual-based budgets would be challenging, requiring the development of models to generate reasonably reliable cost estimates of the risks assumed by federal insurance programs. Nevertheless, the potential benefits to the flood insurance program, as well as other federal insurance programs, warrant the effort to develop these risk-assumed cost estimates. We suggested that the Congress consider encouraging the development and subsequent reporting of annual risk-assumed cost estimates for all federal insurance programs. At this time, the flood insurance program is still using cash-based budgeting for reporting its financial performance. We continue to believe that the development of accrual-based budgets for the flood insurance program would be a valuable step in developing a more comprehensive approach for reporting on the operations and real costs of this program. The National Flood Insurance Program has raised financial concerns because, over the years, it has lost money and at times has had to borrow funds from the U.S. Treasury. Two reasons—policy subsidies and payments for repetitive losses—have been consistently identified in our past work and by FEMA as explanations for the program’s financial challenges. First, the flood insurance program has sustained losses, and is not actuarially sound, largely because many policies in the program are subsidized. The Congress authorized the program to make subsidized flood insurance rates available to owners of structures built before a community’s FIRM was prepared. For a single-family pre-FIRM property, subsidized rates are available for the first $35,000 of coverage, although any insurance coverage above that amount must be purchased at actuarial rates. These pre-FIRM structures are generally more likely to sustain flood damage than later structures because they were not built according to the program’s building standards. The average annual premium for a subsidized policy is $637, representing about 35-40 percent of the true risk premium for these properties. According to flood insurance program officials, about 29 percent of the 4.4 million policies in force are currently subsidized. Although this percentage of subsidized policies is substantially lower than it was in the past, it still results in a significant reduction in revenues to the program. Program officials estimate that the total premium income from subsidized policyholders is currently about $500 million per year less than it would be if these rates had been actuarially based and participation remained the same.
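Taken at face value, these figures imply a modest average revenue gap per subsidized policy; a minimal back-of-the-envelope sketch, treating the officials' estimates as given:

    # Implied average annual revenue gap per subsidized policy,
    # using only the program officials' figures cited above.
    policies_in_force = 4_400_000
    subsidized_share = 0.29
    annual_revenue_shortfall = 500_000_000  # dollars, officials' estimate
    subsidized_policies = policies_in_force * subsidized_share  # ~1.28 million
    gap_per_policy = annual_revenue_shortfall / subsidized_policies
    print(f"${gap_per_policy:,.0f} per subsidized policy per year")  # about $392

The per-policy gap implied here is smaller than a simple comparison of the $637 average premium with a full-risk rate would suggest, in part because coverage above the first $35,000 is already priced at actuarial rates.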
Originally, funds to support subsidized premiums were appropriated for the flood insurance program; however, since the mid-1980s no funds have been appropriated, and the losses resulting from subsidized policies must be borne by the program. As we reported in July 2001, increasing the premiums charged to subsidized policyholders to improve the program’s financial health could have an adverse impact. Elimination of the subsidy on pre-FIRM structures would cause rates on these properties to rise, on average, to more than twice the current premium rates. Program officials estimate that elimination of the subsidy would result in an annual average premium of about $1,300 for pre-FIRM structures. This would likely cause some pre-FIRM property owners to cancel their flood insurance. Cancellation of policies on these structures—which are more likely to suffer flood loss—would in turn increase the likelihood of the federal government having to pay increased costs for flood-related disaster assistance to these properties. The effect on the total federal disaster assistance costs of phasing out subsidized rates would depend on the number of policyholders who would cancel their policies and the extent to which future flood disasters affecting those properties occurred. Thus, it is difficult to estimate whether the increased costs of federal disaster relief programs would be less than, or more than, the cost of the program’s current subsidy. In addition to revenue lost because of subsidized policies, significant costs to the program result from repetitive loss properties. According to FEMA, about 38 percent of all claims historically, and about $200 million annually, represent repetitive losses—properties having two or more losses greater than $1,000 within a 10-year period. About 45,000 buildings currently insured under the program have been flooded on more than one occasion and have received flood insurance claims payments of $1,000 or more for each loss. Over the years, the total cost of these multiple-loss properties to the program has been about $3.8 billion. Although repetitive loss properties represent about one-third of the historical claims, these properties make up a small percentage of all program policies. A 1998 study by the National Wildlife Federation noted that repetitive loss properties represented only 2 percent of all properties insured by the program, but they tended to have damage claims that exceeded the value of the insured structure, and most were concentrated in special flood hazard areas. For example, nearly 1 out of every 10 repetitive loss homes has had cumulative flood loss claims that exceeded the value of the house. Furthermore, over half of all nationwide repetitive loss property insurance payments had been made in Louisiana and Texas. About 15 states accounted for 90 percent of the total payments made for repetitive loss properties. The National Flood Insurance Program faces challenges not only with its financial condition but also in achieving one of the purposes for which it was created—to make flood insurance the mechanism for property owners to cover flood losses. Participation rates—the percentage of structures in special flood hazard areas that are insured—provide a measure of the degree to which the owners of properties vulnerable to flooding are protected from financial loss through insurance, the financial risk to the government from flood-related disaster assistance is decreasing, and the program is obtaining high levels of premium income.
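As a simple illustration of this participation measure, using figures from the DeKalb County study cited below:

    # Participation rate: insured structures as a share of all
    # structures in the special flood hazard areas (figures from
    # the DeKalb County study discussed below).
    structures_in_sfha = 17_000
    insured_structures = 3_100
    participation_rate = insured_structures / structures_in_sfha
    print(f"{participation_rate:.0%}")  # about 18 percent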
The rate of participation in the program, however, may be low. In its fiscal year 2004 budget request, the administration noted that less than half of the eligible properties in flood areas participate in the program, a participation rate that was significantly lower than the nearly 90 percent participation rate for wind and hurricane insurance in at-risk areas. No comprehensive data are available to measure nationwide participation rates. However, various studies have identified instances where low levels of participation existed. For example:
A 1999 DeKalb County, Georgia, participation study determined that of over 17,000 structures in the special flood hazard areas, about 3,100—18 percent—had flood insurance.
A 1999 FEMA post-disaster study of 11 counties in Vermont found that 16 percent of homes sampled in the special flood hazard areas had flood insurance.
A 1999 study by the Strategic Advocacy Group of two counties in Kentucky that had experienced flood disasters found that flood insurance was in force for 52 percent of homes mortgaged since 1994 and was in force for 30 percent of homes mortgaged before 1994.
An August 2000 FEMA Inspector General study noted that statistics from North Carolina showed that of about 150,000 structures in special flood hazard areas, 33 percent were covered by flood insurance.
FEMA estimates that one-half to two-thirds of those structures in special flood hazard areas do not have flood insurance coverage because the uninsured owners either are not aware that homeowner’s insurance does not cover flood damage or do not perceive the serious flood risk to which they are exposed. One area of flood insurance participation that should not be of concern, yet is, involves properties for which the purchase of flood insurance is mandatory. Flood insurance is required for properties located in flood-prone areas of participating communities for the life of mortgage loans made or held by federally regulated lending institutions, guaranteed by federal agencies, or purchased by government-sponsored enterprises. No definitive data exist on the number of mortgages meeting these criteria; however, according to program officials, most mortgages made in the country meet the criteria, and for those in a special flood hazard area, the property owners would have to purchase and maintain flood insurance over the life of the loan. The level of noncompliance with this mandatory purchase requirement is unknown. As we reported in June 2002, federal banking regulators and government-sponsored enterprises believe noncompliance is very low on the basis of their bank examinations and compliance reviews. Conversely, flood insurance program officials view noncompliance with the mandatory purchase requirement to be significant, based on aggregate statistics and site-specific studies that indicate that noncompliance is occurring. Neither side, however, is able to substantiate its claim with statistically sound data that provide a nationwide perspective on noncompliance. Data we collected and analyzed for our June 2002 report help address some concerns with the issue of noncompliance, but the issue remains unresolved. We analyzed available flood insurance, mortgage purchase, and flood zone data to determine whether noncompliance was a concern at the time of loan origination.
Our analysis of mortgage and insurance data for 471 highly flood-prone areas in 17 states showed that, for most areas, more new insurance policies were purchased than mortgages issued, which suggests noncompliance was not a problem in those areas at the time of loan origination. However, data to determine whether insurance is retained over the life of loans are unavailable, and this issue remains unresolved. There are indications that some level of noncompliance exists. For example, an August 2000 study by FEMA’s Office of Inspector General examined noncompliance for 4,195 residences in coastal areas of 10 states and found that 416—10 percent—were required to have flood insurance but did not have it. Flood insurance program officials continue to be concerned with required insurance policy retention and are working with federal banking regulatory organizations and government-sponsored enterprises to identify actions that can be taken to better ensure that borrowers renew flood insurance policies annually, as required. The administration and the Congress have recognized the challenges facing the flood insurance program and have proposed actions to improve it. These actions include the following:
Reducing or eliminating subsidies for certain properties. In the fiscal year 2004 budget request, the administration proposed ending premium subsidies for second homes and vacation properties. According to flood insurance program officials, this change would affect 30 percent of the properties currently receiving subsidized premiums and would increase revenue to the program by $200 million annually. Additionally, program officials plan to increase the rates on all subsidized properties by about 2 percent in May 2003.
Changing premium rates for repetitive loss properties. Two bills—H.R. 253 and H.R. 670—that would, among other things, change the premiums for repetitive loss properties have been introduced to amend the National Flood Insurance Act of 1968. Under these bills, premiums charged for such properties would reflect actuarially based rates if the property owner has refused a buyout, elevation, or other flood mitigation measure from the flood insurance program or FEMA.
Improving efforts to increase program participation. The administration has identified three strategies it intends to use to increase the number of policies in force: expanded marketing, program simplification, and increasing lender compliance. With regard to lender compliance, DHS plans to conduct an education effort with financial regulators about the mandatory flood insurance requirements for properties with mortgages from federally regulated lenders. Additionally, DHS plans to evaluate the program’s incentive structure to attract more participation in the program.
Conducting a remapping of the nation’s flood zones. Many of the nation’s FIRMs are old and outdated, and for some communities FIRMs have never been developed. The administration has initiated a multiyear, $1 billion effort to map all flood zones in the country and reduce the average age of FIRMs from 13 to 6 years.
While we have not fully analyzed these actions, on the basis of a preliminary assessment they appear to address some of the challenges to the flood insurance program, including two of the key challenges—the program’s financial losses and the perceived low level of participation in the program by property owners in flood-prone areas.
Reducing subsidies and repetitive loss properties has the potential to help improve the program’s financial condition, and increasing program participation would better protect those living in at-risk areas and potentially lower federal costs for disaster assistance after flood events. However, as mentioned earlier, actions such as increasing premiums to subsidized policyholders could cause some of these policyholders to cancel their flood insurance, resulting in lower participation rates and possibly raising federal disaster assistance costs. The remapping of flood zones could potentially affect both participation rates and the program’s financial condition. Remapping could identify additional properties in special flood hazard areas that do not participate in the program and for which DHS will need to undertake efforts to encourage participation. Further, these additional properties may not meet the program’s building standards if they were built before the FIRM that placed them in a special flood hazard area was developed. This could cause the program to offer subsidized insurance rates for these properties, potentially exacerbating the losses to the program resulting from subsidized properties. At the Subcommittee’s request, we have begun a review to examine the remapping effort and its effects, and will report on the results later this year. None of these proposals, however, addresses the need to move the program from its current cash-based budgeting to accrual-based budgeting for presenting its financial condition. As we noted earlier, the current method of budgeting does not accurately portray the program’s financial condition and does not allow the program to create reserves to cover catastrophic losses and be actuarially sound. If a catastrophic loss occurs, the program may again be placed in the position of having to borrow substantial sums from the Treasury in order to satisfy all claims losses. One additional challenge facing the flood insurance program relates to its placement in DHS. As we discussed in a January 2003 report on FEMA’s major management challenges and program risks, the placement in DHS of FEMA and programs such as flood insurance that have missions not directly related to security represents a significantly changed environment under which such programs will be conducted in the future. DHS is under tremendous pressure to succeed in its primary mission of securing the homeland, and the possibility exists that the flood insurance program may not receive adequate attention, visibility, and support as part of the department. For the flood insurance program to be fully successful, it will be important for DHS management to ensure that sufficient management capacity and accountability are provided to achieve the objectives of the program. In this regard, the President’s fiscal year 2004 budget request notes that additional reforms to the flood insurance program are being deferred until the program is incorporated into DHS. This incorporation has now occurred, and congressional oversight—such as through hearings like this one today—should help to ensure that DHS maintains appropriate focus on managing and improving the flood insurance program and championing the reforms necessary to achieve the program’s objectives. For further information on this testimony, please contact JayEtta Z. Hecker at (202) 512-2834 or William O. Jenkins at (202) 512-8777. Individuals making key contributions to this testimony included Christine E.
Bonham, Lawrence D. Cluff, Kirk Kiester, John T. McGrail, and John R. Schulze. | Floods have been, and continue to be, the most destructive natural hazard in terms of economic loss to the nation. The National Flood Insurance Program is a key component of the federal government's efforts to minimize the damage and financial impact of floods. The program identifies flood-prone areas of the country, makes flood insurance available in the nearly 20,000 communities that participate in the program, and encourages floodplain management efforts. Since its inception in 1969, the National Flood Insurance Program has provided $12 billion in insurance claims to owners of flood-damaged properties, and its building standards are estimated to save $1 billion annually. The program has been managed by the Federal Emergency Management Agency, but along with other activities of the agency, it was recently placed into the Department of Homeland Security. GAO has issued a number of reports on the flood insurance program and was asked to discuss the current challenges to the widespread success of the program. The program faces the following challenges in operating effectively and protecting property owners from flood losses.
Improving information on the program's financial condition: Cash-based budgeting, which focuses on the amount of funds that go in and out of a program in a fiscal year, obscures the program's costs and does not provide information necessary to signal emerging problems, such as shortfalls in funds to cover the program's risk exposure. Accrual-based budgeting better matches revenues and expenses, recognizes the risk assumed by the government, and has the potential to overcome the deficiencies of cash-based budgeting.
Reducing losses to the program resulting from policy subsidies and repetitive loss properties: The program has lost money and is not actuarially sound because about 29 percent of the policies in force are subsidized but appropriations are not provided to cover the subsidies. Owners of structures built before the flood zone was included in the program pay reduced premiums that represent only about 35-40 percent of the true risk premium. Further, repetitive loss properties—properties with two or more losses in a 10-year period—add to program losses, as they represent 38 percent of claims losses but account for 2 percent of insured properties.
Increasing property owner participation in the program: The administration has estimated that less than 50 percent of eligible properties in flood plains participate in the program. Additionally, even when the purchase of insurance is mandatory, the extent of noncompliance with the mandatory purchase requirement is unknown and remains a concern.
Actions have been initiated or proposed by the administration or in the Congress to address some of the challenges. However, the effect of some actions on the program is not clear. For example, reducing subsidies may cause some policyholders to cancel their policies, reducing program participation and leaving them vulnerable to financial loss from floods. Further, placement of the program within the Department of Homeland Security has the potential to decrease the attention, visibility, and support the program receives. |
A generally accepted evaluation criterion is that any comparative study of private and public prisons should be based upon the selection and analysis of similar facilities. For example, the private and public prisons selected for comparison should be as similar as possible regarding design and capacity, security level, and types of inmates. Otherwise, any comparative analysis of operational costs or quality of service could be skewed. On a per-inmate basis, for instance, higher security prisons can be expected to have higher operating costs than lower security prisons because the former type of facilities generally have higher staff-to-inmate ratios. Even if similar private and public prisons are available for study, a comparison of operational costs can still present difficulties in ensuring that all costs, direct and indirect, are consistently and fully quantified. Possible difficulties can arise due, in part, to differences in budgeting and accounting practices between and even within the private and public sectors. Determining the appropriate allocation of corporate headquarters overhead and government agency overhead, for instance, can be particularly difficult. Comparing the quality of service at private and public prisons also presents challenges and, in fact, can be more difficult than comparing costs. The concept of “quality” is not easily defined or measured. For example, although the American Correctional Association (ACA) sets accreditation standards for prisons, accredited facilities can vary widely in terms of overall quality. According to ACA officials, such variances occur because ACA accreditation means that a facility has met minimum standards. Generally, however, assessments of quality can take several approaches. For example, one is a compliance approach, that is, assessing whether or to what extent the prisons being compared are in compliance with applicable ACA standards and/or other relevant policy and procedural guidelines and/or court orders and consent decrees. Another approach is to assess performance measures. For example, measures of safety could include assault statistics, safety inspection results, and accidental injury reports. The difficulties of comparatively assessing private and public prisons—regarding operational costs and/or quality of service—are further compounded if the prisons are not located in the same state. Each state and its correctional system have characteristics and conditions that must be recognized in conducting interstate analyses. For example, economic conditions and cost-of-living factors can vary by state and by regions of the nation. Similarly, each state’s correctional system may be somewhat unique regarding the extent of overcrowding, the history of court intervention, the emphasis given to ACA accreditation, and the presence or influence of various other factors. As an illustration, with respect to the five studies we reviewed, appendix III presents state-specific details regarding some relevant factors that could affect interstate comparisons of prison costs and/or quality of service. On the basis of extensive literature searches and inquiries to knowledgeable corrections officials and criminal justice researchers (see app. I), we identified five studies completed since 1991 that compared private and public prisons in reference to operational costs and/or quality of service.
The following is a brief overview of each study:
Texas study (1991): Conducted by the Texas Sunset Advisory Commission, this study compared (1) the actual costs of operating four privately managed prerelease minimum-security facilities for male prisoners and (2) the estimated costs of operating similar but hypothetical public facilities in Texas. The study did not empirically assess quality of service.
New Mexico study (1991): Funded by the National Institute of Justice, the Bureau of Prisons, and the National Institute of Corrections, this study compared the quality of service at three multicustody facilities (minimum- to maximum-security levels) for women, i.e., a private prison and a state-run prison in New Mexico and a federal prison in West Virginia. The study did not include a detailed analysis of operational costs.
California study (1994): Conducted by California State University with funding from the California Department of Corrections, this study focused on three community correctional facilities for males. All three facilities were operated (under contracts with the state) as for-profit alternatives to state-operated prisons. One facility (medium-security) was operated by a private corporation, the second (high-security) by a local police department, and the third (low- to medium-security) by a city administration. The study compared operational and construction costs and quality of service. More specifically, regarding costs, the study compared the three facilities (1) with one another and (2) with other state correctional facilities. Both operational and construction costs were included in the comparison of the three facilities with one another. The statewide comparison did not include construction costs for the California Department of Corrections facilities. Regarding quality of service, the study compared the three facilities with two state facilities.
Tennessee study (1995): Conducted in two parts, one for operational costs and one for quality of service, by the Tennessee state legislature, this study compared three of Tennessee’s multicustody (minimum- to maximum-security) prisons for male inmates. One prison was privately managed, and the other two were state-run prisons.
Washington study (1996): At the time of this study, the state of Washington had no privately run prisons but was considering the feasibility of such an approach. Therefore, the study, conducted by the Washington State Legislative Budget Committee, analyzed pertinent information available in other states. Regarding operational costs, for example, the study looked at the three Tennessee facilities (mentioned above) as well as three multicustody male prisons in Louisiana (two private and one public). Regarding quality of service, the study compared the three Tennessee facilities, the three Louisiana facilities, and two Washington facilities.
In summary, the California, Tennessee, and Washington studies assessed operational costs and quality of service. The Texas study analyzed operational costs only, and the New Mexico study analyzed quality of service only. While the five studies varied in terms of methodological rigor, they do, to differing degrees, offer some indication of comparative operational costs and/or quality of service in the specific settings they assessed. However, regarding operational costs, because the studies reported little difference and/or mixed results in comparing private and public facilities, we could not conclude whether privatization saved money.
Similarly, regarding quality of service, of the two studies that made the most detailed comparative assessments, one study (New Mexico) reported equivocal findings, and the other study (Tennessee) reported no difference between the compared private and public facilities. Four of the five studies (Texas, California, Tennessee, and Washington) assessed operational costs of private and public correctional facilities. In three of the studies (California, Tennessee, and Washington), comparisons of private and public facilities indicated little or some differences in operational costs. Only the Texas study reported finding substantially lower (14- to 15-percent) operational costs for private versus public correctional facilities. Using fiscal year 1990 data, the Texas study reported average daily operational costs of $36.76 per inmate for the private facilities, compared with estimates of $42.70 to $43.13 for the public facilities. However, the results of the Texas study are not fully based on actual experience. Rather, the study compared existing private facilities (prerelease institutions) to hypothetical public facilities. This type of hypothetical comparison does not allow for consideration of any unanticipated changes in components such as staffing levels, other expenses, rate of occupied bed space, or many other factors that could affect actual costs. Changes in any single assumption, or set of assumptions, for the hypothetical institutions could change the size or even the direction of the differences in the comparative operational costs. Based upon our experience in designing and assessing evaluation methodologies, we found the Tennessee study (of the studies we reviewed) to have the most sound and detailed comparison of operational costs of private and public correctional facilities. The study compared three mixed-population (minimum- to maximum-security) institutions (one private and two public). All three facilities were located in Tennessee, and all three had relatively comparable inmate populations, in terms of numbers and most demographics, except race. Also, direct and indirect costs were considered in the analysis, and representatives from both the private and the public facilities agreed on the cost components and relevant adjustments prior to data collection. The analysis showed very little difference in average inmate costs per day among the three facilities—$35.39 for the private facility and $34.90 and $35.45, respectively, for the two public facilities. The Washington study, which made intrastate comparisons of correctional facilities (minimum- and maximum-security populations) in Tennessee and Louisiana, also found very little difference in the operational costs of private and public facilities. For Tennessee, the private facilities’ average daily operational costs per inmate ($33.61) were lower (about 7 percent) than the comparable costs for the two public facilities studied ($35.82 and $35.28, respectively). It should be noted that the Tennessee facilities, which were analyzed and reported on in the 1996 Washington state study, were the same facilities that are discussed in the 1995 Tennessee study cited above. For Louisiana, the average inmate costs per day for the two private facilities studied were $23.75 and $23.34, respectively, and the comparable daily operational costs for the public facility studied were $23.55 per inmate. 
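The studies' headline figures all reduce to the same metric—average daily operational cost per inmate. A minimal sketch of how such a figure is typically derived (input values are hypothetical, not taken from any of the studies):

    # Average daily operational cost per inmate: total operational
    # costs (direct plus indirect) divided by inmate-days of custody.
    # All input values below are hypothetical.
    annual_operational_cost = 12_900_000   # direct + indirect, dollars
    average_daily_population = 1_000       # inmates
    inmate_days = average_daily_population * 365
    cost_per_inmate_day = annual_operational_cost / inmate_days
    print(f"${cost_per_inmate_day:.2f} per inmate per day")  # about $35.34

Because the metric depends on which direct and indirect cost components are counted, comparisons are meaningful only when the facilities being compared use consistent cost components, as the Tennessee study did.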
The 1994 California study compared three for-profit community correctional facilities located in that state—one run by a private firm and two run by local governments. The study found that the private facility’s average annual costs per inmate ($15,578) were higher than comparable costs for one of the government-run facilities ($13,195) but were lower than such costs for the other government-run facility ($16,627). The lower-cost government-run facility had a disproportionate share of drug offenders, which could have affected overall costs. Further, the authors noted that the study’s results must be viewed with some additional caution because of inconsistencies in the underlying or supporting cost figures obtained from different sources within the state. Although comparative costs are very important, they are not the only factors considered by policymakers in deciding the direction or extent of corrections privatization. A principal concern is whether private contractors can operate at lower costs to the taxpayers, while providing the same level of service as the public sector, or an even better one, particularly with respect to safety and security issues. Of the studies we reviewed, two (New Mexico and Tennessee) assessed the comparative quality of service between private and public institutions in much greater detail than the other studies. Both studies used structured data-collection instruments to cover a variety of quality-related topics, including safety and security, management, personnel, health care, discipline reports, escapes, and inmate programs and activities. The New Mexico study reported equivocal findings, and the Tennessee study reported no difference in quality between the compared private and public institutions. The findings in the New Mexico study are difficult to interpret. On the basis of surveys of correctional staff and reviews of institutional records, the study reported that the private prison “outperformed” the public facilities on most of the measured quality dimensions. However, the author noted that the results from one of the data-collection instruments—the inmate surveys—showed an opposite result, with one of the public facilities “outperforming” the private facility on every dimension except inmate activities (e.g., work and training programs). The Tennessee study, in assessing the quality of service at one private and two public prisons, reported that “all three facilities were operated at essentially the same level of performance.” This conclusion was largely based on the results of an operational audit conducted at each of the facilities by an inspection team. Composed of private and public sector members, the team used a structured survey instrument to conduct a detailed review of records, observe operations and practices, and conduct interviews. The Texas study did not empirically assess the quality of service at the private correctional facilities. Rather, the study noted that all four of the privately operated prerelease facilities were in general compliance with 11 of the 16 mandates of court rulings applicable to Texas prisons. Also, the study noted that two of the four private facilities had received ACA accreditation, and the other two were still involved in the accreditation process. ACA officials told us, however, that ACA accreditation means that a facility has met minimum standards and that accredited facilities can vary widely in terms of overall quality.
To reiterate, because it was based on hypothetical public facilities, the Texas study made no attempt to comparatively assess the quality of service across private and public facilities in Texas. The California study, in assessing quality of service, used inmate and staff surveys to compare the three community correctional facilities with two state prisons. However, the results could not be generalized to the inmate or staff populations of the respective facilities because small, nonrandom samples were used. The California study also attempted to compare the three community correctional facilities with the state’s other correctional institutions with respect to recidivism rates. The study reported that, of the three community correctional facilities, one of the publicly managed facilities was “most impressive” in performance based on recidivism rates. Sufficient data were not available to adequately complete the analysis comparing the inmates released from the community correctional facilities with inmates released from other correctional institutions in the state. The Washington study assessed the quality of service at the three facilities (one private and two public) in Tennessee, three facilities (two private and one public) in Louisiana, and two facilities (both public) in Washington. While not as detailed as the New Mexico and the Tennessee studies, the Washington study concluded that the private and public prisons studied within the respective states (Tennessee and Louisiana) were generally similar in quality of service. However, the study noted that Washington’s two state-run facilities had more counselors per inmate than the other states’ facilities. The few studies that have compared the operational costs and/or the quality of service of private and public prisons provide little information that is widely applicable to various correctional settings. For example, while these studies compared private and public facilities that generally were similar (in terms of capacities, inmate demographics, etc.), the selected facilities were not necessarily typical or representative of prisons in either the state studied or other jurisdictions. Also, a variety of factors that relate to a given location or correctional system may render the experience of one jurisdiction with private prisons very different from that of another. Further, the passage of time could alter the relationship between private and public correctional facilities in terms of costs and quality. For these reasons, among others, the few studies that we reviewed do not permit drawing generalizable conclusions about the comparative operational costs and/or quality of service of private and public prisons. Jurisdictions, such as states, vary on several dimensions that could have an impact on the comparative costs of private versus public prisons and, in turn, affect the generalizability of a given study’s results. First, other states’ correctional philosophies could differ from those of the states studied. Some states’ correctional philosophies are more punitive in nature (as reflected, for example, by higher incarceration rates), whereas others are less punitive and more inclined toward treatment. California, for example, which is one of the states discussed in the five studies we reviewed, generally has had incarceration rates above the national average.
Also, the Washington study noted that the adjusted estimated per-bed costs for a state-run facility in Washington ($60,400) were almost double Florida’s costs ($33,900) due, in part, to state differences in operating and programming approaches. Second, jurisdictions also vary in relation to a variety of economic factors that could affect the relationship between private and public prison costs. Differences in the costs of living could affect both private and public prison costs—and, in some jurisdictions, could affect one more than the other. For example, a labor shortage could result in higher operational costs for private and public prisons. Third, in some jurisdictions, the inmate population to be incarcerated in private facilities may be different from those inmate populations in the five studies. Three of the five studies focused on inmate populations that were not representative of the broader prison population—prerelease prisoners (Texas study), female prisoners (New Mexico), and those housed in community correctional facilities (California). Only two studies (Tennessee and Washington) focused on costs in relation to facilities housing a more mainstream prisoner population. Finally, regarding both operational costs and quality of service, the comparative performance of private versus public correctional facilities is not likely to be static. Changes over time could alter the comparative performance. For example, the first year of a new prison—either private or public—could reflect expenses for training inexperienced staff as well as hiring replacements for those unsuited to the work. Inexperienced staff could also have a negative effect on some measures of quality. Also, in the initial years of managing a prison, a private firm may choose to bill for its services at rates below costs to obtain or extend a contract. As time goes by, however, the contractor would have to change its cost-recovery practices to remain a viable business entity. Similarly, over time, public prisons could become more cost efficient in response to competition from the private sector. For instance, this conclusion was reached by the Washington study, which was commissioned to help the state determine the potential benefits of privatization. The results of studies comparing private and public prisons obviously are of interest to any jurisdiction whose policymakers are deciding whether or to what extent corrections should be privatized. Ideally, to be most useful, such studies should be based upon representative samples of prisons, with sufficient statistical controls in place to measure and account for any differences. However, because the number of private correctional facilities is still relatively small (see app. IV)—and given that each stand-alone facility (whether private or public) may have some unique characteristics—conducting a truly optimal comparative evaluation may be impractical. Nonetheless, the five studies completed since 1991 offer several lessons learned to guide future studies, even if such studies focus on comparing only one private facility and one public facility. In reviewing the relative strengths and weaknesses of each study to formulate lessons learned, we largely relied on our extensive experience in designing and assessing evaluation methodologies—that is, our experience with generally accepted methodological standards and practices.
Specifically, on the basis of our review of the five studies, we identified the following lessons learned:
In considering the extent to which corrections should be privatized, a key question is whether private contractors can operate at lower costs to taxpayers, while providing at least the same level of service as the public sector, particularly with respect to security and safety issues. Thus, it is important that any study focus on both operational costs and quality of service. Two of the studies we reviewed (Texas and New Mexico) did not have this dual focus.
The best approach for evaluating operational costs is to study existing comparable facilities, not hypothetical facilities. One of the studies we reviewed (Texas) used hypothetical similar public facilities.
Generally, there is more than one way to objectively measure or compare prison security, safety, order, and various other dimensions that constitute quality of service. In this regard, it is important to use multiple indicators or data sources to provide cross-checks. The New Mexico study, for example, illustrates that divergent results can be reached by using one data source (e.g., inmate surveys) versus another source (e.g., staff surveys).
Comparative findings with respect to operational costs and/or quality of service in any given year may not hold true for other years. Similarly, because trends are not self-perpetuating, even findings based on multiyear comparisons must be carefully considered. Nonetheless, all other factors being equal, comparative evaluations based upon several years’ data potentially have more value than evaluations based upon 1 or 2 years of data. Nearly all of the five studies we reviewed were based upon only 1 or 2 years of data.
In this vein, the Director of the Federal Bureau of Prisons (BOP) has stated:
“I know that the Attorney General and . . . are very interested in working carefully with us in the Bureau of Prisons to track, on these new contracts, very carefully, what the cost impact truly is, because there are a lot of hidden costs in privatization . . . [T]here has never been, we don’t believe, a real good cost analysis to determine, apples to apples, what is the cost of a traditional prison system and private contracting. The private contractors claim they can do it at great savings, and so we are very interested in monitoring the ones that we have projected for the next few years and determining . . . how well the taxpayers are being served on either side.”
The BOP Director noted that by contracting out the management of selected facilities incarcerating general inmate populations, BOP was moving to the “next level of privatization,” which would provide a good basis for comparative evaluations focusing on “like or similar institutions.” In this regard, the lessons learned from previous comparative studies should be useful to BOP if the federal privatization initiative is revisited. We obtained oral comments on a draft of this report from BOP and written comments from the Department of Justice’s Office of Justice Programs; the National Council on Crime and Delinquency; and a Northeastern University (Boston, MA) professor of criminal justice, who is a nationally recognized authority on corrections administration. BOP commented that the report was accurate, well done, and useful. The Office of the Assistant Attorney General, Office of Justice Programs, concurred with the report and noted that additional study of the privatization of correctional facilities is needed.
The National Institute of Justice, a component agency of the Office of Justice Programs, commented that the report appeared “to be as comprehensive as the available data permits.” Also, the Institute commented that the report’s discussion of the strengths and weaknesses of the five studies “is excellent.” In commenting on the draft, the Executive Vice President, National Council on Crime and Delinquency, said that the report is accurate in concluding that few studies have been completed to date and that these studies have methodological problems that limit understanding the actual cost-benefits of privatization. He noted, however, that our report could place more emphasis on the Tennessee study, which is the most rigorous study to date. Although we concur with the reviewer’s assessment of the study, our objective was to provide similar information for each of the studies reviewed. Further, he noted that the report could give more emphasis to evaluating the claims of private providers that they can construct new facilities faster and cheaper than public entities. Since the studies reviewed did not assess these claims, this issue was beyond the scope of our work. The Northeastern University reviewer commented that (1) our evaluation synthesis was an important contribution to the corrections field, (2) the report’s conclusion that the five studies offer little generalizable guidance for other jurisdictions regarding the comparative cost and quality of service of private and public correctional facilities was “right on point,” and (3) our cautions concerning interstate comparisons were “well-founded.” However, the reviewer underscored the need also to focus privatization research on crime reduction and various philosophical questions underpinning the privatization debate. These issues were beyond the scope of our work. In addition, the reviewer suggested that it would be valuable to the corrections field if the report included a short, concise statement describing the critical dependent and independent variables that should be considered in comparative analyses of private and public corrections facilities. Because of variations in available data and possible measurement adjustments required in specific research situations, we are hesitant to prescribe what variables should be studied. The studies reviewed, however, suggest possible variables. Furthermore, citing the difficulties researchers have in accessing data from private firms, the reviewer proposed that the report contain a recommendation that would facilitate researchers’ access to proprietary information needed for evaluation of private corrections. In the case of federal corrections-related contracts, we would likely have access to data, but we have considerably less jurisdiction at the state level. We are providing copies of this report to the Chairman and Ranking Minority Member of the House Judiciary Committee; the Attorney General; the Director, BOP; and other interested parties. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix V. Please contact me on (202) 512-8777 if you or your staff have any questions. In initiating this review, our specific objectives were to answer the following key questions: What studies, completed since 1991, have compared the operational costs and/or the quality of service of private and public prisons?
From these studies, what can be concluded with respect to the operational costs and/or the quality of service of comparable private and public facilities? Are the results of these studies generalizable to correctional systems in other jurisdictions? From these studies, what are the “lessons learned” to help guide future comparative studies of private and public prisons? To identify relevant studies, we requested and obtained literature searches from the National Criminal Justice Reference Service and the National Institute of Corrections. Also, within the Department of Justice, we contacted knowledgeable officials of the component agencies responsible for managing federal correctional facilities—the Federal Bureau of Prisons (BOP), U.S. Marshals Service, and Immigration and Naturalization Service. Similarly, to query knowledgeable state agency officials, we first contacted the Director of the Private Corrections Project at the University of Florida’s Center for Studies in Criminology and Law to obtain information about the number of privatized facilities in each state (see app. IV). Then, for each applicable state, we contacted officials at the state’s corrections department and/or corrections research agency and inquired about any completed, ongoing, or planned studies comparing private and public prisons. Further, we contacted the National Council on Crime and Delinquency and various nationally recognized researchers in academia.
Assessment of Studies
We assessed each of the five relevant studies that we identified. Our work in assessing the conclusions or results of each study and their generalizability—as well as identifying any lessons learned—can be characterized as a form of evaluation synthesis. By definition, an evaluation synthesis is a systematic procedure for organizing findings from several disparate evaluation studies. That is, the procedure addresses key questions or issues by assessing existing studies or evaluations, rather than by conducting primary data collection. In reviewing the studies, we focused on the findings and conclusions of each study and evaluated these in relation to the methodology used by the respective study. As an initial and fundamental inquiry, for instance, we focused on the similarity of the private and public facilities being compared in each study. That is, we wanted to determine (1) whether the facilities were reasonably well matched in relation to design and capacity, security level, inmate demographics, and other relevant institutional characteristics and (2) whether the facilities were actually in operation and, if so, for what length of time prior to the comparative evaluation. In reviewing each applicable study’s comparative evaluation of operational costs, we focused on whether (1) both direct and indirect cost components were considered for the private and public facilities, (2) actual data versus estimates were used, and (3) consistent cost components were used. We did not independently verify any of the cost data presented in the studies. In reviewing each applicable study’s comparative evaluation of quality of service, we focused on whether the private and the public facilities were consistently evaluated in the respective study. That is, in reference to both the private and the public facilities compared in a given study, we were interested in whether the same or similar methodology and data sources were used to evaluate quality of service.
Thus, we did not attempt to generically define “quality of service”; rather, we accepted the definition and/or evaluation criteria used in each applicable study. Also, we did not independently verify the reported quality measures or outcomes, such as safety and incident data and the extent of rehabilitation and treatment programs for inmates. We reviewed the relative strengths and weaknesses of each study to formulate lessons learned for future comparative studies. In doing so, we largely relied on our extensive experience in designing and assessing evaluation methodologies—that is, our experience with generally accepted methodological standards and practices. Initially, in September 1995, to obtain a practical understanding of privatization issues, we visited two privately operated facilities housing federal inmates. These facilities, which held deportable aliens, are located in west Texas and were operated by private firms under the general authority of intergovernmental agreements entered into by BOP and the respective city governments of Big Spring and Eden. We toured the facilities and interviewed managers and staff of the private firms. Also, we interviewed the on-site federal monitors. Further, to obtain additional overview information on privatization issues, we interviewed a senior executive (the Director for Strategic Planning) of one of the nation’s largest private corrections firms. This official was a former director of BOP. We identified five studies completed since 1991 that compare private and public correctional facilities in relation to operational costs and/or quality of service. Table II.1 briefly describes each of these studies. Following the table, separate sections respectively provide more details about each study.
Table II.1: Citation, Evaluation Parameters, and Reported Results of Five Studies Comparing Private and Public Prisons
“Information Report on Contracts for Correction Facilities and Services,” Recommendations to the Governor of Texas and Members of the Seventy-Second Legislature, Sunset Advisory Commission, Final Report, Texas Sunset Advisory Commission (Austin: 1991). Operational costs were studied. Four private, prerelease, minimum-security prisons (500 beds each) for males were compared with hypothetical public facilities in Texas. The private prisons’ operational costs were 14 to 15 percent less than the costs of the hypothetical state facilities. Fiscal year 1990 data were analyzed. An empirical assessment of quality of service was not conducted. The study noted that two of the four private facilities had received ACA accreditation, and the other two were still involved in the accreditation process. Also, the study noted that all four facilities were in compliance with 11 of the 16 mandates of court rulings applicable to Texas prisons.
Charles H. Logan, Well Kept: Comparing Quality of Confinement in a Public and a Private Prison, National Institute of Justice (1991). A detailed analysis of operational costs was not conducted (not applicable). Quality of service was studied. Three multicustody facilities (minimum- to maximum-security) for women were compared: a private prison and a state-run prison in New Mexico and a federal prison in West Virginia. The data analyzed for the private facility covered June 1989 through November 1989; data for the state facility covered June 1988 through November 1988; and the federal data covered December 1987 through May 1988. The results of the study depended on the data-collection instruments that were employed.
For example, data from staff surveys and official records showed that the private prison “outperformed” the state and federal prisons across nearly all dimensions. However, inmate survey data showed that one of the public facilities “outperformed” the private facility on every dimension except “activity” (e.g., work and training programs). (continued) Dale K. Sechrest and David Shichor, Final Report: Exploratory Study of California’s Community Corrections Facilities, California State University (San Bernardino: 1994). Operational and construction costs were studied. Three for-profit community correctional facilities—one managed privately (medium-security) and two managed by local governments (low- to medium- and high-security)—for males were compared with one another and with other state correctional facilities. The private facility’s average annual costs per inmate ($15,578) were higher than comparable costs for one of the government-run facilities ($13,195) but were lower than such costs for the other government-run facility ($16,627). Fiscal year 1991-1992 data were analyzed. Due to methodological limitations, conclusions could not be reached by comparing the three for-profit community correctional facilities with other state correctional facilities. For example, different cost components were used for the two sets of facilities in the comparison. In addition, it is likely that the universe of the state’s correctional facilities reflected wide-ranging differences concerning inmate populations and services. Quality of service was studied. The same three community correctional facilities for males were compared with two state correctional facilities. Data were collected in summer 1992. Due to methodological limitations, conclusions could not be reached by comparing the community correctional facilities with two state facilities. For example, the results could not be generalized to the inmate or staff populations of the facilities because small, nonrandom samples were used. Cost Comparison of Correctional Centers, Tennessee Legislature Fiscal Review Committee (Nashville: 1995). Operational costs were studied. Three multicustody prisons (minimum- to maximum-security) for males were compared: one private and two state-run prisons. There was little difference in the average daily operational costs per inmate across the three facilities—$35.39 for the private facility, versus $34.90 and $35.45, respectively, for the two public facilities. Data from July 1993 through June 1994 were analyzed. Comparative Evaluation of Privately Managed CCA Prison and State-Managed Prototypical Prison, Tennessee Legislature Select Oversight Committee on Corrections (Nashville: 1995). Quality of service was studied. The same three multicustody prisons for males were compared. There was no difference in quality of service between the private and public facilities. Data from March 1991 through September 1994 were analyzed. Department of Corrections Privatization Feasibility Study, Report 96-2, State of Washington Legislative Budget Committee (Olympia: 1996). Operational costs of the same three Tennessee facilities mentioned above were studied. Data from July 1993 through June 1994 were analyzed. The average daily operational costs per inmate for the private facility ($33.61) were slightly lower than such costs for the two public facilities ($35.82 and $35.28, respectively). (continued) Operational costs of three multicustody facilities in Louisiana (two private and one state-run) for males were studied. 
The average daily operational costs per inmate for the two private facilities were $23.75 and $23.34, respectively, compared with $23.55 for the public facility. Projected data for July 1995 through June 1996 were analyzed. Operational costs of a Washington state prison were compared with the costs for one Tennessee state prison and a Louisiana state prison. The average daily operational costs per inmate for the Washington facility ($44.52) were higher than such costs for the Tennessee and Louisiana facilities ($37.07 and $24.04, respectively). Data analyzed for Washington state covered calendar year 1995; time frames for Tennessee and Louisiana data are mentioned above. Construction costs were studied. The estimated costs for Washington state to construct a planned multicustody public prison for males were compared with a private company’s costs for constructing a similar facility in Florida. The estimated cost per bed for the Washington state facility ($60,400) was approximately double the estimated cost per bed for the Florida facility ($33,900). Data analyzed for Washington state were based on projected July 1998 cost figures; and data analyzed for the Florida facility were based on projected July 1998 cost figures. Quality of service was studied. Three multicustody male facilities in Tennessee (mentioned above), three multicustody male facilities in Louisiana (mentioned above), and two multicustody male facilities in Washington were compared. Site visits showed that all prisons were “clean and appeared to be orderly.” Additional data indicated that the prisons generally were similar regarding quality of service. However, the Washington facilities had more counselors per inmate than the Tennessee and Louisiana facilities. Data analyzed for Tennessee covered 1994 (for review of institutional records) and 1995 (for on-site visits); data analyzed for Louisiana covered 1995; data analyzed for Washington covered 1995. Conducted by the Texas Sunset Advisory Commission, this study involved a comparative assessment of operational costs; an empirical assessment of quality of service was not conducted. The actual costs of operating four privately managed prerelease minimum-security facilities (500 beds each) for male prisoners were compared with the estimated costs of operating similar but “hypothetical” public facilities in the state of Texas. Two of the four prerelease facilities were managed by Corrections Corporation of America, and the other two were managed by Wackenhut Corrections Corporation. The study considered direct and indirect costs to compute operational costs for private and public sector management of the prerelease facilities. Direct costs included items such as salaries and fringe benefits, food, medical services, utilities, and supplies. Also, the study recognized that another direct cost would be the expense of having state corrections agency staff on-site to monitor the contractor’s performance. Indirect costs included salaries and expenses for corrections department executive personnel, an annual audit of facilities, and “other administration items” attributable to the private or state facilities. Construction costs were excluded because the state built the private facilities. Also, the study did not include depreciation expenses and capital outlays, but debt service for construction was included. The study concluded that the state achieved savings from the privatized facilities. 
Because similar state-run prerelease facilities did not exist, the state estimated the costs of operating hypothetical state-run prerelease facilities, based on requirements specified in state statute. Contract provisions stipulated that contractors would receive at least 10 percent less than the estimated costs for the state to operate each facility. Therefore, in a sense, 10-percent “savings” to the state was guaranteed. Cost data for fiscal year 1990 were analyzed. The state estimated that the privatized facilities achieved 14- to 15-percent cost savings (taking into consideration tax revenues paid to state and local authorities) compared with hypothetically equivalent state-run facilities. The average daily operational costs per inmate for the private facilities were $36.76. Because of staffing and construction differences between the contractors, separate costs were estimated for the hypothetical state operation of a facility for each contractor. The estimated average daily operational costs per inmate for the hypothetical state-run facilities were $42.70 for one contractor and $43.13 for the other. (The per-diem differentials alone imply savings of roughly 13.9 and 14.8 percent, respectively; the study’s 14- to 15-percent figures also credit tax revenues paid to state and local authorities.) However, because the state did not operate any prerelease facilities nor did any of its existing facilities have prerelease components, the cost estimates for the state-run facilities were not based on actual state experience. The method assumed no unanticipated changes in components such as salary and other expenses. Thus, an error in one or more assumptions could have resulted in different cost estimates, changing the size or even the direction of estimated differences in private versus public management costs. An empirical assessment of the quality of service was not conducted due to the absence of comparable state facilities. However, the study noted that two of the four private facilities had received ACA accreditation, and the other two were still involved in the accreditation process. Additionally, the study noted that all four facilities were in general compliance with 11 of the 16 mandates of court rulings applicable to Texas prisons.

Funded by the Department of Justice’s National Institute of Justice, Bureau of Prisons, and National Institute of Corrections, this study, of the five we reviewed, made the most systematic effort to address quality of service. However, a detailed cost analysis was not included. The study compared three multicustody (minimum- to maximum-security) women’s facilities—a privately run facility and a state-run facility in New Mexico and a federal facility in West Virginia—across eight dimensions of quality. Data analyzed for the private facility covered June 1989 through November 1989, data for the state facility covered June 1988 through November 1988, and data for the federal facility covered December 1987 through May 1988. The study recognized, at least indirectly, that differences among the facilities regarding age, architecture, and inmate programs made comparisons somewhat difficult to interpret. For example, the private facility was new, the state facility was 4 years old, and the federal facility was about 60 years old. The respective inmate populations were 170 (private), 143 (state), and 814 (federal). The inmates at the New Mexico facilities were generally similar with respect to age, race, and offense type. However, they differed from the federal inmates in race and offense type.
The study indicated that one of the factors enhancing the comparability of the two New Mexico institutions was that both were applying for ACA accreditation. However, ACA officials told us that ACA accreditation means that only minimum standards are met, and since there can be wide variations among facilities in exceeding minimum standards, accreditation should not be used to assume that two or more facilities are comparable. In assessing and comparing the quality of service at the three facilities, the study derived multiple indicators for each of eight quality dimensions. Data sources for all three facilities included various institutional records, such as incident and disciplinary reports as well as work and education records. Also, staff surveys were conducted at all three facilities, and inmate surveys were conducted at the private and the state facilities. In total, the study made 595 comparisons among the institutions using 333 indicators. All of the indicators were available for the private and state prisons, while 131 indicators were available for the federal prison. Thus, the study made three-way comparisons for 131 of the 333 indicators and two-way comparisons (private/state) for the remaining 202 indicators; counting each three-way comparison as three pairwise comparisons yields the 595 total ((3 × 131) + 202 = 595). The study concluded, generally, that “the private prison ‘outperformed’ the state and federal prisons, often by quite substantial margins, across nearly all dimensions.” It noted, however, that results varied by data source. For example, contrary to other sources used in the study, inmate survey data showed that the state facility “outperformed” the private facility in every dimension except “activity” (e.g., work and training programs). While the study did not include a detailed analysis of operational costs, it suggested that the better performance of the private facility was accomplished at lower cost. However, the study offered little evidence to support this assertion. For instance, the full report consisted of 291 pages, with only 2 pages devoted to costs. Without providing any detailed analysis, the report noted that the average daily operational costs per inmate for the private facility were $69.75 in fiscal year 1989-1990, the average daily costs of housing an inmate in federal facilities (nationwide) were $39.67 in 1988, the average daily costs for New Mexico state facilities (statewide) were $68.00 in 1988, and the average daily costs for the particular state facility studied were $80.00 in fiscal years 1988-1989. Although no detailed cost analysis was attempted, the study appeared to base the perception of lower costs of private facilities on the fact that “financial analysts in the New Mexico Corrections Department believed that the contract was saving the state money.”

Conducted by California State University, this study was neither supportive nor critical of private facilities. To assess costs and quality of service, the study compared three for-profit community correctional facilities for males—one privately managed (medium-security) and two publicly managed by local governments (one low- to medium-security and one high-security). Specifically, the study compared the three facilities with one another and with other state correctional facilities. Both operational and construction costs were included in the comparison of the three facilities with one another. The statewide comparison did not include construction costs for the California Department of Corrections facilities.
Also, the study attempted to compare the quality of service of the three facilities with two other California Department of Corrections facilities. The three community correctional facilities were generally comparable. For instance, the inmates were generally similar with respect to age, race, and offender status. However, one of the public facilities had a greater percentage of drug offenders and Anglo- and African-American inmates and a smaller percentage of Hispanics. The private facility housed 400 inmates, compared with about 420 and about 450 housed at the public facilities. For the period studied, the number of admissions was 1,498 for the private facility versus 392 and 1,073, respectively, for the two public facilities. The two public facilities were relatively new (operational by mid-1991), while the study referred to the private facility as “older.” Using fiscal year 1991-1992 data, construction and operational cost comparisons of the private and the two public facilities revealed some differences. For example, the study found that the private facility’s average annual costs per inmate ($15,578) were higher than comparable costs for one of the government-run facilities ($13,195) but were lower than such costs for the other government-run facility ($16,627). Construction and operational costs, including overhead and capitalization costs, were calculated for the three facilities. Costs were based on contracts negotiated with the facilities and included capitalization, lease, renovation, program development, and liability insurance. Costs that were not included in the calculations were Parole Division overhead costs (for community correctional facilities), state monitoring, medical costs allocated to the California Department of Corrections, inmate clothing, inmate pay, miscellaneous contracts, interest payments, and possible tax breaks. Also, there were inexplicable inconsistencies in the cost data obtained from two agencies within the California Department of Corrections. These inconsistencies may have affected the reliability of the cost estimates. Attempts to compare the costs of the three community correctional facilities and other state correctional facilities were not fully successful. The cost calculations for the other state facilities used different components than did the calculations for the community correctional facilities. The latter calculations included construction costs; however, the calculations for the other state facilities did not. Therefore, these cost calculations were not directly comparable. Further, given the unique characteristics of community correctional facilities, the usefulness of comparing these facilities to all other correctional facilities in California—many of which are likely to be very different from the community-based facilities—is questionable. To assess quality of service, inmate and staff surveys were conducted at the three community correctional facilities and at two state prisons. However, due to small, nonrandom samples, the results could not be generalized to the inmate or staff populations at any of the facilities. The California study also attempted to compare the three community correctional facilities and the state’s other correctional institutions in reference to recidivism rates. The study reported that, of the three community correctional facilities, one of the publicly managed facilities was “most impressive” in performance based on recidivism rates.
Sufficient data were not available to adequately complete the analysis comparing the inmates released from the community correctional facilities to inmates released from other correctional institutions in the state. In summary, the California study’s methodological limitations prohibit drawing any overall conclusions about quality of service. The study acknowledged that any future comparative studies in California should “incorporate more inclusive and better-selected survey samples.”

The Tennessee state legislature conducted a two-part study. One part was a cost assessment, and the other was an assessment of quality. Overall, this effort was the most systematic attempt of all the studies we reviewed to assess both the costs and quality of service. Three multicustody (minimum- to maximum-security) prisons in Tennessee were compared—one privately managed prison (Corrections Corporation of America) and two state-run prisons. The facilities were generally comparable. All three were new (e.g., operational by mid-1992), and all had been accredited by ACA and had met other applicable professional standards. The inmates were similar on all demographic characteristics examined except race. No information was provided on capacity level, but the institutions housed approximately the same numbers of inmates—private (961) and state (929 and 1,029). The study did not report any information regarding the inmate-to-staff ratios for the facilities. Similar criteria were used to compare the operational costs of the facilities. The cost components and relevant adjustments—for direct and indirect costs—were agreed to by all parties (private and public) prior to data collection. Direct costs included salaries and fringe benefits, food, professional services, equipment, maintenance, travel, utilities, and supplies. Also, there was a cost provision for state employees to monitor the private prison. Costs for medical and mental health services were excluded. Indirect costs included salaries and expenses for corrections department administration and overhead, and interest on working capital. Using data that covered July 1993 through June 1994, the study concluded that the costs of operating the private and both state facilities were virtually identical. Specifically, the comparison showed that the average daily operational costs per inmate for the private prison were $35.39, versus $34.90 and $35.45, respectively, for the two public prisons. The quality of service assessment consisted of three components—an audit (given a weight of 60 percent), a security and safety index (a weight of 25 percent), and a program and activity index (a weight of 15 percent). That is, each facility’s overall performance score was a weighted composite: 0.60 times its audit score, plus 0.25 times its security and safety index score, plus 0.15 times its program and activity index score. The study period for the quality of service assessment was March 1991 through September 1994. An operational audit was conducted at each of the facilities by an inspection team, consisting of selected staff from the Tennessee Department of Corrections and the Corrections Corporation of America. The staff had varying degrees of expertise in major functional areas, such as administration, safety and physical plant, health services, treatment, and security. The team used a structured survey instrument to conduct a detailed review of records, observe operations and practices, and conduct interviews. By using the survey instrument, the team attempted to assess compliance with the various programs and practices within each of the functional areas.
Examples of those programs and practices were administration (e.g., fiscal management and affirmative action); safety and physical plant (e.g., fire and occupational safety and sanitation); health services (e.g., dental care and pharmacy services); treatment (e.g., inmate orientation and social programs); and security (e.g., firearms and armory control). The security and safety index considered many factors, including disciplinary reports, use-of-force incidents, assaults, deaths, injuries, and escapes. These reports were counted over a 15-month period (from July 1993 through September 1994) for each facility. The program and activity index measured the percentage of inmates who were eligible for a work or program assignment but remained inactive and unassigned. The data used in this review were derived from monthly reports that measured actual numbers of prisoners assigned to the particular program or activity and the percent unassigned. The results of the quality of service assessment stated that “all three facilities were operated at essentially the same level of performance.” No differences were found among the facilities on the security and safety index or on the program and activity index. All estimated variation across the facilities was due to differences in audit scores. The overall performance scores were 98.49 for the private facility and 97.17 and 98.34, respectively, for the two public facilities.

When the Washington State Legislative Budget Committee conducted this study, the state had no privately run prisons but was considering the feasibility of privatization. Therefore, using pertinent information available in other states, this study made several intrastate and interstate comparisons of correctional facilities. For example, the study compared the operational costs of the three Tennessee facilities (mentioned above) as well as three multicustody male facilities in Louisiana. Of the three facilities in Louisiana, two were privately operated (Corrections Corporation of America and Wackenhut, respectively), and the other was state operated. All three Louisiana facilities were in full operation by the beginning of 1991. Each of the Tennessee prisons had a rated capacity of 1,336 inmates, and each of the Louisiana prisons had a rated capacity of 1,474 inmates. The average daily inmate population at each of the Tennessee prisons was slightly over 1,300, compared with a range of over 1,300 to more than 1,400 at the Louisiana prisons. Also, within each state, there was little difference among the prisons’ inmates with respect to demographics such as education, age, offense types, and sentence lengths. Several cost comparisons were made between the private and public facilities. First, the operational costs of the one private and the two public prisons in Tennessee (actual data for July 1993 through June 1994) were compared, as were the operational costs of the two private prisons and the one public prison in Louisiana (estimated data for July 1995 through June 1996). The unadjusted operational costs of the three Tennessee facilities were similar. However, after adjustments to equalize the numbers of inmates, the private facility’s average daily operational costs per inmate ($33.61) were slightly lower than the comparable costs for the two public facilities ($35.82 and $35.28, respectively). For the Louisiana facilities, the average daily operational costs per inmate for the two private prisons were $23.75 and $23.34, respectively, versus $23.55 for the public facility.
The Washington study also compared the operational costs of one Washington state prison with those of one Tennessee state prison and one Louisiana state prison. The three facilities were similar on some characteristics; however, adjustments were made to the costs and number of beds of the Tennessee and Louisiana facilities to further equalize the comparison. The study showed that the average daily operational costs per inmate for the Washington facility ($44.52) were higher than the costs for the Tennessee ($37.07) and the Louisiana ($24.04) facilities. The comparison of the Washington facility with the Tennessee and the Louisiana facilities was problematic. While the facilities were similar in capacity, there were differences in inmate demographics, such as race and offense type. Other factors (e.g., cost-of-living differences) served to complicate further the interstate comparisons. In any event, these interstate cost comparisons involved state-run facilities only and did not consider any private facilities. Further, the Washington study looked at construction costs by comparing the estimated costs for Washington state to construct a planned multicustody public prison for males with a private company’s costs for constructing a similar facility in Florida. In making the interstate comparison, the study noted that it focused on the “major elements contributing to capital costs,” which included amounts and types of facility space, actual construction costs, and ancillary construction costs such as design and administration. Although the Washington study noted that the facilities were comparable in terms of size and inmate mix (e.g., “large multicustody”), it made cost and space programming adjustments to the facilities to further equalize the comparison. For instance, land and site-related costs, taxes, and unique local costs were excluded from the comparison. For the Florida facility, the study made upward cost adjustments to account for differences between the two facilities in labor and material costs, the later completion (about 2 years of construction inflation) of the Washington facility, and state oversight of the construction. For the Washington facility, the study made downward cost adjustments to account for differences between the two facilities, such as budget reductions of 20 percent and space reductions of 18 percent to account for differences in inmate security levels and other space allocations. In addition, the study made downward cost and space adjustments to reflect the Florida facility’s lower mix of close custody beds. The study indicated that there were other differences in the space programming between the two facilities, for which no adjustments were made. For example, the Washington facility assumed single cells for “Close Housing and Administrative Segregation,” while the Florida facility assumed double cells for those beds. Also, the Washington facility’s minimum-security beds had relatively high per-bed space allocations, reflecting the incorporation of service and program space in the housing calculation. The space allocations for the Florida facility, however, reflected medium-security beds with centralized program and service space. The study showed that the adjusted estimated per-bed cost for the Washington state facility ($60,400 per bed) was approximately double the estimated cost for the Florida facility ($33,900 per bed).
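As a rough arithmetic check (a calculation we add here for reference, not one made in the study), the adjusted per-bed estimates are consistent with the "approximately double" characterization:

\[
\frac{\$60{,}400}{\$33{,}900} \approx 1.78
\]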
The cost difference was attributed largely to different operating and programming approaches or philosophies in the two states. The Washington study concluded that privatization per se would not result in cost savings to the state. Rather, the report noted that savings could be accomplished through privatization or through changes in the state’s operational policies and practices. For example, savings directly related to privatization would be due primarily to a private company’s flexibility to operate outside state rules and procedures, collective bargaining agreements, and the state’s employee compensation system. Finally, the Washington study comparatively assessed the quality of service at the selected private and public prisons in Tennessee and Louisiana and two multicustody male facilities in Washington. While this portion of the study was not as detailed or as comprehensive as the portion involving costs, the quality assessment included visiting the prisons and reviewing institutional records for several topics, such as escapes, major disturbances, and inmate infractions. The study concluded that the private and public prisons studied within the respective states (Tennessee and Louisiana) generally were similar in quality of service. However, the study noted that Washington’s two state-run facilities had more counselors per inmate than the facilities in the other states.

Several factors could affect interstate comparisons of prison costs and/or quality of service. In addition to cost-of-living and other economic differences among the nation’s geographic regions and states, these factors include the (1) extent of prison overcrowding, (2) history of court interventions, (3) status of ACA accreditation of facilities, (4) rate of incarceration (as an indicator of the punitiveness of the corrections system), and (5) rights of correctional employees to organize and bargain collectively. Regarding the studies that we reviewed, the sponsoring states—Texas, New Mexico, California, Tennessee, and Washington—are located in the southern or western regions of the United States, the areas where most privatized facilities are located (see app. IV). Regional differences, such as cost of living, may affect comparisons with other states or regions. For example, the Washington state study adjusted Tennessee’s operational costs upward by 20 percent to account for regional cost-of-living differences between the two states. By definition, prison overcrowding occurs when the number of inmates actually incarcerated exceeds the rated capacity of the correctional facility. As table III.1 shows, the extent of overcrowding (if any) varied among the five states studied at the time each state initially privatized selected correctional facilities. For example, on January 1, 1991, California’s prisons held 94,050 inmates, which was 41,352 inmates (or 78.5 percent) above the total rated capacity (52,698). In comparison, at that time, the average state correctional system was operating at 12.6 percent above rated capacity. Texas was not reporting overcrowding on January 1, 1989. However, most states, including Texas and the other states studied, have been involved, at some point, in litigation challenging various conditions of confinement, such as overcrowding, in their prisons. The following general descriptions are examples of prison litigation that have occurred in the states studied: In Texas, the Ruiz v.
Estelle line of decisions includes a 1980 ruling, which found that various conditions (such as overcrowding and inadequate sanitation, recreational facilities, and health care) within the Texas Department of Corrections violated the U.S. Constitution. Thus, the court appointed special masters and monitors to supervise the implementation of and compliance with its decree. In New Mexico, the Duran v. Apodaca line of cases includes a 1980 consent decree that contained mandatory and prohibitive injunctions relating to conditions and practices at the state’s penitentiary. Among other subjects, the consent decree addressed living conditions, medical and mental health care, and inmate discipline. In California, various court decisions in the 1980s addressed segregation procedures, double-celling, and other conditions of confinement at several prisons located in northern areas of the state. In Tennessee, the 1982 Grubbs v. Bradley decision found that certain practices and conditions of confinement at the state’s adult penal institutions were unconstitutional. In Louisiana, in the 1977 Williams v. Edwards decision, the court held that conditions at the state penitentiary at Angola violated the U.S. Constitution and certain state laws. Another factor that could be considered in making interstate comparisons of correctional facilities is the extent of ACA accreditation. Obtaining ACA accreditation signifies that a facility has met minimum standards. ACA officials told us that, at the time of the respective state’s initial privatization efforts, Texas and Louisiana had no ACA-accredited facilities; but New Mexico, California, and Tennessee had “some” accredited facilities. For these latter three states, the ACA officials were unable to specifically quantify the number of accredited facilities that existed when the respective state began its privatization efforts. However, the officials were able to tell us that, as of 1989, the state-run women’s facility in New Mexico was not accredited by ACA. The degree of punitiveness of a corrections system, as reflected, for example, by the system’s incarceration rate (inmates per 100,000 residents), may affect operating and programming approaches and, therefore, expectations of service. States with higher incarceration rates tend to be found in the west and south, although there is variation within regions. On December 31, 1988, Texas and New Mexico had incarceration rates of 240 and 180, respectively, while the national average rate for state institutions was 227. On December 31, 1990, the incarceration rates in California (311) and Louisiana (427) were higher than the national average rate for state institutions (227); Tennessee’s incarceration rate, however, was lower (207). Furthermore, the Washington study found differences in programming (e.g., education and work programs) when comparing Washington with Tennessee and Louisiana. Both opponents and proponents of privatization have suggested that active correctional employees’ unions can affect whether a state decides to privatize corrections; for example, union agreements with the state may be a disincentive to privatization. According to the results of an April 1994 ACA survey of state adult correctional departments, the public employees of the correctional departments in the five states studied (shown in table III.1) had the right to organize.
However, the percentage of such correctional employees represented by unions was 20 percent or below in two of the states, and one state did not provide data; in three states, the correctional employees could not bargain collectively; and in none of the states did the correctional employees have the right to strike. [Table III.1, which showed each state’s rated inmate capacity and the number of inmates under or over capacity, is not reproduced here.]

As table IV.1 shows, as of March 1996, a total of 47 private correctional facilities (secure facilities for adults) were being operated, or were planned for operation, by private companies in various states. These 47 private correctional facilities are located in 12 states. However, the greatest use (actual or planned) of privatized correctional facilities is in 3 states—Texas, with 21 facilities; Florida, with 7 facilities; and California, with 5 facilities.

Major contributors to this report included Danny R. Burton, Assistant Director; Steve D. Boyles, Evaluator; and Donna B. Svoboda, Evaluator.

GAO reviewed several studies that compared privatized and public correctional facilities in terms of operational costs and/or quality of service. GAO found that: (1) five studies comparing operational costs or quality of service at private and public correctional facilities in California, Tennessee, Washington, Texas, and New Mexico had been completed since 1991; (2) it could not draw any conclusions about cost savings or quality of service, since the four studies that assessed operational costs indicated little difference or mixed results, and the two studies that addressed quality of service reported either equivocal findings or no differences between private and public facilities; and (3) the studies provide little information that is applicable to various correctional settings, since states may differ widely in terms of correctional philosophy, economic factors, and inmate population characteristics. GAO believes that future comparative studies of public and private correctional facilities should: (1) focus on both operational costs and quality of service; (2) evaluate operational costs at existing comparable, not hypothetical, facilities; (3) employ multiple indicators or data sources to objectively measure quality of service issues; and (4) be based upon data collected over several years.
USPS’s financial condition deteriorated in fiscal year 2008. According to USPS, this was due largely to declines in the economy—particularly in the financial and housing sectors—that were reflected in a 4.5 percent decline in total mail volumes and flattened revenues despite rate increases. In addition, fuel prices increased costs by over $500 million, and cost-of-living allowances provided to postal employees increased costs by about $560 million. Even after reducing over $2 billion in costs, primarily by cutting more than 50 million work hours, USPS was not able to close the gap between revenues and expenses. Thus, USPS finished fiscal year 2008 with a $2.8 billion loss—the second-largest loss since 1971 (see app. I). Further, USPS productivity decreased 0.5 percent in fiscal year 2008, which was the first decline since fiscal year 1999. According to USPS, productivity declined because its cost-cutting efforts were not sufficient to offset the impact of declining mail volume. USPS debt increased by $3 billion in fiscal year 2008—the annual statutory limit—and reached $7.2 billion in total outstanding debt at the end of the fiscal year, or nearly half of the $15 billion statutory debt limit. At the end of fiscal year 2005, USPS had no outstanding debt. At this pace, USPS would be constrained at the end of fiscal year 2011 by the $15 billion statutory debt limit (borrowing $3 billion a year from the $7.2 billion base would bring outstanding debt to about $13.2 billion by the end of fiscal year 2010, leaving less than $3 billion of borrowing authority for fiscal year 2011). As USPS has reported, it experienced the single largest volume drop in its history in fiscal year 2008 when mail volume fell by 9.5 billion pieces (see app. II). First-Class Mail volume (e.g., correspondence, bills, payments, and statements) declined 4.8 percent, while Standard Mail (primarily advertising) declined 4.3 percent. Volume declines accelerated during fiscal year 2008 (see fig. 1). Preliminary results for the first quarter of fiscal year 2009 indicate that the trend of accelerating volume declines is continuing. According to USPS, difficulties faced by the hard-hit financial and housing sectors, which are major mail users, contributed to mail volume declines in fiscal year 2008. Advertising mail was adversely affected, particularly credit card, mortgage, and home equity solicitations. Volume declines also came from catalogue retailers, the printing and publishing business, and the services sector. Mail volume in fiscal year 2008 was also affected by the continuing shift of mail to electronic communication and payment alternatives. The accelerating declines in mail volumes resulted in a similar trend for total USPS revenues. USPS stepped up cost-cutting efforts during fiscal year 2008 but did not cut costs sufficiently to offset the impact of declining mail volumes. USPS has large overhead (institutional) costs that are hard to change in the short term, including providing 6-day delivery and retail services at close to 37,000 post offices and retail facilities across the country. Compensation and benefits for USPS’s workforce, which was about 663,000 career employees and nearly 102,000 noncareer employees at the end of fiscal year 2008, generated close to 80 percent of USPS costs. USPS has collective bargaining agreements with its four largest unions that expire in 2010 and 2011. These agreements include layoff protections, as well as work rules that constrain USPS’s flexibility. They also include semiannual cost-of-living allowances (COLA) linked to the Consumer Price Index (CPI). In addition, the agreements cover many benefits, such as the employer and employee contributions to health benefits premiums.
Under the current collective bargaining agreements, USPS’s share of the employee health benefit premiums was 85 percent in fiscal year 2007 and will decrease by 1 percent each year beginning in fiscal year 2008 or 2009 through 2011 or 2012, depending on the terms of the agreements with the unions. USPS’s share of the premiums in fiscal year 2007 was about 13 percent more than for most other federal agencies. According to USPS officials, USPS’s financial outlook has continued to deteriorate based on preliminary results for the first quarter of fiscal year 2009, as well as updated projections for mail volume and revenue. Preliminary first quarter results indicate that USPS incurred a deficit, as expense reductions did not fully offset large declines in volume and revenue. In response, USPS has cut work hour targets for its field operations for the rest of the fiscal year. However, USPS officials told us these targets could be difficult to achieve, and they expect the net loss for fiscal year 2009 to exceed last year’s net loss. In light of these results and updated projections, USPS officials told us this month that they expect fiscal year 2009 mail volume to decline by 10 billion to 15 billion pieces. USPS officials project revenues to fall below the target in USPS’s original budget and for debt to increase by $3 billion. USPS officials said they expect to have sufficient cash reserves to make mandated year-end payments for retiree health benefits and workers’ compensation, unless the USPS net loss for fiscal year 2009 exceeds $5 billion. Given difficult and uncertain economic conditions, it will be important for USPS to continue providing Congress and stakeholders timely and sufficiently detailed information to understand USPS’s current financial situation and outlook. Various options or actions are available for USPS to remain financially viable in the short and long term. In the short term, USPS has asked Congress to consider its proposal for immediate financial relief. In the long term, aggressive USPS action beyond its current cost-cutting efforts is urgently needed to reduce costs and improve efficiency, particularly in light of accelerated declines in mail volume and changes in the public’s use of mail. We agree with the Postal Regulatory Commission (PRC) that unfavorable mail volume and revenue trends may imperil USPS’s financial viability and that USPS must dramatically reduce its costs to remain viable. As the PRC has noted, current pressures from declining revenue and volume do not appear to be abating, but rather seem to be increasing. During the economic downturn, there has been accelerated diversion of business and individual mail, and some mailers have left the market entirely. An economic recovery may not bring a corresponding recovery in mail volume due to continuing social and technological trends that have changed the way that people communicate and use the mail. Specifically: First-Class Mail volume has declined in recent years and is expected to decline for the foreseeable future as businesses, nonprofit organizations, governments, and households continue to move their correspondence and transactions to electronic alternatives, such as Internet bill payment, automatic deduction, and direct deposit. USPS analysis has found that electronic diversion is associated with the growing adoption of broadband technology. As PRC reported, available alternatives to mail eventually result in substitution effects.
It is unclear whether Standard Mail will continue to grow with an economic recovery. Standard Mail now faces growing competition from electronic alternatives, such as Internet-based search engine marketing, e-mail offers, and advertisements on Web sites. In addition, Standard Mail is price-sensitive, as was demonstrated when catalog advertising declined in response to the 2007 postal rate increase. Although Standard Mail rate increases are limited by the price cap, future rate increases will likely have some impact on volume. Periodicals (e.g., mailed newspapers and magazines) volume has been declining due to changing reading preferences, and these declines are expected to continue. Overall newspaper readership is falling. Also, the Christian Science Monitor and U.S. News and World Report recently announced that they would discontinue their printed editions. Businesses and consumers are becoming more likely to obtain news and information from the Internet, a trend that is particularly evident among young people. Several options could assist USPS through its short-term difficulties, some of which would require congressional action. Although we recognize the need to provide USPS with immediate financial relief, such relief should meet its short-term needs and is no substitute for aggressive USPS action to preserve its long-term viability. Key options include the following: Reduce USPS payments for retiree health benefits for 8 years. USPS has proposed that Congress give it immediate financial relief by reducing its retiree health benefits payments by an estimated $25 billion from 2009 through 2016. Specifically, USPS has proposed that Congress change the statutory obligation to pay retiree health benefits premiums for current retirees from USPS to the Postal Service Retiree Health Benefits Fund (Fund) for the next 8 years. Because the Fund would pay the estimated $25 billion in premium payments over the next 8 years, this would decrease the Fund by approximately $32 billion (including interest charges) as of 2017. With this option, starting in fiscal year 2017, USPS would have a total unfunded retiree health benefits obligation currently estimated at about $75 billion, rather than an estimated $43 billion (the difference reflecting the approximately $32 billion reduction in the Fund), that would then need to be amortized in future years. In the long term, the large impact this unfunded obligation would have on the Fund would create the risk that USPS would have difficulty making future payments, particularly considering mail volume trends and the impact of payments on postal rates if mail volume declines continue. USPS’s proposal would also shift responsibility for paying the benefits of postal employees from current rate payers to future rate payers. USPS would continue to make annual payments ranging from $5.4 billion to $5.8 billion from fiscal years 2009 through 2016 (as shown in Table 1) for its obligation for future retiree health benefits, as required by PAEA. Thus, under USPS’s proposal, it would save $2 billion in fiscal year 2009. Reduce USPS payments for retiree health benefits for 2 years. Another option would be for Congress to provide USPS with 2-year relief for retiree health benefits premium payments, totaling about $4.3 billion, which would be consistent with providing immediate financial relief, while having much less impact on the Fund than USPS’s proposal. Specifically, Congress could revise USPS’s statutory obligation so that it would not pay for current retiree health benefits for fiscal years 2009 and 2010.
USPS has provided information related to its financial situation for fiscal years 2009 and 2010, which projected that its financial condition would improve beginning in 2010. Therefore, we believe that the option to provide 2-year relief totaling $4.3 billion would be preferable to USPS’s proposal. Under this short-term option, Congress could revisit USPS’s financial condition to determine whether further relief is needed and also review what actions USPS has taken to assure its long-term financial viability. Work with unions to modify work rules. One option that would not require congressional action is similar to actions taken by other financially stressed entities, whereby USPS and its unions could agree on ways to achieve additional short-term savings, such as by modifying work rules to facilitate reducing work hours. For example, USPS and the National Association of Letter Carriers recently agreed on a new procedure to expedite the evaluation and adjustment of city delivery carrier routes. According to USPS officials, this new process is aimed at enhancing USPS’s ability to respond to declining mail volumes and is expected to make a key contribution to the budgeted savings of $1.3 billion in city delivery costs in fiscal years 2009 and 2010. Other options are based on provisions in the statute and could include (1) seeking regulatory approval for an exigent rate increase and (2) increasing USPS’s annual borrowing limit. USPS could request PRC approval for an exigent rate increase that would increase rates for market-dominant classes of mail above the statutory price cap. Mailers have voiced strong concern about the potential impact of an exigent rate increase on their businesses. In our view, this option should be a last resort. Such an increase could be self-defeating for USPS in both the short and long term because it could increase incentives for mailers to further reduce their use of the mail. Congress could also temporarily expand the statutory $3 billion annual limit on increases in USPS debt, which would provide USPS with access to funding if it has difficulty making mandated year-end payments. Raising USPS’s annual debt limit could address a cash shortage and would be preferable to an exigent rate increase. However, it is unclear when USPS would repay any added debt, which would move USPS closer to the $15 billion statutory debt limit. In our view, this option should be regarded only as an emergency stopgap measure. Action is urgently needed to streamline USPS costs in two areas where it has been particularly difficult—the compensation and benefits area, which generates close to 80 percent of its costs, and USPS’s mail processing and retail networks. As USPS’s mail volumes decline, it does not have sufficient revenue to cover the growing costs of providing service to new residences and businesses, while also maintaining its large network of processing and retail facilities. We have reported for many years that USPS needs to rightsize its workforce and realign its network of mail processing and retail facilities. USPS has made some progress, particularly by reducing its workforce by more than 100,000 employees with no layoffs and by closing some smaller mail processing facilities. Yet, more will need to be done. USPS has several options for realigning its mail processing operations to eliminate excess capacity and costs, but has taken only limited action.
In 2005, we reported that, according to USPS officials, declining mail volume, worksharing, and the evolution of mail processing operations from manual to automated equipment led to excess capacity that has impeded efficiency gains. While USPS has terminated operations at 54 Airport Mail Centers in fiscal years 2006 through 2008, it has closed only one of over 400 major mail processing facilities as a result of consolidating its mail processing operations. Another realignment option USPS is considering is outsourcing operations in its network of 21 bulk mail processing centers. Another option we reported on would be for USPS to close unnecessary retail facilities; reducing the number of facilities would lower the costs of maintaining its retail network. USPS’s network of retail facilities has been largely static despite population shifts and changes in mailing behavior. In considering options to provide retail services at a lower cost, it is important to note that large retail facilities—generally located in large urban areas—generate much larger costs for the retail network than the smallest rural facilities and may therefore potentially generate more cost savings. Closing postal facilities is often controversial but is necessary to streamline costs. Congress encouraged USPS to expeditiously move forward in its streamlining efforts in PAEA. We recommended that USPS enhance transparency and strengthen accountability of its realignment efforts to assure stakeholders that realignment would be implemented fairly and achieve the desired results. USPS has taken steps to address our recommendations and thus should be positioned to take action. Other long-term options for reducing costs include more fundamental changes that would have public policy implications for Congress to consider—such as potential changes in USPS’s universal service from 6 to 5 delivery days per week as discussed in a recent PRC study, and potential changes to USPS’s business model, which we will be discussing in a PAEA-required report that will be issued by December 2011. These studies will provide Congress with information about how to address challenges for USPS to meet the changing needs of mailers and the public. We asked USPS to comment on a draft of our testimony. USPS generally agreed with the accuracy of our statement and provided technical corrections and some additional perspective, which we incorporated where appropriate. USPS reiterated its position regarding the funding of retiree health benefits and the difficulties related to its cost-cutting efforts. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or the Members of the Subcommittee may have. For further information regarding this statement, please contact Phillip Herr at (202) 512-2834 or [email protected]. Individuals who made key contributions to this statement include Shirley Abel, Teresa Anderson, Joshua Bartzen, Heather Frevert, David Hooper, Kenneth John, Emily Larson, Susan Ragland, and Crystal Wesco.

[App. I, a table of USPS net income (loss), total revenues, total expenses, and outstanding debt by fiscal year, and app. II, a table of mail volumes in millions of pieces (including Standard Mail percent change and total international volume), are not reproduced here.]

This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

When Congress passed the Postal Accountability and Enhancement Act in December 2006, the U.S. Postal Service (USPS) had just completed fiscal year 2006 with its largest mail volume ever—213 billion pieces of mail and a net income of $900 million. Two years later, USPS's mail volume dropped almost 5 percent—the largest single-year decline. The Postmaster General testified last March before this subcommittee that USPS was facing a potential net loss of over $1 billion for fiscal year 2008. He noted that USPS anticipated continued deterioration due to the economic slowdown, as the financial, credit, and housing sectors are among its key business drivers. He also said that the shifts in transactions and messages from mail to electronic communications and from advertising mail to lower-cost electronic media have affected the USPS's financial situation. This testimony focuses on (1) USPS's financial condition and outlook and (2) options and actions for USPS to remain financially viable in the short and long term. It is based on GAO's past work and updated postal financial information. We asked USPS for comments on our statement. USPS generally agreed with the accuracy of our statement and provided technical corrections and some additional perspective, which we incorporated where appropriate. USPS has reported that the declining economy accelerated declines in mail volume in fiscal year 2008 and flattened revenues despite postal rate increases. In fiscal year 2008, mail volume fell by 9.5 billion pieces, fuel prices increased costs by over $500 million, and cost-of-living allowances for postal employees increased costs by $560 million. Cutting costs by $2 billion—primarily by cutting over 50 million work hours—did not close the gap between revenues and expenses. Thus, USPS recorded a loss of $2.8 billion for fiscal year 2008. Its debt increased by $3 billion by the end of the year to $7.2 billion. USPS's outlook for fiscal year 2009 has become more pessimistic. USPS projects a volume decline of 10 billion to 15 billion pieces, another loss, and $3 billion more in debt. At this pace, USPS could reach its $15 billion statutory debt limit by fiscal year 2011. In the short term, several options could assist USPS through its difficulties, some of which would require congressional action. USPS has proposed that Congress give it immediate financial relief totaling about $25 billion over the next 8 years by changing the funding of its retiree health benefits. Although GAO recognizes the need to provide USPS with immediate financial relief, such relief is no substitute for aggressive USPS action to preserve its long-term viability. USPS projects an improvement in its financial condition in fiscal year 2010. Therefore, GAO believes it would be preferable to provide 2-year relief totaling $4.3 billion. This would have less impact on the retiree health benefits fund, and then Congress could revisit USPS's financial condition to determine whether additional relief is needed. In the long term, USPS action beyond its current cost-cutting efforts is urgently needed to reduce costs and improve efficiency.
GAO agrees with the Postal Regulatory Commission that unfavorable mail volume and revenue trends may imperil USPS's financial viability and that USPS must dramatically reduce its costs to remain viable. Two areas for further action to reduce costs include compensation and benefits, which account for close to 80 percent of its costs, and its mail processing and retail networks. GAO previously reported that excess capacity in USPS's mail processing infrastructure has impeded efficiency gains. USPS has considered several options to realign its facility network, such as outsourcing operations in some mail processing facilities, but has taken only limited action. Another option would be for USPS to close unnecessary retail facilities and thereby reduce its large maintenance backlog. While it has been difficult for USPS to take action in these areas, Congress encouraged USPS to expeditiously move forward in its streamlining efforts in the postal reform act of 2006. GAO recommended that USPS enhance transparency and strengthen accountability of its realignment efforts to assure stakeholders that realignment would be implemented fairly and achieve the desired results, and it has made improvements in this area. Accelerated volume declines and changes in the public's use of mail indicate that USPS needs to move beyond incremental efforts and take aggressive action to streamline its workforce and network costs to assure its long-term viability. |
According to the Trade Promotion Coordinating Committee (TPCC), one of the greatest obstacles to increased U.S. exports faced by SMEs is the lack of sufficient working capital. Working capital is used to finance the manufacture or purchase of goods and services. Eximbank and SBA have programs designed to increase the availability of export working capital to SMEs from the private sector by encouraging greater lender participation in export financing. These programs provide loan repayment guarantees that reduce the risk associated with such loans. Some states also have programs to assist SMEs in obtaining working capital. Eximbank facilitates export financing through its Working Capital Guarantee Program, which is authorized by the Export Trading Company Act of 1982 (P.L. 97-290, Sec. 206, Oct. 8, 1982). During fiscal year 1995, Eximbank guaranteed almost $302 million in export working capital loans, which represented about 3 percent of Eximbank’s total dollar authorizations for the year. Over 97 percent of the exporters assisted by Eximbank through the program during fiscal year 1995 were self-certified as small businesses, as defined by SBA regulations (13 C.F.R. Part 121). SBA developed its Export Working Capital Program (formerly known as the Export Revolving Line of Credit) in response to a requirement in the Small Business Export Expansion Act of 1980 (P.L. 96-481, Sec. 112, Oct. 21, 1980). SBA’s program falls within the statutory authority of the agency’s regular business loan guarantee program, known as the 7(a) program. Under the 7(a) program, SBA guarantees private lender loans to small businesses that have been unable to obtain financing. During fiscal year 1995, SBA guaranteed about $69 million in export working capital loans, which represented less than 1 percent of SBA’s total 7(a) program activity. All exporters assisted through SBA must qualify under the agency’s definition of a small business. During fiscal year 1995, over 90 percent of the businesses assisted through SBA’s working capital program had fewer than 50 employees. In its 1993 report to Congress, TPCC made a series of recommendations to increase the effectiveness of U.S. export financing programs. One recommendation called for the establishment of one-stop shops, that is, the U.S. Export Assistance Centers (USEAC), as a single point of contact for all federal export promotion and finance programs. Another recommendation called for federal agencies to encourage qualified state or local export finance entities to enter into cofinancing arrangements in which risk is shared. A third recommendation called for streamlining and harmonizing key features of Eximbank’s and SBA’s working capital guarantee programs to make them more customer-focused and take advantage of the agencies’ comparative strengths. According to TPCC, harmonization was to give SMEs access to working capital through a broader nationwide network of lenders on a more consistent, efficient, and effective basis. Key features to be harmonized included developing uniform applications, accompanying documentation, and underwriting standards. Harmonization was also to include a market segmentation plan. Eximbank and SBA emphasize different delivery approaches for facilitating their programs. Eximbank relies on its U.S. Division and network of delegated authority lenders, whereas SBA relies primarily on staff with lending authority it has assigned to the USEACs and on its network of district offices.
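To make the guarantee mechanism concrete: if a guaranteed loan defaults, the agency reimburses the lender for the covered share of the outstanding balance, and the lender absorbs the rest. The sketch below is a simplified illustration only; the 90-percent rate is the harmonized coverage discussed later in this report, and the loan figures are hypothetical.

```python
def split_default_loss(outstanding_balance: float, coverage_rate: float):
    """Split the loss on a defaulted loan between guarantor and lender.

    Simplified model: the guarantee covers coverage_rate of the
    outstanding balance; the lender absorbs the uncovered remainder.
    """
    agency_share = coverage_rate * outstanding_balance
    lender_share = outstanding_balance - agency_share
    return agency_share, lender_share

# Hypothetical default: $500,000 outstanding, 90-percent coverage.
agency, lender = split_default_loss(500_000, 0.90)
print(f"Agency pays ${agency:,.0f}; lender absorbs ${lender:,.0f}")
# Agency pays $450,000; lender absorbs $50,000
```

This uncovered 10-percent share is the same "risk assumed by the bank" that, as discussed later, Eximbank views as a measure for ensuring lender diligence.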
Eight states also have export finance programs that provide working capital guarantees. Levels of financing and contractual arrangements for these guarantees vary considerably among the states. Eximbank’s U.S. Division, staffed with six loan officers and one vice president, has primary responsibility for administering the Working Capital Guarantee Program as well as some marketing responsibility. This division processed all the agency’s export working capital guarantees until the beginning of fiscal year 1995. Over the past couple of years, the division has expanded its outreach to SMEs through its Delegated Authority Program. Under this program, a private lender and Eximbank enter into an agreement that allows the lender to approve Eximbank-guaranteed loans to exporters without first having to submit individual applications to Eximbank for approval. During fiscal year 1996, delegated authority lenders approved 192 loans, or 69 percent of the agency’s export working capital guarantee transactions. These transactions represented 55 percent of the $413 million in loans guaranteed under Eximbank’s program. (See app. I for a map showing the locations of Eximbank’s U.S. Division and delegated authority lenders.) Although the U.S. Division and delegated authority lenders are central to Eximbank’s Working Capital Guarantee Program, the agency has other resources and arrangements for marketing and supporting the program. These include the following: Eximbank’s Business Development Division promotes and markets the agency’s small business programs, including the Working Capital Guarantee Program. This group includes Washington-based staff and five regional offices, four of which are collocated within USEACs. According to the Eximbank official responsible for regional office operations, the agency’s limited regional staff of 21 employees is expected to focus primarily on increasing the participation of banks, brokers, and other intermediaries in these programs. Eximbank has established partnerships with state and local government offices and private sector organizations under its City/State Program. These partners are to act as liaisons with their export communities, market Eximbank programs, and submit applications on behalf of small businesses. During fiscal year 1995, however, only 8 of Eximbank’s 31 partners reported working capital activity under the program. Effective in April 1996, Eximbank initiated a pilot program to strengthen these partnerships by paying its local partners a packaging fee for applications submitted directly to the agency and a finder’s fee for referrals to delegated authority lenders that result in working capital loans. In September 1996, Eximbank initiated another pilot program to delegate authority for approving guarantees to six state partners. Eximbank also has an agreement with the Private Export Funding Corporation, a private consortium of commercial banks and other users of Eximbank, under which the corporation (1) acts as the lender of last resort for exporters that obtain a preliminary working capital commitment from Eximbank but are unable to obtain financing from commercial sources and (2) purchases Working Capital Guarantee Program loans made by small and regional banks that require help in supporting small business exporters. Under the Working Capital Guarantee Program, Eximbank guaranteed 179 loans valued at almost $302 million during fiscal year 1995.
As of June 1996, the default rate for exporters whose loans were guaranteed during fiscal year 1995 was 2.2 percent (4 defaults out of 179). The agency estimated that the cost associated with administering the program during fiscal year 1995 was about $912,000. This estimate included the costs for compensation, benefits, and overhead attributable to the U.S. Division. Although SBA’s Office of International Trade is responsible for overseeing its Export Working Capital Program, the agency relies primarily on USEACs and their district office network to implement the program. SBA has staffed the 15 USEACs with 20 international trade and finance specialists. These specialists help SBA’s 69 district offices reach their Export Working Capital Program goals by marketing and promoting the program and working directly with the exporting and lender communities to structure loans and package applications for loan guarantees. Applications are sent to 1 of 25 district offices designated as export working capital processing centers, where they are reviewed and approved or rejected. An SBA official estimated that the specialists spend about 85 percent of their time on the program and the remaining 15 percent on other trade-related activities. (See app. I for a map showing the locations of the USEACs and SBA district offices.) Even though SBA works primarily through USEACs and district offices, it has other resources and arrangements that help market and support the program. These include the following: SBA has coguarantee agreements with California, Kansas, and Florida. Under these agreements, SBA and the states guarantee a portion of the export working capital loan and share, on a proportional basis, any resulting losses and recoveries. California has been by far the most active state, with 25 export working capital loans valued at $8.8 million. SBA has agreements with at least 26 local private sector entities to encourage them to act as packaging intermediaries for its Export Working Capital Program. SBA uses staff from its small business development centers, about 30 of which have established separate international trade centers, to help market its financial products, including export working capital guarantees. SBA’s Preferred Lender Program, which is part of its Export Working Capital Program, is similar to Eximbank’s Delegated Authority Program. However, according to an SBA official, only 1 of about 12 preferred lenders had provided export financing under the program as of August 1996. Under the Export Working Capital Program, SBA guaranteed 190 loans valued at about $69 million during fiscal year 1995. As of August 1996, the default rate for exporters whose loans were guaranteed during fiscal year 1995 was 1.6 percent (3 defaults out of 190). The agency estimated that costs associated with administering its program during fiscal year 1995 totaled $461,667. This estimate includes an allocated portion of SBA’s costs related to staffing and supporting USEACs. We surveyed 24 states and 1 U.S. territory initially identified as having export finance programs and found 8 states that provide export working capital guarantees for SMEs. The eight state programs were designed specifically to service small businesses. As shown in table 1, these state programs varied greatly in their level of staff resources, available funding for guarantees, and program activity. In addition, our survey identified programs in nine states and the one U.S.
territory that provided export finance assistance to small companies but did not offer export-related working capital guarantees. Services provided by these programs included export finance counseling; loan packaging for Eximbank; and referrals to Eximbank, SBA, or lenders. The remaining seven states surveyed did not have export finance programs. (See app. II for additional information on state-level export finance programs.) Eximbank and SBA have made progress harmonizing their export working capital programs. The agencies’ efforts to harmonize and, in other ways, improve their programs appear to have increased the level of loans guaranteed and the extent of exporter and lender participation. However, the programs continue to have some differences. Furthermore, progress toward harmonization was affected by a reduction in SBA’s guarantee rate effective during fiscal year 1996. Despite remaining program differences and the temporary reduction in SBA’s guarantee rate, both SBA and Eximbank have been able to continue bringing new exporters and, to a lesser degree, new lenders into their programs. In response to TPCC’s recommendations, Eximbank and SBA began harmonizing their export working capital programs in October 1994 to simplify the loan process and make the programs more consistent for exporters and lenders. They standardized such key features as the application forms used by lenders or exporters, application fees, and guarantee coverage. To standardize the guarantee coverage, Eximbank reduced its coverage from 100 percent of principal and interest to 90 percent, and SBA raised its 85-percent guarantee to 90 percent. Both agencies also streamlined their procedures for processing loan guarantees. Additionally, they agreed to a market segmentation plan that (1) assigned SBA primary responsibility for assisting small businesses whose export working capital needs do not exceed SBA’s $750,000 exposure limit and (2) made Eximbank responsible for assisting exporters who do not fall within SBA’s small business standards or whose transactions exceed SBA’s limit. The effects of harmonization-related changes are difficult to measure because many occurred when Eximbank and SBA were making other changes aimed at improving export finance assistance for small businesses. For example, in fiscal year 1995, SBA began to set export working capital goals for each of its 69 district offices, and it developed new coguarantee agreements with a few states. SBA also provided basic export finance training to almost 300 of its staff and resource partners (e.g., small business development centers) and more in-depth training on transaction lending to its trade finance specialists. Likewise, during the same period, Eximbank enhanced its Delegated Authority Program by increasing the limits on the aggregate amounts participating lenders can provide to single borrowers annually and allowing lenders to retain all or part of a loan fee, depending on the amount of the loan. Together, Eximbank and SBA officials also conducted export finance seminars in 13 cities that were attended by about 1,300 bankers. In addition, the agencies worked with the Department of Commerce to expand the USEAC network. These program changes, including those related to the agencies’ harmonization efforts, appear to have helped expand the use of the program, improve SME access to working capital, and increase the number of lenders participating in export financing. 
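Before turning to those results, one way to see why the standardized 90-percent guarantee mattered to lenders is to compare a lender's uncovered exposure on a defaulted loan before and after harmonization. The sketch below uses a hypothetical $500,000 loan and, for simplicity, applies the coverage rates to principal only (Eximbank's pre-harmonization guarantee also covered interest).

```python
LOAN = 500_000  # hypothetical loan principal

# Guarantee coverage of principal before and after the October 1994
# harmonization, per the rates described above.
coverage = {
    "Eximbank, before": 1.00,
    "SBA, before":      0.85,
    "Both, after":      0.90,
}

for program, rate in coverage.items():
    exposure = (1 - rate) * LOAN  # lender's uncovered loss on default
    print(f"{program}: lender exposure ${exposure:,.0f}")

# Eximbank, before: $0; SBA, before: $75,000; Both, after: $50,000
```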
During fiscal year 1995, the number of export working capital loans guaranteed by SBA increased 167 percent, from 71 loans totaling $24 million in fiscal year 1994 to 190 loans totaling $69 million in fiscal year 1995. The number of export working capital loans guaranteed by Eximbank increased from 116 loans valued at about $152 million to 179 loans valued at almost $302 million, a 54-percent increase in the number of loans. Our analysis of agency data showed that the two programs helped an increased number of new exporters during the period of harmonization and key program improvements. SBA’s Export Working Capital Program assisted 69 exporters in fiscal year 1994 and 160 new exporters in fiscal year 1995. Eximbank’s Working Capital Guarantee Program assisted 110 exporters in fiscal year 1994 and 133 new exporters in fiscal year 1995. Lender participation in both of these programs also increased from 1994 to 1995. In fiscal year 1994, 56 lenders provided financing under SBA’s Export Working Capital Program. In fiscal year 1995, 107 new lenders participated in the program. For Eximbank, 79 lenders provided financing under its Working Capital Guarantee Program in fiscal year 1994; 50 new lenders participated in the program in fiscal year 1995. Although harmonization and other program improvements have produced some positive results, Eximbank’s and SBA’s programs are still not fully harmonized, as recommended by TPCC. TPCC suggested standardizing the underwriting standards; however, there is a difference in the two agencies’ credit qualification requirements. Eximbank requires borrowers to have a positive net worth; SBA does not. An Eximbank official stated that the agency has the flexibility to waive this requirement for otherwise creditworthy borrowers but could recall only a few instances in which this was done. ProAction Agency, the consultant commissioned by Eximbank and SBA to evaluate harmonization efforts, identified another key difference between Eximbank’s and SBA’s programs: the two agencies’ fees for processing guarantees are not standardized. For loans with a term of greater than 6 but not more than 12 months, Eximbank charged 1.5 percent of the loan amount, and for loans of 6 months or less, it charged 0.75 percent of the loan amount. SBA charged a fee of 0.25 percent of the guaranteed amount for loans with a term of 12 months or less. The fees are thus computed on different bases as well as at different rates; for example, on a hypothetical 12-month loan of $500,000 with a 90-percent guarantee, Eximbank’s fee would be $7,500 (1.5 percent of the $500,000 loan amount), while SBA’s would be $1,125 (0.25 percent of the $450,000 guaranteed amount). ProAction Agency also identified some remaining differences in Eximbank’s and SBA’s efforts to harmonize program documentation and operational procedures. ProAction Agency concluded that because of the vast differences between the two agencies’ programs, harmonization could not have been reasonably completed within the recommended 12-month time frame. It further noted, however, that the lack of program standardization created a larger burden on lenders and exporters in the form of increased paperwork, longer turnaround times, and general confusion regarding expectations. During fiscal year 1996, no other features of the two export working capital programs were standardized. SBA commented that, during fiscal year 1997, it and Eximbank have been working together to identify ways to further harmonize the closing documents for their export working capital loans. Furthermore, progress toward harmonization of the two agencies’ programs was interrupted during fiscal year 1996 when the guarantee coverage of SBA’s 7(a) program was reduced in accordance with the Small Business Lending Enhancement Act of 1995 (P.L. 104-36, Sec. 2, Oct. 12, 1995).
While Eximbank’s guarantee coverage remained at 90 percent, SBA’s coverage was reduced to 75 percent for loans above $100,000 and to 80 percent for loans below that level. In a report to Congress, SBA characterized this change as a severe setback to harmonization that caused confusion among the lending and small business exporting communities. SBA officials believe that this setback caused the agency to lose the momentum that allowed it to almost triple (from 71 to 190) the number of loans it guaranteed in fiscal year 1995. An Eximbank official emphasized that a common guarantee rate was an important element of harmonization and predicted that SBA’s reduced rate would negatively affect small business exporters who need the localized support and assistance of SBA and its lenders. Notwithstanding the reduction in SBA’s guarantee rate, SBA increased the number of export working capital loans guaranteed by 38 percent for fiscal year 1996. Eximbank increased the number of loans guaranteed by 56 percent. The value of loans guaranteed under each of the agencies’ programs likewise increased by almost 38 percent. Figure 1 shows the number of loans guaranteed by both agencies between fiscal years 1991 and 1996. In addition to increasing the number and value of loans guaranteed during fiscal year 1996, Eximbank and SBA have continued to enlist new exporters at generally the same rate as in the prior year. The number of new lenders funding loans through the export working capital programs, however, declined for both agencies in 1996, as shown in table 2. Effective October 1, 1996, SBA was provided authority to restore its guarantee coverage to 90 percent for its Export Working Capital Program, pursuant to the Omnibus Consolidated Appropriations Act, 1997 (P.L. 104-208, Sept. 30, 1996). To facilitate small business export finance, Eximbank and SBA have established more cooperative agreements with both the private and public sectors. Delegating authority to private sector lenders and devolving certain program responsibilities to state export finance organizations are examples of such cooperative agreements. Expanding the use of these approaches could further leverage federal resources and expand federal outreach to SMEs, but it would also shift more responsibility for the guarantee of funds from the federal government to the private sector and the states. Nevertheless, Eximbank and SBA remain responsible for ensuring that the programs are well managed, funds are properly spent, and program objectives are met. Eximbank’s Delegated Authority Program exemplifies one cooperative approach to increasing SME access to export financing. Under the program, exporters can have working capital guarantees processed and approved by a network of 69 delegated authority lenders located in 25 states plus the District of Columbia, rather than having to go through Eximbank’s Washington, D.C., office. This program has also allowed Eximbank to leverage its resources and increase lender participation. The Delegated Authority Program enabled the U.S. Division’s staff to handle an increasing number of working capital guarantees while maintaining the same level of staffing. For example, in fiscal year 1994, no loans were processed under the Delegated Authority Program, but in fiscal year 1995, Eximbank’s delegated authority lenders processed 99 loans valued at $115 million, without an increase in the U.S. Division’s staff. This activity represented 55 percent of Eximbank’s working capital guarantee program.
During fiscal year 1996, the delegated authority lenders processed 69 percent of the agency’s working capital guarantees. This activity represented 192 loans valued at about $227 million. U.S. Division officials estimated that, even though the Delegated Authority Program allowed them to leverage their resources, about 20 to 30 percent of staff time was spent administering and monitoring the program. The Delegated Authority Program also appears to have increased the level of lender participation. In fiscal year 1995, 29 active delegated authority lenders funded, on average, over twice as many loans using delegated authority as they did the prior year, when the program was dormant. According to Eximbank officials, lenders’ ability to provide guarantees without obtaining prior approval, coupled with fiscal year 1995 program enhancements, contributed to the increased level of lender participation in the program. These enhancements included, for example, lenders’ ability to retain all or part of a loan fee. Over 74 percent of the 56 respondents to our Delegated Authority Program survey confirmed that quicker processing time attributable to the lenders’ ability to approve loan guarantees, fee incentives, and the 90-percent guarantee coverage were the most important factors in remaining enrolled in the program. Because more than two-thirds of Eximbank’s export working capital loans are handled through the Delegated Authority Program, monitoring lenders’ compliance with program requirements and managing the associated level of risks of these loan guarantees have become increasingly important. Eximbank developed a new monitoring system that requires inspections of all delegated authority lenders that have made at least one transaction under the program. According to Eximbank officials, these inspections assess lenders’ compliance with various program requirements, including repayment terms, reviews of creditworthiness, and maintenance of loan transaction documentation. If Eximbank identifies compliance problems, it can place the lender on probation or retract the lender’s delegated authority status. Eximbank officials said that possible loss of eligibility is one of the most effective measures for ensuring a lender’s compliance. Another measure is the 10-percent risk assumed by the bank in the event of loan defaults. As of June 1996, among loans guaranteed in fiscal year 1995, there was 1 default out of the 99 loans guaranteed through the Delegated Authority Program, and there were 3 defaults out of the 80 loans guaranteed through Eximbank headquarters. In 1993, TPCC recognized the merits of expanding the use of cooperative arrangements with states when it endorsed cofinancing agreements as part of an overall government strategy to facilitate export promotion and financing. Under these agreements, federal programs can expand their outreach to SMEs by taking advantage of the states’ proximity to target firms and their knowledge of local businesses. Also, all 17 states and the 1 U.S. territory with export finance programs that were surveyed indicated their programs were designed to serve smaller companies. Likewise, states benefit from cooperative agreements by gaining access to federal guarantee funds that complement their own funds. However, key limitations to expanding such agreements are (1) legal prohibitions at the state level and (2) varying levels of state commitment to export finance assistance.
Moreover, these types of arrangements require provisions or mechanisms to ensure that federal guarantee funds are appropriately committed. Legal prohibitions sometimes prevent states from offering state-backed guarantees. The constitutions of six states prohibit them from providing export finance assistance, according to a report by the National Association of State Development Agencies. States may also vary in their level of commitment to export financing, depending on the policy priorities of a state’s current administration. A state program administrator said maintaining a consistent level of commitment to a particular program can be difficult because states have limited funds and a large number of competing demands. In some states, the level of funding available to support export financing has changed from year to year. In Maryland, for example, the leveraged guarantee funding was reduced from $60 million in fiscal year 1995 to $50 million in fiscal year 1996 because of a shift in the state’s program priorities. In Texas, the leveraged funding was reduced from $2 million in fiscal year 1995 to no funding in fiscal year 1996. Eximbank established partnerships with state and local government offices and private organizations to help market its small business financial products. In September 1996, it implemented a pilot program delegating authority for approving working capital guarantees to six of the agency’s state partners. Eximbank requires participating states to have an active export guarantee program with an average loan-loss track record of 5 percent or less, an independent credit approval process, and at least one person within the office or organization who has completed Eximbank’s City/State Program training requirements. Under pilot program guidelines, the maximum Eximbank guarantee will not exceed the legislative limit of the respective state partner, and Eximbank’s maximum aggregate liability on principal will be $10 million per state partner. The state, Eximbank, and the lender will be partners in each guarantee, sharing all risks, losses, and recovered amounts on a proportional basis. Matching fund requirements and risk-sharing provisions are intended to promote accountability, and Eximbank officials believe they encourage state partners to ensure that federal funds are appropriately committed. Transactions under this program are also expected to conform with Delegated Authority Program guidelines and documentation requirements. As with delegated authority lenders, state partners are to be subject to periodic field inspections by Eximbank staff. SBA has developed separate coguarantee arrangements with three states, as discussed earlier. It relies on risk-sharing program features, documentation requirements, and its final approval authority to ensure that states exercise due diligence and comply with coguarantee arrangements. Although SBA does not have formal eligibility requirements for developing coguarantee arrangements with states, agency officials emphasized that they tailor the agreements to the individual state programs. For example, SBA’s agreement with California provides for a 50/50 matching guarantee for 90 percent of the principal of working capital loans. Guarantees under this agreement are not to exceed $1.5 million, or up to $750,000 per agency per guarantee, the maximum amount that California can guarantee.
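The arithmetic of the California agreement can be made concrete with a small sketch. The loan amount below is hypothetical; the 90-percent coverage, 50/50 split, and dollar caps are the terms described above.

```python
GUARANTEE_RATE = 0.90       # 90 percent of principal is guaranteed
TOTAL_CAP = 1_500_000       # combined SBA + California guarantee ceiling
PER_AGENCY_CAP = 750_000    # maximum guarantee per agency

def coguarantee_split(principal: float):
    """Split a 90-percent guarantee evenly between SBA and California."""
    guarantee = min(GUARANTEE_RATE * principal, TOTAL_CAP)
    share = guarantee / 2                   # 50/50 matching shares
    assert share <= PER_AGENCY_CAP          # each partner stays within its cap
    return share, share

sba_share, state_share = coguarantee_split(1_000_000)  # hypothetical loan
print(f"SBA: ${sba_share:,.0f}; California: ${state_share:,.0f}")
# SBA: $450,000; California: $450,000. The caps imply a maximum
# supported principal of about $1.5 million / 0.90 = $1,666,667.
```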
According to the director of the California program, the state conducts its loan analysis and completes its forms as usual and then sends the loan guarantee package to SBA trade finance specialists located at the USEAC in Long Beach for approval. SBA officials explained that they have not developed a separate monitoring system for overseeing its coguarantee agreements with the states because the agency still retains final review and approval authority for SBA’s portion of the loan guarantee. In 1993, TPCC recommended that SBA’s Export Working Capital Program be merged into Eximbank’s program if the agencies’ harmonization efforts were unsatisfactory. Although Eximbank and SBA have made progress in harmonizing their programs, a number of factors would need to be considered before any such transfer occurred. For example, Eximbank’s program may not serve some exporters currently served by SBA, since its delegated authority lenders may not handle the smaller export transactions SBA does. Other exporters may not have easy access to Eximbank’s U.S. Division and its network of delegated authority lenders, which are located in only 25 states plus the District of Columbia and tend to be clustered around large metropolitan areas. Likewise, some lenders currently served by SBA may lose the benefit of being introduced to the program and becoming involved in export financing, since Eximbank and SBA tend to encourage greater lender participation in differing ways and reach out to different types of lenders. Finally, consolidating the programs may result in minimal cost savings, according to SBA’s cost estimates for administering its Export Working Capital Program. We sought to determine whether Eximbank’s lenders would be willing to provide the same level of support for smaller export transactions that SBA lenders provide. In accordance with the market segmentation plan under harmonization, SBA was to handle applications for loans that were less than or equal to $833,333, and Eximbank was responsible for handling working capital loans over $833,333. However, Eximbank’s delegated authority lenders were not covered by the market segmentation plan and were allowed to handle smaller export working capital loans. Therefore, we focused our analysis on the delegated authority lenders. During fiscal years 1995 and 1996, only 24 percent of the loans guaranteed through Eximbank’s Delegated Authority Program were valued at less than $500,000, while about 70 percent of the export working capital loans guaranteed by SBA were valued at less than $500,000, as shown in figure 2. In our survey of delegated authority lenders, over 80 percent (46 of 56) of the respondents indicated that they would be willing to provide export working capital loans for less than $833,333 to existing customers, and 62 percent (35 of 56) indicated they would be willing to provide such loans for new customers. However, 66 percent of the respondents indicated that they would probably not provide a working capital loan under a certain threshold; the median threshold for these lenders was $250,000. During fiscal years 1995 and 1996, 14 percent of the export working capital loans guaranteed through delegated authority lenders were less than $250,000, compared with about 40 percent of the loans guaranteed by SBA. More than half of the delegated authority lenders indicating they had a threshold stated that processing loans below that amount was too costly and time-consuming for them to make a profit.
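For reference, the $833,333 segmentation cutoff appears to follow from SBA's $750,000 exposure limit combined with the harmonized 90-percent guarantee rate; the report does not state this derivation explicitly, so the sketch below simply shows the implied arithmetic.

```python
SBA_EXPOSURE_LIMIT = 750_000  # SBA's maximum guarantee exposure per loan
GUARANTEE_RATE = 0.90         # harmonized guarantee coverage

# Largest loan on which a 90-percent guarantee stays within SBA's limit:
max_sba_loan = SBA_EXPOSURE_LIMIT / GUARANTEE_RATE
print(f"${max_sba_loan:,.0f}")  # $833,333 -- the segmentation threshold
```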
In response to follow-up contacts, six of seven delegated authority lenders who were generally willing to provide smaller working capital loans agreed such loans were largely unprofitable for their institutions. They were willing to provide small working capital loans chiefly to develop new business opportunities with exporters or maintain their existing customer base. Over 60 percent of the respondents indicated that certain incentives would be effective in encouraging them to provide working capital loans under $833,333. These incentives included changes to Eximbank’s program, such as allowing lenders to retain a greater portion of the facility fee and further relaxing collateral requirements. Other changes were outside the scope of the Delegated Authority Program, including allowing lenders to receive credit for promoting small business export finance through the Community Reinvestment Act and relaxing loan loss reserve requirements. About 32 percent of the respondents suggested simplifying the paperwork and loan processing requirements of Eximbank’s program or increasing the guarantee to 100 percent for smaller loans. If SBA’s Export Working Capital Program were transferred to Eximbank, SMEs in some states might not have convenient access to the current lenders who participate in Eximbank’s Delegated Authority Program. Eximbank’s U.S. Division is located in Washington, D.C., and its 69 delegated authority lenders, located in 25 states plus the District of Columbia, tend to be concentrated in large metropolitan areas. In contrast, USEACs and SBA’s district offices cover all 50 states and Puerto Rico. Although Eximbank’s delegated authority lenders are in fewer states than USEACs and SBA’s district offices, Eximbank may be able to reach these businesses in other ways, such as through its City/State Program. Eximbank’s pilot programs, which began in 1996, are intended to increase the activity level of these local partners. Some SMEs currently served by SBA may not have access to Eximbank’s Working Capital Guarantee Program because of statutory restrictions on the types of loans Eximbank can guarantee and a difference in Eximbank’s and SBA’s credit qualification requirements. For example, Eximbank is prohibited from financing defense articles and services and is restricted in the amount it can guarantee for products that have less than 50 percent U.S. content. Eximbank’s seemingly stricter credit standards may also affect small exporters served by SBA. Eximbank and SBA use different methods to increase lender participation and focus on different types of lenders. If SBA’s Export Working Capital Program were transferred to Eximbank, outreach to some lenders that are currently part of SBA’s network of 7(a) lenders may be adversely affected as a result of these differences. SBA focuses on attracting new banks to its export program from its pool of domestic 7(a) program lenders, whereas Eximbank focuses more on increasing the level of loans funded by its existing lenders. Eximbank also focuses on attracting new lenders to its program but not to the same extent as SBA. During fiscal year 1995 and the first 10 months of fiscal year 1996, SBA attracted 180 new lenders to its Export Working Capital Program, whereas Eximbank attracted 80 new lenders. Although Eximbank had fewer new lenders, it increased the number of loans guaranteed by increasing participation in its Delegated Authority Program. The lenders in SBA’s program tend to have different profiles than those in Eximbank’s program.
SBA officials said they generally work with small community banks without international divisions and provide one-on-one assistance with processing export working capital loans. They also explained that the agency works with larger banks’ small business or credit departments, which typically lack experience in export financing. Although some of these larger banks may have international divisions, these divisions are generally not inclined to handle the less profitable smaller transactions. Eximbank tends to work with large banks, many with international departments that can assume delegated authority. In addition, Eximbank and SBA tend to work with banks of different asset sizes. For example, over 70 percent of the banks Eximbank works with have assets greater than $1 billion. On the other hand, almost 53 percent of SBA’s Export Working Capital Program lenders have less than $1 billion in assets, with 16 percent having assets less than $100 million. (See app. III for summary data on the assets of SBA’s Export Working Capital Program lenders and Eximbank’s delegated authority lenders.) Some SBA lenders were aware of Eximbank’s Working Capital Guarantee Program but did not use the program for a variety of reasons. In our survey of the more active SBA lenders, 25 of 28 respondents were aware of Eximbank’s program, but less than half used it. Five respondents did not use Eximbank’s program because they were satisfied with SBA’s program, three respondents believed Eximbank’s program was intended for larger export transactions, and two said the program was too bureaucratic. The potential savings associated with transferring the program from SBA to Eximbank may be modest, given the relatively low estimated costs of administering SBA’s Export Working Capital Program. Furthermore, Eximbank might incur increased costs from hiring additional loan officers to handle the new workload. On the other hand, some of these costs could be mitigated by approving more working capital loan guarantees through the Delegated Authority Program or devolving authority to approve guarantees to more states. SBA estimated it cost $460,000 to administer its Export Working Capital Program in fiscal year 1995. Under the program, SBA guaranteed 190 export working capital loans, which resulted in a potential liability of about $57 million. SBA’s estimates for administering the Export Working Capital Program include costs related to staffing and supporting USEACs. Although these estimates do not include the time or costs of the agency’s district office staff involved in handling export working capital guarantees, they represent the bulk of the agency’s administrative costs for the program, according to an SBA official. SBA and Eximbank provided us with written comments on a draft of this report. (See apps. V and VI.) SBA did not offer any overall comments on the draft but provided specific technical suggestions and observations to improve the clarity and accuracy of the draft. We have incorporated these changes in the report where appropriate. Eximbank generally agreed with the report. However, it disagreed with our observation that small businesses may have less access to export working capital if SBA’s program were transferred to Eximbank. Eximbank stated that its delegated authority lenders had greatly expanded the availability of its program to small businesses. 
Although the Delegated Authority Program has enabled Eximbank to greatly expand its program without increasing its staff, our review also identified a significant limitation that exporters may face if they must seek smaller working capital loans from delegated authority lenders. Our analysis of Eximbank data showed that, for fiscal years 1995 and 1996, only about 24 percent of the loans guaranteed through Eximbank’s delegated authority lenders were under $500,000. On the other hand, about 70 percent of the export working capital loans guaranteed by SBA were valued at less than $500,000. To understand federal and state approaches to facilitating export working capital for SMEs, we interviewed Eximbank and SBA officials and reviewed pertinent agency documents, such as working capital program instructions, summary activity reports, and related press releases. We also reviewed Eximbank and SBA documents on various arrangements aimed at facilitating export working capital guarantees, such as the Eximbank Delegated Authority Program, the Eximbank City/State Program, SBA coguarantee arrangements with states, and SBA agreements with intermediaries to package export working capital loans. To determine state efforts to facilitate export working capital for SMEs, we reviewed the National Association of State Development Agencies’ 1994 State Export Program Data (the latest comprehensive information on states’ programs available at the time of our review) as well as information from Eximbank and SBA. We identified 21 states and 1 U.S. territory using the National Association of State Development Agencies’ data and identified an additional 3 states as having export finance programs. We then surveyed and received responses from representatives at each of the 24 states and the 1 U.S. territory. To assess efforts to harmonize Eximbank’s and SBA’s export working capital programs, we interviewed officials responsible for administering each agency’s program and reviewed available program documents, such as operating guidelines and sample guarantee agreements. We also reviewed a consultant report, cosponsored by both agencies, aimed at evaluating the success of harmonization efforts. To identify whether harmonization efforts may have affected the level of program activity, we analyzed Eximbank and SBA data on the number and dollar value of loans guaranteed during fiscal years 1994 through 1996 as well as data on the number of lenders and exporters participating in the working capital guarantee programs. Data presented in the report on the number and value of loans guaranteed exclude those cases in which a guarantee was approved but was subsequently withdrawn or canceled. To identify the number of new lenders and new exporters, we compared agency data for fiscal years 1994 through 1996. New lenders or exporters for fiscal year 1995 were those that did not participate in the programs in fiscal year 1994 (before harmonization). New lenders or exporters for fiscal year 1996 were those that did not participate in the programs in the preceding 2 years. We did not independently verify the accuracy of the data provided by Eximbank or SBA. To identify the issues associated with expanding the number of cooperative agreements in the federal working capital guarantee programs, we focused on delegating authority to lenders and devolving greater responsibility for export working capital programs to the states.
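Before describing how we evaluated these approaches, note that the new-participant rule above amounts to a set difference across fiscal years. A minimal sketch, using hypothetical lender identifiers rather than actual agency data:

```python
# Hypothetical participant IDs by fiscal year (not actual agency data).
participants = {
    1994: {"bank_a", "bank_b", "bank_c"},
    1995: {"bank_b", "bank_c", "bank_d", "bank_e"},
    1996: {"bank_e", "bank_f"},
}

# New in FY1995: did not participate in FY1994 (before harmonization).
new_fy1995 = participants[1995] - participants[1994]

# New in FY1996: did not participate in either of the preceding 2 years.
new_fy1996 = participants[1996] - participants[1995] - participants[1994]

print(sorted(new_fy1995))  # ['bank_d', 'bank_e']
print(sorted(new_fy1996))  # ['bank_f']
```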
We evaluated these approaches on the basis of leveraging federal resources, SME access to export financing, and program oversight. We also examined the potential implications of transferring SBA’s export working capital program to Eximbank by focusing on SME access to export financing and lender participation. We discussed these approaches with Eximbank, SBA, and Department of Commerce officials as well as with officials of financial institutions and small business trade associations, such as the Bankers Association of Foreign Trade, the Small Business Exporters Association, and National Small Business United. To obtain lenders’ perspectives on expanding the use of cooperative agreements with banks, we surveyed the 67 lenders that were enrolled in Eximbank’s Delegated Authority Program as of February 1996 and had about an 85-percent response rate. We also surveyed all lenders that had funded at least 2 export working capital loans guaranteed by SBA during fiscal year 1995 (35 out of 150 lenders) and had an 80-percent response rate. (See app. IV for more details on the methodology for and selected results of the two lender surveys.) We obtained information on the costs associated with administering the export working capital programs to determine potential cost savings that may be derived from transferring SBA’s program to Eximbank. Since agency budget and cost data were not maintained by specific program areas, we relied on estimates provided by both Eximbank and SBA. We did our work from February to October 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees, the Administrator of the Small Business Administration, the Chairman of the U.S. Export-Import Bank, and the Chairman of the Trade Promotion Coordinating Committee. We will also make copies available to others on request. This report was prepared under the direction of JayEtta Z. Hecker, Associate Director, who may be reached at (202) 512-8984 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix VII. The U.S. Export-Import Bank (Eximbank) relies heavily on its U.S. Division and delegated authority lenders for delivering its Working Capital Guarantee Program. Figure I.1 shows the locations of the U.S. Division and delegated authority lenders. The Small Business Administration (SBA) relies on the U.S. Export Assistance Centers (USEAC) and district offices to deliver its Export Working Capital Program. Figure I.2 shows the locations of USEACs and SBA district offices. Table II.1 shows the 17 states and 1 U.S. territory in our survey that have export financing programs. Table II.2 shows the eight states that offer working capital guarantees as part of their export financing programs and the number and amount of guaranteed loans. Table II.3 compares the eight state-level export working capital guarantee programs. [Table II.2: Leveraged Guarantee Fund and Export Working Capital Loans Guaranteed by State (fiscal years 1995 and 1996). Lender asset-size categories: $100 to $250 million, $250 million to $1 billion, and greater than $1 billion.] During our review, we surveyed lenders participating in Eximbank’s Delegated Authority Program and lenders participating in SBA’s Export Working Capital Program. The following is a summary of our methodology and the responses we received for selected survey questions.
To obtain lenders’ perspectives on the Delegated Authority Program and their willingness to provide smaller export working capital loans, we surveyed all lenders participating in the program at the time of our review. Eximbank provided us with a list of all delegated authority lenders as of February 21, 1996, and a database that contained information on all working capital loans guaranteed during fiscal year 1995 and fiscal year 1996, as of February 29, 1996. We surveyed all 67 delegated authority lenders and obtained responses from 57, for a response rate of about 85 percent. Of the 10 nonrespondents, 7 had not provided any working capital loans under delegated authority, and the other 3 had provided at least one such loan under delegated authority. We pretested the survey with four lenders, one each in Arizona, California, Texas, and Washington, obtained feedback from Eximbank on the draft instrument, and made appropriate revisions. Two of our interviewers surveyed banks by telephone between April and June 1996. In some instances, banks responded by facsimile machine. To ensure data reliability and consistency, we asked appropriate follow-up questions during our telephone interviews. In some cases, we followed up with lenders who faxed in responses, obtained information on some questions not answered, and clarified certain responses. The following are selected questions as asked and the responses we received from Eximbank delegated authority lenders. The following is a list of factors that may be important to remaining enrolled in the program. Can you add any other factors to this list? Also, please identify the top three most important factors by placing a “1” by the most important factor, a “2” by the second most important factor, and a “3” by the third most important factor. If there is a demand, how likely would your bank be to provide export working capital loans for less than $833,333 to small businesses with whom you have an established banking relationship? (Please check one.) If there is a demand, how likely would your bank be to provide export working capital loans for less than $833,333 to small businesses with whom you do not have an established banking relationship? (Please check one; n = 56.) Is there some amount under which your bank would generally not provide export working capital loans? In your opinion, how effectively would the following incentives encourage banks to provide more export working capital to small businesses in general? (Please identify any other incentives you feel would be effective and please check one box in each row.) We asked lenders participating in SBA’s Export Working Capital Program about their banks’ policies and opinions on SBA’s program. SBA provided us with a list of 150 lenders enrolled in its Export Working Capital Program as of August 1995 and a database of all loans guaranteed through the program during fiscal year 1995. During fiscal year 1995, 17 lenders did not make any loans guaranteed through the program, 98 lenders made only 1 loan, and the remaining 35 made 2 or more such loans. We surveyed all 35 participants that had funded 2 or more loans guaranteed through SBA’s Export Working Capital Program during fiscal year 1995 and received responses from 28 lenders, an 80-percent response rate. These 28 lenders had asset sizes ranging from $33 million to $44 billion and were located throughout the United States.
The seven nonrespondents surveyed had asset sizes that ranged from approximately $84 million to $49 billion and were also located throughout the United States. We pretested the questionnaire with four lenders, one each in Oregon, New Jersey, New York, and Washington, D.C., in early July 1996, and made appropriate revisions. Two of our interviewers surveyed the lenders by telephone between July and August 1996 and allowed some lenders to respond by facsimile machine. To ensure data reliability and consistency, we reviewed and performed edit checks of the instruments returned by facsimile machine. The following are selected questions as asked and the responses we received from SBA lenders. Similar to SBA’s Export Working Capital Program, Eximbank administers a working capital guarantee program. Are you aware of this program? (Yes: 25; No: 3.) Does your department currently use Eximbank’s working capital guarantee program? (Yes: 13; No: 12.) Overall, how would you characterize your department’s experience with Eximbank’s program? (Very positive: 1; Generally positive: 7; Neutral: 3; Generally negative: 0; Very negative: 0; Not sure: 2; n = 13.) In your opinion, which of the following reasons best explains why your department has not used Eximbank’s working capital program? (Please rank up to 3 reasons by placing “1” by the most important reason, “2” by the second most important, and “3” by the third most important.) The following are GAO’s comments on SBA’s letter dated November 26, 1996. 1. Data were not readily available for us to make a comparable statement regarding the average number of employees per company that have obtained working capital loans guaranteed by Eximbank. 2. Eximbank staff are presently located at 4 of the 19 USEACs currently in operation. Eximbank has indicated that it believes a combination of its regional office staff and city/state participants would be able to respond to regional USEAC needs for Eximbank services. The following are GAO’s comments on Eximbank’s letter dated December 2, 1996. 1. We modified the report to better highlight the difference in SBA’s and Eximbank’s credit qualification requirements. Eximbank requires borrowers to have a positive net worth; SBA does not. Eximbank’s requirement is stipulated in its working capital guarantee program instructions. Also, ProAction Agency, the consultant commissioned by Eximbank and SBA to evaluate harmonization efforts, reported in May 1996 that this requirement was a key difference between the two agencies’ programs. 2. The default information presented in the report is not intended to demonstrate differences in default rates between the U.S. Division and the Delegated Authority Lender Program. Further, we did not include in the report any comparison of claims after a default under Eximbank’s or SBA’s program. Rather, the report provides information on the number of defaults. 3. The report recognizes that SBA’s cost estimates for administering its Export Working Capital Program do not include the time or costs for the agency’s district office staff involved in handling export working capital guarantees. It does not attempt to make a direct comparison with Eximbank’s estimated program costs. Evelyn E. Aquino, Evaluator-in-Charge; José R. Peña, Evaluator; May M. Lee, Computer Specialist; Gerhard C.
Brostrom, Communications Analyst | Pursuant to a congressional request, GAO reviewed the current government programs that provide export working capital for small- and medium-sized enterprises (SME), focusing on: (1) federal and state approaches for providing export working capital; (2) federal efforts to harmonize the export working capital programs of the U.S. Export-Import Bank (Eximbank) and the Small Business Administration (SBA); (3) issues associated with increasing the number of cooperative agreements with lenders and devolving greater responsibility for export working capital programs to the states; and (4) the potential implications of transferring SBA's export working capital program to Eximbank. GAO found that: (1) Eximbank and SBA have programs that provide guarantees to facilitate export working capital loans for SMEs; however, the agencies emphasize different delivery approaches; (2) Eximbank implements its program primarily through a specific division within the agency and a network of lending institutions that have been delegated authority for approving the agency's working capital guarantees; (3) SBA relies primarily on specialists with lending authority that it has assigned to the U.S.
Export Assistance Centers network and on the agency's 69 district offices to implement its working capital program; (4) both Eximbank and SBA have established other arrangements with state and local offices to help administer their working capital programs; (5) eight states have export guarantee programs designed specifically to assist small businesses; these programs vary widely in their funding, staffing, and levels of export financing activity for SMEs; (6) Eximbank and SBA have harmonized certain aspects of their export working capital guarantee programs; (7) while harmonization was underway, Eximbank and SBA made other changes aimed at improving their own export finance assistance programs for small businesses; (8) these efforts to harmonize and improve their programs appear to have helped simplify the lending process, increase the number and value of loans guaranteed, and expand the number of exporters and lenders who participate in the programs; however, some program differences remain; (9) to leverage federal funds and provide SMEs with more export financing, Eximbank and SBA have set up cooperative arrangements with both the private and public sectors; (10) Eximbank also has a pilot program underway that delegates lending authority to six state export finance organizations; (11) the potential to further expand cooperative agreements would be affected by various factors; (12) the Trade Promotion Coordinating Committee proposed transferring SBA's Export Working Capital Program to Eximbank if harmonization efforts were unsatisfactory; (13) GAO identified a number of factors that would need to be considered before any transfer of program responsibility from SBA to Eximbank were to take place; and (14) these factors are: (a) some exporters currently served by SBA may not be served by Eximbank; (b) Eximbank and its network of delegated authority lenders may not be accessible to some SMEs currently assisted by SBA; and (c) the consolidation of the programs may lead to only minimal cost savings. |
CDC issues recommendations for clinicians to follow in order to prevent and control HAIs. CDC issues these recommendations in the form of evidence-based guidelines and other informal communications, such as clinical reminders, which are generally recognized as authoritative interpretations of the current scientific knowledge base regarding the prevention of HAIs. CDC develops these guidelines in collaboration with the Healthcare Infection Control Practices Advisory Committee (HICPAC)—a federal advisory committee that provides recommendations to the Secretary of HHS and to CDC and includes members from outside the federal government selected for their expertise on infection control. In 2007, CDC issued its most recent infection control guideline outlining Standard Precautions, which serves as the foundation for preventing transmission of infections during patient care in all health care settings, and includes recommendations for safe injection practices. Examples of safe injection practices include administering medication from one syringe to only one patient, administering medications from single-dose vials to only one patient, and using bags or bottles of intravenous solution for only one patient. Additionally, CDC assists state and local health departments in their investigations of possible blood-borne pathogen outbreaks resulting from unsafe injection practices and maintains information on blood-borne pathogen outbreaks. See 42 U.S.C. § 1395k(a)(2)(F)(i). For ASCs, CMS calls its health and safety standards "conditions for coverage." 42 C.F.R. Part 416, Subpart C (2011). For other types of ambulatory care facilities, such as end-stage renal disease facilities, rural health clinics, and federally qualified health centers, CMS has established different standards for participation in Medicare. See 42 C.F.R. Part 405, Subpart U (for end-stage renal disease facilities) and 42 C.F.R. Part 491, Subpart A (for rural health clinics and federally qualified health centers). ASCs must be certified as meeting CMS's health and safety standards to participate in Medicare and qualify for Medicare facility payments. As part of the agency's certification process, CMS contracts with state survey agencies to conduct on-site surveys of facilities subject to CMS's standards. These surveys include on-site inspections by a survey team, generally of two or more surveyors, who review documents, interview staff and patients, observe practices, and examine medical records to ensure compliance with CMS's standards. When surveyors find that a facility's practices do not meet CMS's health and safety standards, these discrepancies are cited as deficiencies and reported to CMS. Additionally, ASCs may instead choose to undergo accreditation by CMS-approved accrediting organizations that CMS has determined meet or exceed its standards. Facilities that are deemed as meeting CMS's standards through this means are also eligible to participate in Medicare and receive facility payments. As part of this accreditation process, accrediting organizations conduct periodic on-site surveys to ensure that facilities meet their standards, including those related to infection control. Not all ambulatory care settings are subject to CMS's health and safety standards.
For example, patients may receive a wide array of services similar to those provided at ASCs, such as endoscopy and pain management services, in facilities designated as physician offices, which may range in scale from a small office facility with a single physician to a large clinic with multiple physicians and extensive medical or surgical capabilities. However, physician offices are not subject to CMS oversight, and thus these facilities do not undergo on-site surveys. In addition, even ambulatory care facilities that could potentially meet CMS's definition of an ASC may choose not to participate in Medicare as an ASC. Consequently, these facilities would not undergo the Medicare certification or deeming processes and would not receive ASC Medicare facility payments. These efforts by CDC and CMS to prevent unsafe injection practices are ultimately attempts to change clinical practices, which research shows can be challenging. Making clinicians aware of the scientific basis for specific practices to achieve patient safety plays a role in changing their behavior, but on its own tends to bring about only modest improvement. Researchers point to other barriers that need to be overcome, including the challenge of integrating the new practice into established work flow patterns, organizational cultures in many health care settings that can be resistant to change, and the challenge of establishing open communication and accountability across distinct professional groups with differing hierarchical status, such as nurses and physicians. For example, efforts to ensure that every clinician performs hand washing or other hand hygiene prior to contact with each patient illustrate the difficulty of achieving consistent compliance with even the most basic and noncontroversial patient safety measures. See, for example, John Øvretveit, Economics and Effectiveness of Interventions for Improving Quality and Safety of Health Care - A Review of Research (Stockholm: Medical Management Centre, Karolinska Institute, 2007). Data on the extent of blood-borne pathogen outbreaks related to unsafe injection practices in ambulatory care settings are limited and likely underestimate the full extent of such outbreaks. Additionally, comprehensive data on the cost of blood-borne pathogen outbreaks to the health care system do not exist, but CDC and other officials believe these costs can be substantial for those affected by such outbreaks, including individuals, state and local health departments, and clinicians and health care facilities. According to CDC officials and others we interviewed, there are relatively few sources of information available on the extent of blood-borne pathogen outbreaks resulting from unsafe injection practices in ambulatory care settings, and these data likely underestimate the full extent of such outbreaks. Specifically, CDC tracks and keeps records of reported blood-borne pathogen outbreaks related to unsafe injection practices in the United States, which it identifies through state and local health departments seeking investigative assistance for potential outbreaks. According to CDC records, from 2001 through 2011, there were 18 known outbreaks—episodes of infection transmission where 2 or more patients became infected—of viral hepatitis associated with unsafe injection practices at ASCs and other ambulatory care settings in the United States.
In these known outbreaks in ambulatory care settings, nearly 100,000 individuals were notified to seek testing for possible exposure to viral hepatitis and HIV, and 358 of them were infected with viral hepatitis. (See app. I for more comprehensive information on the blood-borne pathogen outbreaks related to unsafe injection practices in ambulatory care settings.) In addition, over 17,000 other patients were also notified of possible exposure to blood-borne pathogens because of unsafe injection practices in ambulatory care settings outside of these 18 recognized outbreaks. These notification events were not identified as outbreaks because they did not meet CDC's definition of a blood-borne pathogen outbreak, which is an episode of transmission where two or more patients became infected and where these infections could be epidemiologically linked to a specific health care facility or clinician. Our analysis of CDC's data on the 18 known blood-borne pathogen outbreaks in ambulatory care settings indicates that these incidents were associated with one or more types of unsafe injection practices and most were related to improper use of syringes that led to contaminated medication vials or saline bags that were then reused for multiple patients (see table 1). These outbreaks were in a number of different ambulatory care facility types across multiple states. Specifically, of the 18 outbreaks, 5 occurred in pain management clinics, 5 occurred in endoscopy clinics, 3 occurred in alternative medicine clinics, and 2 occurred in hematology-oncology clinics. Additionally, two of the facilities that had outbreaks were participating in Medicare as ASCs, according to CDC officials. With the exception of these two facilities, the facilities that have experienced outbreaks were not subject to CMS's health and safety standards, which require facilities to take steps to prevent unsafe injection practices from occurring, because they are considered physician offices. Finally, while some states may appear to have more outbreaks than others, CDC officials noted that some states are more advanced in identifying, investigating, and reporting blood-borne pathogen outbreaks than others, which may make them appear to have more outbreaks. For a number of reasons, CDC officials and others we interviewed believe that the known outbreaks do not represent the full extent of blood-borne pathogen outbreaks related to unsafe injection practices in ambulatory care settings. First, blood-borne pathogen infections, regardless of how they are contracted, can be difficult to detect. According to CDC officials and others we interviewed, as well as published literature we reviewed, blood-borne pathogen infections may go undetected because most people infected with viral hepatitis either do not have symptoms for years or have only mild nonspecific symptoms. For example, a 2010 study by the Institute of Medicine reports that about 65 to 75 percent of individuals infected with hepatitis are unaware that they are infected. Many people infected with hepatitis are not aware that they have been infected until they have symptoms of cirrhosis or liver cancer many years later. Second, when symptoms do occur, it may be too late to determine the exact incident that caused the infection. Clinicians are generally required to report cases of acute hepatitis B and C infections to their state or local health department, though this varies by state.
However, according to health department officials we interviewed, tracking an infection to a specific health care facility can be difficult because treatment in a health care facility is not generally considered to be an important risk factor for these types of infections. Third, CDC officials said that while state and local health departments and even medical staff often may choose to notify CDC about potential blood-borne pathogen outbreaks, including those possibly related to unsafe injection practices, there is no requirement for such reporting. CDC officials said that the agency generally identifies that potential blood-borne pathogen outbreaks related to unsafe injection practices have occurred when state or local health departments seek CDC assistance during their investigations of potential outbreaks. However, CDC officials said that because of the variability in states' surveillance and investigation capacity, many outbreaks may not come to the attention of the health department or CDC. Lastly, available evidence indicates that the unsafe injection practices that can cause blood-borne pathogen outbreaks may be prevalent in ASCs, which increases the likelihood that other such outbreaks are occurring undetected in addition to those that have been identified. Specifically, CDC researchers found in a 2008 survey of a randomly selected sample of 68 ASCs in three states that about 28 percent of ASCs were cited for deficiencies related to injection practices or medication handling—primarily for the use of single-dose vials for more than one patient—and about 68 percent were cited for at least one lapse in basic infection control. According to CDC officials and others we contacted, while the financial costs to the health care system of blood-borne pathogen outbreaks related to unsafe injection practices can be substantial, there are no comprehensive data on the total costs attributed to such outbreaks. CDC officials said that assessing such costs is difficult because the costs are borne by different groups—for example, individuals, state and local health departments, and clinicians and health care facilities—and the costs are often intermingled with other health care costs. However, various parties have developed estimates of some of the potential and actual costs associated with such outbreaks for each of these three groups. Individuals. For individuals who are notified that they are at risk of a blood-borne pathogen infection, costs may be incurred for testing. For example, in response to a large hepatitis C outbreak in Nevada—which required notification of more than 60,000 patients to seek blood-borne pathogen testing—the Southern Nevada Health Department estimated that the laboratory costs for testing all of the potentially exposed patients would be $13.8 million. Additionally, for individuals who are infected, costs include those for short- and long-term treatment. For example, the Southern Nevada Health Department estimated that the cost of treatment for an infected patient would be about $30,000, including the direct costs for professional services, laboratory testing, and medication, but excluding the costs of annual monitoring and possible complications related to cirrhosis or liver transplants. State and local health departments.
State and local health departments may incur costs for investigating and responding to potential outbreaks, including the costs of notifying and potentially providing blood-borne pathogen testing for patients who may have been exposed to unsafe injection practices. Generally, according to health department officials we interviewed, state and local health departments do not track such costs because investigating and responding to such outbreaks is considered part of their normal duties. One exception is the case of the Nevada outbreak, where officials said such costs were calculated because of the magnitude of the outbreak. Specifically, the Southern Nevada Health Department estimated that from January 2008 through May 2009, the outbreak investigation and response cost the health department about $830,000, including $255,605 in staff time by health department employees. Clinicians and health care facilities. Clinicians and health care facilities that are directly involved in outbreaks may incur costs associated with lawsuits and settlements. For example, following the Nebraska outbreak in 2002, the Nebraska Excess Liability Fund—a fund administered by the Nebraska Department of Insurance for medical professional liability coverage—paid nearly $9 million in indemnity costs to settle 83 cases as of December 2010. In addition, clinicians who cause blood-borne pathogen outbreaks through their use of unsafe injection practices may be at risk of losing their medical licenses or facing felony charges related to the outbreak. For example, the physician and two nurse anesthetists involved in the Nevada outbreak currently face state criminal charges tied to the outbreak. In 2009, CMS substantially expanded its oversight of unsafe injection practices in ASCs by increasing both the intensity of the examination of safe injection and other infection control practices and the number of on-site surveys conducted in ASCs to determine compliance with CMS's health and safety standards. Within these health and safety standards, those relating to infection control specifically require ASCs to maintain an infection control and prevention program designed to minimize the occurrences of HAIs, such as blood-borne pathogen infections resulting from unsafe injection practices, and have a qualified professional direct this program. Safe injection practices are included under several of CMS's broader health and safety standards, which also address a number of other topics related to infection control and medication administration. To document whether ASCs are following CMS's health and safety standards related to infection control, which include safe injection practices, CMS directed all surveyors who inspect ASCs to use CMS's surveyor instrument—the Infection Control Surveyor Worksheet. The worksheet includes a section on injection practices that separately addresses such topics as the reuse of needles and syringes as well as using single- and multi-dose medication vials for multiple patients. CMS also directed the surveyors to use a tracer methodology in conjunction with the worksheet, which according to CMS officials involves observing a patient at the beginning and end of a procedure or through his or her entire procedure. In addition, for the large majority of ASCs that are surveyed by state survey agencies—about 75 percent—CMS expanded the number of ASCs that are to be surveyed each year.
Specifically, for fiscal years 2011 and 2012, CMS expects that state survey agencies will survey at least 25 percent of nonaccredited ASCs each year, an increase from its expectation that at least 10 percent of nonaccredited ASCs would be surveyed annually in fiscal year 2009, and 5 percent in fiscal year 2008. CMS also required in fiscal years 2010 and 2011 that some of the ASCs surveyed by state survey agencies be randomly selected by CMS so the agency could obtain a nationally representative sample. As part of implementing the expanded oversight of ASCs, CMS collected and plans to analyze detailed information from the Infection Control Surveyor Worksheets, but only for fiscal years 2010 and 2011. Specifically, for these 2 fiscal years, CMS required state surveyors to submit a completed copy of the worksheet for every ASC that they surveyed, in addition to their routine reporting of citations for lack of compliance with particular standards. According to CMS officials, the agency plans to use the data collected from the surveyor worksheets to determine the differences in the type and level of citations given by state survey agencies to ASCs identified as noncompliant with the agency's health and safety standards. As of May 2012, CMS officials expected to have this analysis completed in July 2012. Additionally, CMS officials said that the agency has provided CDC with the surveyor worksheet data to examine the extent of infection control problems, including unsafe injection practices, in a sample of ASCs nationwide, from which CDC officials expect to create a baseline assessment of unsafe injection practices in these settings. As of April 2012, CDC officials did not have a firm deadline for when they plan to complete this analysis because they are uncertain of how long it will take to obtain access to usable data, but the officials expect that it will be completed at some point in 2012. Although CMS will continue to direct surveyors to use the infection control worksheet to guide what surveyors observe in conducting their examinations of ASC practices, CMS officials said that the agency decided to stop collecting data directly from surveyor worksheets after fiscal year 2011. The officials said that this decision was, in part, because of the burden that this additional data collection process placed on surveyors. According to these officials, surveyor teams—which generally consist of at least two individuals—found it time consuming to consolidate and transcribe the observations of multiple surveyors into a single document and send the consolidated worksheet to CMS, in addition to their routine reporting of citations for noncompliance with particular standards. Additionally, CMS officials said the agency did not want to burden the surveyors with collecting more information from the worksheets until CMS had analyzed the information already collected. However, without continuing to collect the data from the Infection Control Surveyor Worksheets after fiscal year 2011, CMS will lose its capacity to monitor ASC compliance specifically with respect to safe injection practices, which would be necessary to track the effectiveness of its increased efforts to prevent unsafe practices.
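The sampling approach described above (at least 25 percent of nonaccredited ASCs surveyed each year, with some facilities randomly selected to form a nationally representative sample) can be illustrated with a minimal sketch. The Python below uses an invented facility roster and a simple per-state random draw; it is an illustration under those assumptions, not CMS's actual selection procedure.

```python
import math
import random

# Hypothetical roster of nonaccredited ASCs grouped by state (all names invented).
roster = {
    "State A": [f"A-ASC-{i}" for i in range(40)],
    "State B": [f"B-ASC-{i}" for i in range(12)],
    "State C": [f"C-ASC-{i}" for i in range(25)],
}

SURVEY_FRACTION = 0.25  # survey at least 25 percent of nonaccredited ASCs per year

def draw_survey_sample(roster, fraction, seed=2011):
    """Randomly select at least the required fraction of facilities in each
    state, rounding up so every state meets the minimum."""
    rng = random.Random(seed)
    return {
        state: rng.sample(facilities, math.ceil(len(facilities) * fraction))
        for state, facilities in roster.items()
    }

for state, sampled in draw_survey_sample(roster, SURVEY_FRACTION).items():
    print(f"{state}: survey {len(sampled)} of {len(roster[state])} ASCs")
```

Because each state's facilities are drawn independently, the sample is state-stratified, which is one way a nationally representative subset could be assembled while limiting the number of worksheets any single surveyor team must submit.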
CMS officials reported that they do not have access to information that would allow them to identify which citations stem in whole or in part from unsafe injection practices because the citation reports that are routinely submitted by surveyors after an ASC is inspected are based on standards that cover a mix of injection-related and other infection control or medication administration practices. Furthermore, the lack of the worksheet data will reduce CMS's ability to check the accuracy and completeness of surveyor assessments of unsafe injection practices going forward. Finally, CMS's decision to stop collecting surveyor worksheet data will prevent CDC from using these data to conduct its own analyses of the extent of unsafe injection practices in ASCs over time. While CMS has noted that collecting these data has been burdensome for surveyors, there may be various ways to ameliorate this burden so that CMS could continue to collect the information needed to track the effectiveness of its increased oversight of ASCs. For example, after 2 years of requiring a completed worksheet for every ASC surveyed, CMS could reduce the burden placed on surveyors by limiting this requirement to only those ASCs included in a random, nationally representative sample. In addition, it could adjust the size of the sample or collect the worksheet information less frequently than every year. To help encourage safe injection practices, various HHS agencies have undertaken efforts to communicate information on these practices to clinicians since our last report on HAIs was released in 2009. For example, to expand awareness and understanding of CDC's guidelines for infection control, CDC released tools targeted to specific health care settings in 2011. These tools include a summary guide for ambulatory care settings with an accompanying checklist and an infection control and prevention plan specifically for outpatient oncology centers, both of which provide basic infection prevention guidance and reaffirm adherence to CDC's infection control guidelines, including those related to safe injection practices. In addition to communicating information on safe injection practices through guidance documents, CDC has also been involved in communicating such information to clinicians in various health care settings through an educational campaign, called the One and Only Campaign. CDC developed this educational campaign in collaboration with the Safe Injection Practices Coalition—a partnership of health-care-related organizations that was formed to promote safe injection practices in all U.S. health care settings. Organizations participating in the Safe Injection Practices Coalition include clinician and facility associations, patient advocacy organizations, foundations, industry partners, and CDC. The campaign was developed in 2009, in response to incidents in which patients were notified of possible exposure to blood-borne pathogens, to help ensure that patients are protected each and every time they receive a medical injection. The One and Only Campaign is led by CDC and the Safe Injection Practices Coalition and is funded by members of the coalition and the agency through the CDC Foundation—an independent, nonprofit organization that connects CDC with private-sector organizations and individuals to build public health programs.
Since starting in 2009, the campaign's education and awareness efforts have included developing educational materials for clinicians and patients, such as brochures, posters, a video, and a continuing education webinar on safe injection practices for clinicians. Additionally, CDC funded positions in state health departments to partner with the Safe Injection Practices Coalition to help disseminate information from the One and Only Campaign and develop state-based activities to raise awareness of safe injection practices. In developing educational materials for the campaign, these state health department partners utilized focus groups and surveys to ensure that the contents were understandable to both clinicians and patients. According to CDC and CDC Foundation officials, the state health department partners also developed varied approaches to reach health care clinicians, such as developing work groups to target insurance companies to make them aware of safe injection practices and developing tool kits for clinicians and state and local health departments to promote safe injection practices. For example, the State and Local Health Department tool kit was released in April 2012 and includes injection-safety-specific resources from CDC and the Safe Injection Practices Coalition, such as an educational video, posters, and brochures, as well as other resources specific to state and local health department needs, such as information on how to build a work group and work with the media. CDC and the Safe Injection Practices Coalition have used the One and Only Campaign to target certain types of clinicians and health care settings that have previously experienced blood-borne pathogen outbreaks related to unsafe injection practices as well as to focus on clinicians more broadly. For example, the Safe Injection Practices Coalition disseminated the campaign's educational materials through the American Association of Nurse Anesthetists and the Accreditation Association for Ambulatory Health Care, both of which are coalition members. Additionally, according to CDC Foundation officials, the One and Only Campaign's educational efforts are also focused generally on all health care clinicians, and the demand for the campaign's educational materials does not appear to be driven by a particular group of clinician types or health care settings. For example, according to CDC, nearly 50,000 people viewed the Safe Injection Practices Coalition's continuing medical education activity on unsafe injection practices from July 2011 to February 2012. Viewers included a wide range of clinicians, such as anesthesiologists, surgeons, pediatricians, nurse practitioners, physician assistants, pharmacists, and other types of health care clinicians, although CDC does not have information on the health care settings in which these clinicians practice. Though CDC and the Safe Injection Practices Coalition have targeted the One and Only Campaign at certain types of clinicians and health care settings that have experienced blood-borne pathogen outbreaks in the past, these targeted efforts at the national level have generally not included other settings that have experienced outbreaks and are not overseen by CMS. Any health care setting can be the site of unsafe injection practices, but the settings not overseen by CMS, such as physician offices, may be particularly at risk because they have not been subject to CMS's increased oversight efforts, including the use of the Infection Control Surveyor Worksheet.
Furthermore, CDC does not have information on the extent to which the general efforts of the campaign have reached these settings not overseen by CMS. As a result, it is not clear if these specific settings are being reached by the campaign. According to CDC, each of the state health department partners has targeted clinicians and health care settings that were identified as problem areas in its states, which in some cases included ambulatory care settings that are not overseen by CMS. In addition, HHS's National Action Plan to Prevent Healthcare-Associated Infections: Roadmap to Elimination (Draft) (April 2012; accessed May 22, 2012, http://www.hhs.gov/ash/initiatives/hai/infection.html), developed with the Department of Defense and the Department of Veterans Affairs, now addresses ASCs and end-stage renal disease facilities. HHS has issued a draft plan that describes various next steps to prevent HAIs in these settings and proposes measurable outcomes and 5-year goals to assess progress. For ASCs, this includes continuing to disseminate evidence-based guidelines and training for infection control and safe injection practices through CDC and the One and Only Campaign. With respect to end-stage renal disease facilities, the draft plan calls for identifying the prevalence and incidence of hepatitis infections and recommendations to prevent hepatitis infections. HHS officials expect this next phase of the agency's consolidated effort to prevent HAIs to be finalized by fall 2012. Available data from CDC, though limited, indicate that there have been repeated, widespread blood-borne pathogen outbreaks related to unsafe injection practices in the United States from 2001 through 2011. In these outbreaks, patients have been infected with blood-borne pathogens—specifically hepatitis—when receiving health care in ambulatory care settings, and these infections are likely more common than is currently identified. These infections have long-term consequences that can affect a patient's health and ultimately lead to death, and the costs to all involved can be substantial. In light of the blood-borne pathogen outbreaks that have occurred, HHS agencies have taken some steps in the last few years to help prevent unsafe injection practices that can lead to blood-borne pathogen outbreaks in ambulatory care settings. CMS has expanded its oversight of health and safety standards in ASCs in ways that should help to prevent unsafe injection practices that can lead to blood-borne pathogen outbreaks, such as by using the detailed Infection Control Surveyor Worksheet to determine if facilities are following safe injection practices. If CDC and CMS proceed with their plans to analyze data collected from these worksheets, 2 years of data that CMS has already collected will be used to establish a baseline assessment of the extent of unsafe injection practices in ASCs and help CMS assess its oversight efforts to improve infection control. However, CMS may be undermining its efforts by stopping data collection after fiscal year 2011, in part because of concerns that the time and effort required in collecting the data placed a burden on surveyors. Information provided by CMS and CDC indicates that reducing unsafe injection practices is a long-term project, and their efforts may take several years to show clear results.
Without some form of continued data collection, CMS will lose its capacity to monitor ASC compliance with its health and safety standards related to safe injection practices and to monitor how well the state surveyors collect and assess information about unsafe injection practices. In addition, CDC would not have a source of nationally representative data with which to track overall trends in injection safety in ASCs. Instead of eliminating this unique source of data on injection practices altogether, CMS could address concerns regarding the burden on surveyors through other means. For example, rather than collecting the data from all surveyed ASCs, CMS could limit this data collection to a random sample of ASCs, and the size of the sample could be adjusted. In addition, it may be possible to collect the data less frequently than every year. In addition to CMS’s oversight of health and safety standards for ASCs, CDC is leading important efforts to encourage safe injection practices through the One and Only Campaign. The campaign has focused on making information generally available to all clinicians, as well as targeting some types of clinicians and health care settings that have been involved in prior blood-borne pathogen outbreaks. While raising awareness among clinicians and health care facilities will not, by itself, ensure the adoption of safe injection practices, it is an important first step. The One and Only Campaign is especially important because CMS’s oversight of health and safety standards—one primary way for HHS to influence clinicians and health care facilities to use safe practices—is only statutorily authorized for certain settings, such as ASCs. Therefore, the One and Only Campaign represents a unique opportunity to reach clinicians and facilities, such as physician offices, that are not subject to CMS’s standards. While the campaign’s efforts so far have targeted some types of clinicians and health care settings that have been involved in prior outbreaks, additional targeting of the campaign’s efforts to settings that are not overseen by CMS, such as physician offices, could help to focus available resources on the best opportunities to improve patient safety. To help strengthen HHS efforts aimed at protecting patients from infection by preventing unsafe injection practices in ambulatory care settings, we recommend that the Secretary of HHS take the following three actions: Direct CMS and CDC to work together to resume collecting data on unsafe injection practices from the Infection Control Surveyor Worksheet, or from any alternative source of comparable data, that will permit continued monitoring and assessment of unsafe injection practices in ASCs beyond fiscal year 2011. Direct CMS and CDC to use the data collected on unsafe injection practices for CMS to continue monitoring ASC compliance with health and safety standards related to infection control and for CDC to continue monitoring trends in the prevalence of unsafe injection practices in ASCs. Direct CDC to strengthen its targeting of the One and Only Campaign to health care settings that CDC has identified as having blood-borne pathogen outbreaks related to unsafe injection practices that are not overseen by CMS. We provided a draft of this report to HHS for review, and HHS provided written comments, which are reprinted in appendix II. 
In its comments, HHS concurred with our recommendations and stated that CMS and CDC have worked together to improve injection safety practices in ASCs, as well as other settings, such as dialysis facilities, nursing homes, and hospitals. HHS stated that CMS intends to resume collection of the Infection Control Surveyor Worksheet data beginning in fiscal year 2013 for a state-stratified, randomly selected subset of ASCs surveyed in that year and repeat this sampling and data collection approximately every 3 years thereafter. Additionally, HHS stated that CMS will use the data collected on unsafe injection practices to continue to monitor ASC compliance with the agency's health and safety standards related to infection control. HHS also believes that the data it collects can be used to assess trends in injection practices in ASCs over time. Lastly, HHS stated that CDC supports targeting the outreach of the One and Only Campaign toward specific clinician groups and setting types, though the agency further noted that broad outreach also remains critical as demonstrated by the wide variety of settings where blood-borne pathogen outbreaks and unsafe injection practices have been identified. We agree that broad outreach is important and should be ongoing; however, additional targeted outreach to settings that are not overseen by CMS represents an opportunity to help focus available resources to reach clinicians and facilities that have not been reached through other means, such as CMS's oversight. HHS also provided us with technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. [Excerpt from the app. I table, of which only fragments survive: for each outbreak, the table lists the number of individuals notified (for example, 3,287), the number infected with hepatitis C, hepatitis B, or both (for example, 2 or 5), and the associated unsafe practices, such as suspected syringe reuse contaminating medication vials; use of single-dose vials of propofol, contrast, lidocaine, or sodium bicarbonate for more than one patient; failure to use aseptic technique when accessing medication vials; and narcotics diversion by a clinician.] Florida Department of Health, unpublished data. R. D. Greeley, S. Semple, N. D. Thompson, P. High, E. Rudowski, E. Handschur, et al., "Hepatitis B Outbreak Associated with a Hematology-Oncology Office Practice in New Jersey, 2009," American Journal of Infection Control, vol. 39, no. 8 (2011): 663-670. New York City Department of Health and Mental Hygiene, unpublished data. E. Bancroft and S. Hathaway, "Hepatitis B Outbreak in an Assisted Living Facility," in Los Angeles County Department of Public Health, Acute Communicable Diseases Program, Special Studies Report 2010, 33-36, accessed June 26, 2012, http://publichealth.lacounty.gov/acd/reports/SpecialStudiesReport2010.pdf. W. Hellinger, L. Bacalis, R.
Kay, and S. Lange, "Cluster of Healthcare Associated Hepatitis C Virus Infections Associated with Drug Diversion" (paper presented at the Society for Healthcare Epidemiology of America 2011 Annual Scientific Conference, Dallas, Tex., April 2011). W. C. Hellinger, L. P. Bacalis, R. S. Kay, N. D. Thompson, G. Xia, Y. Lin, Y. E. Khudyakov, and J. F. Perz, "Health Care-Associated Hepatitis C Virus Infections Attributed to Narcotic Diversion," Annals of Internal Medicine, vol. 156, no. 7 (2012): 477-482. "2100 More Patients to Have Hep C Test," News4Jax.com, September 20, 2010. New York City Department of Health and Mental Hygiene, unpublished data. In addition to the contact named above, Will Simerl, Assistant Director; George Bogart; Leonard Brown; Rebecca Hendrickson; Krister Friday; Eric Peterson; and Pauline Seretakis made key contributions to this report. | Recent outbreaks of blood-borne pathogens--specifically hepatitis B and C--linked to a specific health care facility or clinician have resulted from clinicians' use of unsafe injection practices. Such infections can have serious long-term consequences for patients, including cirrhosis or liver cancer. Of the known incidents of blood-borne pathogen outbreaks attributed to unsafe injection practices--which include reusing syringes for multiple patients--most have occurred in ambulatory care settings, such as ASCs and physician offices. CMS oversees injection practices by setting and enforcing health and safety standards that apply to ASCs but not physician offices. GAO was asked to examine (1) available information on the extent and cost of blood-borne pathogen outbreaks related to unsafe injection practices in ambulatory care settings, (2) the changes in federal oversight to prevent unsafe injection practices in ambulatory care settings since 2009, and (3) other federal efforts to improve injection safety practices in ambulatory care settings. GAO reviewed CDC and CMS documentation and CDC data, and interviewed officials from various HHS agencies and other stakeholders. Data on the extent and cost of blood-borne pathogen outbreaks related to unsafe injection practices in ambulatory care settings are limited and likely underestimate the full extent of such outbreaks. An agency within the Department of Health and Human Services (HHS), the Centers for Disease Control and Prevention (CDC), collects data on outbreaks identified by state and local health departments. These data show that from 2001 through 2011, there were at least 18 outbreaks of viral hepatitis associated with unsafe injection practices in ambulatory settings, such as physician offices or ambulatory surgical centers (ASC). CDC officials and others believe that the known outbreaks do not represent the full extent of such outbreaks for a number of reasons, such as infections often being difficult to detect and trace to specific health care facilities. Additionally, comprehensive data on the cost of blood-borne pathogen outbreaks to the health care system do not exist, but CDC and other officials believe these costs can be substantial for those affected. For example, individuals may face treatment costs and health departments may face costs for investigating and notifying patients of potential exposure to infection.
Another HHS agency, the Centers for Medicare & Medicaid Services (CMS), has expanded its oversight of unsafe injection practices in ASCs since 2009 by requiring surveyors who inspect these facilities to use its Infection Control Surveyor Worksheet to document the extent to which ASCs are following safe injection practices and to survey more facilities to determine compliance with CMS's health and safety standards. Safe injection practices are included under several of CMS's broader health and safety standards that also address a number of other topics related to infection control and medication administration. As part of implementing the expanded oversight of ASCs, CMS collected and plans to analyze detailed information from these surveyor worksheets for fiscal years 2010 and 2011. This information will be used to assess CMS's oversight efforts to improve infection control and also allow CDC--with which CMS shared its data--to determine a baseline assessment of the extent of unsafe injection practices in ASCs nationally. However, in part because of concerns that collecting these data is a burden to surveyors, CMS officials said the agency stopped collecting data from surveyor worksheets after fiscal year 2011. Without some form of continued collection and analysis of injection safety data, CMS will lose its capacity to oversee how well surveyors monitor unsafe injection practices, and CDC will be unable to determine the extent of these practices. To improve injection practices, various HHS agencies have taken steps to communicate information on safe injection practices to clinicians. For example, CDC has developed tools to communicate its evidence-based guidelines to clinicians in ambulatory care settings. In partnership with other health-care-related organizations, CDC also developed an educational campaign--the One and Only Campaign--that seeks to broadly educate both clinicians and patients about safe injection practices. While the campaign has targeted some types of clinicians and health care settings that have experienced a blood-borne pathogen outbreak related to unsafe injection practices, additional targeted outreach is needed for health care settings not overseen by CMS. GAO recommends that HHS (1) resume collecting data on unsafe injection practices that will permit continued monitoring of such practices, (2) use those data for continued monitoring of ASCs, and (3) strengthen the targeting efforts of the One and Only Campaign for health care settings not overseen by CMS. HHS agreed with GAO's recommendations. |
The existing federal-aid highway formula is the vehicle for distributing billions of dollars annually for highway construction and repair and related activities to the 50 states, the District of Columbia, and Puerto Rico (hereafter called the states, unless otherwise noted). Since the mid-1980s, a number of organizations (including GAO) have suggested fundamental changes in the formula for apportioning these federal-aid funds because of perceived problems with the formula, such as its reliance, at least in part, on outdated data. Section 1098 of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) tasked GAO with reviewing the process for distributing highway funds to the states. Chapter 2 of this report evaluates the current apportionment formula. Chapter 3 discusses a process by which the Congress may reconsider the formula during the next reauthorization of the federal-aid highway program and comments on the advantages and disadvantages of several alternative formula options. ISTEA authorized funding to sustain and enhance the nation’s surface transportation infrastructure. The act provided an unprecedented authorization of $122 billion for highways, bridges, and related activities for fiscal years 1992-97. Figure 1.1 shows the annual authorization for federal highway funding since 1987 and demonstrates the dramatic increases effected under ISTEA. Except for a few minor deductions, such as those for federal administrative expenses, federal highway funds are provided to the states through the Federal Highway Administration (FHWA), which is part of the U.S. Department of Transportation (DOT). The money is distributed to the states through various formula calculations and, to a lesser extent, through congressionally designated projects. ISTEA’s authorization is funded primarily through federal highway user taxes such as those on motor fuels (gasoline, gasohol, and diesel), tires, and trucks. Funds from these sources are collected from users and credited to the Highway Trust Fund for highway and mass transit projects or related activities. The fund is divided into a highway account and a mass transit account. DOT forecasts that the income to the highway account will total $20.5 billion in fiscal year 1996. Before ISTEA, the federal-aid systems—designated routes on which federal funds may be used—were at the core of the federal-aid highway program. Designation of a road as part of a federal-aid system does not mean the road is owned, operated, or maintained by the federal government. The designation is simply the first step in establishing the eligibility of selected state and local roads for federal assistance. Previously, federal aid was apportioned to Interstate, primary, secondary, and urban highways. ISTEA, however, discarded this approach by creating only two systems: the National Highway System (NHS) and the Interstate System, which is a component of the NHS. The NHS is the centerpiece of ISTEA, and the system is expected to be the major focus for the federal-aid highway program into the 21st century. In a speech on December 9, 1993, the Administrator of FHWA noted that since the Interstate was begun in 1956, the nation’s population has grown and shifted, the economy has changed, and needs are different. To serve these needs—to extend the benefits of the Interstate system to areas not served directly by it—the NHS was conceived as a way of focusing federal resources on the nation’s most important highways. 
DOT, working cooperatively with state and local officials as well as the private sector, proposed to the Congress in December 1993 an NHS network of about 159,000 miles. This network is about 17 percent of the approximately 950,000-mile federal-aid network and includes only 4 percent of the approximately 4 million miles of public roads. However, this system would handle about 40 percent of all vehicle miles traveled and accommodate over 70 percent of all commercial truck traffic. For other roads eligible for federal assistance, a program with the characteristics of a block grant, the Surface Transportation Program (STP), provides financial assistance. In addition, ISTEA continued authorizations for a separate Bridge Replacement and Rehabilitation Program and Interstate Maintenance Program and an array of other separate highway program initiatives as well as funding categories addressing various equity issues, such as each state’s share of funding as compared with what it received in past years. ISTEA broadened the overall goals of surface transportation. Previously, the federal-aid highway program had focused on completing and preserving the Interstate Highway system and on maintaining other federal-aid highways as well as bridges eligible for federal funds. While these goals remain a part of the overall surface transportation program, ISTEA broadened the goals and included new programs, planning processes, and management systems that are intended to help ensure that the states’ transportation plans are intermodal (that is, coordinate various modes of transportation), environmentally sound, and energy efficient. For example, the Congestion Mitigation and Air Quality Improvement Program (CMAQ) directs funds to transportation projects in clean air nonattainment areas—areas that have not achieved federal standards for air quality. ISTEA also provides for increased emphasis on mobility for the elderly, disabled, and economically disadvantaged. ISTEA expanded the use of equity adjustments for the apportionment of federal-aid funds among the states. For example, it modified minimum allocation funding. It also created “hold harmless” funding, which establishes the state’s share of overall federal highway apportionments. These adjustments, which are more fully explained in chapter 2, are generally used to increase the states’ return on their contributions to the Highway Trust Fund. ISTEA also embodied quality-of-life objectives, stating that the nation’s transportation system should be economically efficient and environmentally sound, provide the foundation for the nation to compete in the global economy, and move people and goods in an energy-efficient manner. Additionally, ISTEA’s emphasis is intermodal—providing links in a seamless intermodal network that will enhance economic growth, international competitiveness, and national security. The NHS is expected to reflect this emphasis. Section 1098 of ISTEA tasked us with reviewing the process for distributing highway funds to the states. In discussion with the congressional committees identified in section 1098, we agreed to address (1) the way the formula works and the relevancy of the data used for the formula and (2) the major funding objectives implicit in the formula and the implications of alternative formula factors for achieving these objectives. 
To understand the evolution of the current formula and assist in clarifying the process by which alternative formula options might be crafted, we reviewed the history of the federal-aid highway programs. Key documents included Development and Evaluation of Alternative Factors and Formulas, published by Jack Faucett Associates in December 1986; Review and Analysis of Federal-Aid Apportionment Factors, a 1969 paper prepared in FHWA’s Policy Planning Division; Alternative Financial Formulas for Allocating Federal Highway Funds, a 1990 report by the American Association of State Highway and Transportation Officials (AASHTO); Moving Ahead—1991 Surface Transportation Legislation, a 1991 report by the congressional Office of Technology Assessment; and a report we published in March 1986. To understand how the current formula works and the ramifications of possible changes, we reviewed an FHWA publication, Financing Federal-Aid Highways, published in 1992, and discussed the formula with FHWA’s Office of Fiscal Services, which is responsible for making formula apportionments to the states. We interviewed officials from FHWA’s Legislation and Strategic Planning Division, Office of Highway Information Management, and Bridge Division. We also solicited the states’ views on the current formula and on future apportionment issues in meetings with state transportation officials from 34 states and the District of Columbia at regional or national transportation meetings in Atlanta, Chicago, and Detroit. We also held meetings with state transportation officials in our Washington, D.C., offices. In addition, we met with representatives from various transportation organizations, including the American Association of State Highway and Transportation Officials and the Surface Transportation Policy Project. While we focused our review on existing, overarching highway objectives, new components could be added to recognize the states’ capacity to fund highway needs from state resources, the states’ level of effort in meeting their own needs, and geographic differences in the cost of maintaining existing highway networks. Although similar factors have been applied in other programs, they have not been applied to highway programs in the past. But FHWA did provide a report to the Congress in 1994 that addresses measures for assessing how much of their available resources the states or local areas devote to surface transportation. Finally, working with FHWA’s Office of Policy Development, we analyzed the effect of a series of hypothetical changes to the current formula. This analysis was based on comparing the actual fiscal year 1995 funding that the states received with what they would have received under the various alternatives. In this analysis, the states’ contributions to the Highway Trust Fund were based on estimates for fiscal year 1993—the most recent year for which data were available at the time of our analysis. We performed this review in accordance with generally accepted government auditing standards. We conducted our review from January 1994 through October 1995. Although federal-aid highway funds are apportioned among the states in 13 funding categories, four programs—Interstate Maintenance, Bridge Replacement and Rehabilitation, the NHS, and the STP—accounted for 70 percent of the funds apportioned in fiscal year 1995. 
While each state’s share of funds is calculated annually for each of these separate programs, these separate calculations are essentially meaningless since the total funding for the four programs is fixed over the 6-year authorization period for ISTEA. Consequently, the total funding for the four programs does not respond to changing conditions in a state, such as increased highway use. Furthermore, the factors underlying the distribution of highway funds to the states, such as land area and postal mileage, are generally outdated and often do not reflect the extent or use of the nation’s highway system. Our March 1986 report and a study commissioned by FHWA from a contractor noted that alternative factors, such as lane miles, are more closely aligned with highway needs. The Congress has used funding adjustments to improve equity among the states. These equity adjustments, which occur towards the end of the 13-step apportionment process, increase the total amount of funds for eligible states. In fiscal year 1995, the equity funding categories increased the amount of federal highway funds apportioned to 41 states and the District of Columbia. The amount of funding that the majority of states received through the highway formula process was therefore ultimately increased by these equity adjustments. The Congress can further adjust the federal highway funds a state receives by authorizing specific projects, commonly referred to as demonstration projects. Funding for these demonstration projects is not distributed by formula. Rather, the Congress requires that particular projects receive a specified amount of funding. In ISTEA, for instance, the Congress provided $6.2 billion in funds for over 500 demonstration projects over the 6-year authorization period. The formula for apportioning federal-aid highway funds established in ISTEA is a complex arithmetic tool used by FHWA to determine each state’s share of the funds. On the basis of the formula, funding is provided for eight programs, including the NHS and STP, and for five separate mechanisms to raise individual states’ funding levels to achieve certain goals for equity among the states. The calculations that determine the level of funding that each state receives for these various categories occur in a strict sequence, as illustrated in figure 2.1. During the first step of the calculation, for example, funding is provided to complete the construction of the Interstate Highway System. Funding for the other program categories is also based on separate calculations. However, as depicted in figure 2.1 and discussed later in this report, the funding for four programs—Interstate Maintenance, Bridge Replacement and Rehabilitation, the NHS, and the STP—is interdependent since a state’s total share of funding for all four programs is fixed. Later steps in the formula’s calculation provide additional funding to certain states; these funding categories are legislatively designated as equity adjustments. Equity adjustments generally address the concerns of states that contribute a greater share of highway user taxes than they receive in federal-aid highway funds. Equity adjustments also provide each state with the same relative share of overall funding that it received in the past. In fiscal year 1995, these equity adjustments represented 16 percent of the total funds apportioned. Figure 2.1 outlines the sequence of the equity adjustments and program funding categories (app. I provides additional details). 
However, DOT has proposed changes to the existing equity adjustments and program categories. The changes, proposed in DOT’s fiscal year 1996 budget justification, were preceded by the statement that if less federal money will be invested in transportation, state and local governments need to have greater authority and flexibility to decide which projects are most important. DOT has stated that it will provide an authorization proposal for such changes at an appropriate time. While apportionments for highway programs are based on individual calculations, for some programs the dollar amount apportioned by formula has little practical meaning because the states have substantial flexibility to transfer funds from one program category to another. For example, with the Secretary of Transportation’s approval, up to 100 percent of a state’s apportionment for the Interstate Maintenance and NHS programs can be transferred to the state’s surface transportation program. In addition, ISTEA’s flexible funding provisions have allowed decisionmakers at the state and regional level to decide for themselves whether to allocate transportation funds to highway or transit projects. ISTEA provided for a potential $70 billion in such flexible funding for transit or highway projects over 6 years. According to DOT’s preliminary data through the end of fiscal year 1995, $2,160.6 million in highway funds had been transferred to transit projects and $2.2 million in transit funds had gone to highway projects. In fiscal year 1995, 70 percent of the funding under the formula went to the four largest programs—the Interstate Maintenance Program, Bridge Program, NHS, and STP. Separate calculations determine each state’s share of funds for the Interstate Maintenance Program, Bridge Program, and NHS. Nonetheless, these program-specific calculations are essentially meaningless because each state’s ultimate share of funding for all four programs is fixed. With a few minor adjustments, these fixed shares derive from the shares of funds that the states received, on average, during fiscal years 1987-91 for the predecessor programs that ISTEA consolidated into these four new programs. As a result, the states’ funding shares for the four major programs are divorced from current conditions, as the states’ current and future shares of total funding for these programs must equal the adjusted historical shares. In practice, the states’ funding shares for these programs remain fixed over time because the final program included in the four-part calculation—the STP—behaves as an adjuster. The states’ funding levels for the other three programs—the Interstate Maintenance Program, Bridge Program, and NHS—are independently calculated on the basis of factors specific to each program. After those calculations are completed, however, each state’s STP funding is determined by simply taking the difference between (1) the state’s predetermined share of the total funding available for the four programs and (2) the amount the state is actually scheduled to receive for the three independently calculated programs. This means that any annual increase or decrease in a state’s funding for the Interstate Maintenance Program, Bridge Program, or NHS must be offset by a corresponding, reciprocal change in the STP funds the state receives for that same year. Figure 2.2 illustrates this zero-sum game through a hypothetical example involving 2 years and two states. 
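Before turning to the example in figure 2.2, the adjuster mechanism can be reduced to one line of arithmetic: a state's STP amount is its fixed share of the four-program total minus its three independently calculated program amounts. The minimal Python sketch below uses the fixed shares from the figure (1.9 and 1.75 percent) but invented dollar amounts; it illustrates the zero-sum relationship and is not actual apportionment data.

```python
# Minimal sketch of the four-program zero-sum adjustment described above.
# Fixed shares follow figure 2.2; all dollar amounts are invented.

def stp_apportionment(fixed_share, im, bridge, nhs, four_program_total):
    """A state's STP amount is its fixed share of the four-program total
    minus its independently calculated IM, Bridge, and NHS amounts."""
    return fixed_share * four_program_total - (im + bridge + nhs)

FOUR_PROGRAM_TOTAL = 10_000_000_000  # hypothetical total for the four programs

states = {
    "State A": {"share": 0.0190, "im": 60_000_000, "bridge": 45_000_000, "nhs": 30_000_000},
    "State B": {"share": 0.0175, "im": 55_000_000, "bridge": 40_000_000, "nhs": 28_000_000},
}

for name, s in states.items():
    stp = stp_apportionment(s["share"], s["im"], s["bridge"], s["nhs"], FOUR_PROGRAM_TOTAL)
    total = s["im"] + s["bridge"] + s["nhs"] + stp
    # The state's overall share stays fixed no matter how the three
    # independently calculated amounts shift from year to year.
    print(f"{name}: STP = ${stp:,.0f}; four-program share = {total / FOUR_PROGRAM_TOTAL:.2%}")
```

Any gain in a state's Interstate Maintenance, Bridge, or NHS amount is offset dollar for dollar by a smaller STP amount, so the program-specific calculations cannot change the state's bottom line.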
In the example, both State A and State B experience shifts in their apportioned funding between fiscal years 1993 and 1994. State A loses funds for the Interstate Maintenance and Bridge programs, while State B gains funds in both of these categories. However, for both states these shifts are rendered irrelevant because they are offset by a corresponding change in the states' STP funding levels. As a result, State A has 1.9 percent of the total funding available for the four programs in both fiscal years, despite its losses in funding for the Interstate Maintenance and Bridge programs. State B is locked into a 1.75-percent share in both years, despite its gains in funding for the Interstate Maintenance and Bridge programs. Not only is the total funding for the four major programs fixed over the life of ISTEA, but the funding for the two largest programs—the NHS and STP, together accounting for 40 percent of all the funding apportioned in fiscal year 1995—is based, in part, on underlying factors that are largely irrelevant to the highway system's needs. As we reported in March 1986, the factors that influenced the historical targets for funding in the federal-aid highway program—land area, postal mileage, and population—are not closely related to the highway system's needs. Furthermore, our March 1986 report and an FHWA-sponsored study indicated that alternative factors, such as lane miles and annual contributions to the Highway Trust Fund, are more closely aligned with highway needs. In our March 1986 report, we found that the factors used to apportion certain highway funds—land area, postal mileage, and population—were not closely related to the highway system's needs. At the time of our report, the data on which these factors were based were already between 40 and 70 years old. Specifically, the report detailed the following problems with the factors: A state's land area was originally included as a factor in the distribution formula in 1916. Land area was thought to provide a balance for the factor based on population and to reflect a state's future highway needs. However, this approach resulted in large but sparsely populated states receiving larger apportionments than they otherwise would have. In addition, land area no longer bears a close relationship to future highway needs, namely the need for new construction, since the highway system is no longer growing rapidly throughout the country. Postal mileage was included as a formula factor in 1916 to provide a constitutional justification for federal involvement in highways (the power to establish post offices and post roads). By 1919, changes to the highway legislation had ended the need for this justification. In addition, since postal mileage is computed on the basis of the distance traveled both on and off the federal-aid highway system, it is unrelated to either the extent of the federal-aid highway network or its use. Population figures for formula use were derived every 10 years from the census. As a result, changes in the states' populations were accounted for only at 10-year intervals. This problem has been exacerbated under ISTEA, since the population data underlying the states' historical shares for ISTEA's major funding calculations are, in part, based on 1980 population data, not the more current 1990 data. In our March 1986 report, we also identified those factors previously suggested to the Congress as consistent with basic federal highway programs and for which data were available.
Our results supported lane miles as a direct measure of the size of the road network and thus as a reflection of the extent of the system to be preserved. In addition, we found that vehicle miles traveled and motor fuel consumption reflected the extent of highway use. We recognized that each of these factors has its own advantages and disadvantages in establishing a formula. (The advantages and disadvantages of certain formula factors are addressed in ch. 3.) Finally, we recognized that changing the factors used in certain highway apportionment formulas would result in some states’ receiving more or less funds than they did under the then-current formulas. We suggested that to lessen these impacts, a transition period could be provided during which the full effect of the formulas would be gradually introduced. However, the Congress elected not to change the basic formula structure. In December 1986, Jack Faucett Associates, a consultant for FHWA, issued a report evaluating alternative apportionment formulas for highway funds that included correlation analysis. Using this tool, the report showed, for example, that a state with a large number of vehicle miles traveled on the Interstate would also have a high requirement for repairs to the Interstate. Similarly, the states that contributed large amounts of revenue to the Highway Trust Fund, reflecting substantial use of motor fuels, were also shown to require more repairs of the Interstate. The correlation analysis was reported in terms of values between zero and one. The closer the value is to 1, the closer the correlation between the factor and the need for repairs to the Interstate. Table 2.1 shows the correlation between selected apportionment factors and the states’ need for Interstate repairs, as reported. As table 2.1 indicates, the highest correlations—at least 0.900—existed for vehicle miles traveled and annual contributions to the Highway Trust Fund. Interstate lane mileage, contributions to the Highway Trust Fund over time, and total motor vehicle registrations also showed fairly strong correlations with the need to repair the Interstate. This was not the case, however, for weather-related variables or per capita income. Furthermore, the strong correlations between certain of these factors and major needs for repair diminished for federal-aid highways other than the Interstate. Equity adjustments were designed to address the concerns of the states that contribute a greater share of highway user taxes than they receive in federal-aid highway funds. In addition, another adjustment provides each state with the same relative share of overall funding that it received in the past. The three equity adjustment categories described below—Minimum Allocation, 90 Percent of Payments Adjustment, and Donor State Bonus—address the concerns of those states that contribute more in highway user taxes than they receive in federal-aid highway funds: The Minimum Allocation guarantees a state an amount such that its percentage of the total apportionments and prior-year allocations from certain highway funding categories is not less than 90 percent of the state’s estimated percentage of contributions to the Highway Trust Fund’s Highway Account. The 90 Percent of Payments Adjustment ensures a state that selected apportionments for the fiscal year and allocations in the previous fiscal year will equal at least 90 percent of its contributions to the Highway Trust Fund’s Highway Account. 
The Donor State Bonus, as implemented by FHWA, compares each state’s projected contributions to the Highway Trust Fund in the fiscal year with the apportionments that the state will receive in that fiscal year. Starting with the state having the lowest return (apportionments compared with contributions), each state is brought up to the level of return for those states with the next highest level of return. This process is repeated successively for each state until the funds authorized for this funding category in that fiscal year are exhausted. Finally, a fourth adjustment category, referred to as Hold Harmless, addresses a different objective—preserving the states’ historical funding share, recognizing the legislative compromises embedded in ISTEA. ISTEA established a percentage for selected apportionments and prior-year allocations that each state must receive annually. For example, this legislatively prescribed funding percentage is 1.74 for Alabama, 0.41 for Delaware, 0.69 for Idaho, 3.72 for Illinois, and 4.36 for Massachusetts. These funding percentage shares can result in a state’s receiving an addition to the regular apportionments, so that the state’s total apportionment will equal the established percentage. As figure 2.1 showed, the calculations that determine the level of funding that each state receives for the various funding categories occur in a strict sequence. All of the equity adjustments come into play late in the sequential calculation. Therefore, these adjustments essentially increase the funding calculated for a state up to that point. For example, if a state is hypothetically entitled to a total apportionment of $500 million on the basis of the Hold Harmless provision, it will receive that amount regardless of whether all the calculations up to that point yielded a total of $200 million, $300 million, or $400 million. In fiscal year 1995, equity adjustments accounted for $2.8 billion (16 percent) of the approximately $18 billion distributed to the states. Only nine states—Colorado, Connecticut, Hawaii, Maryland, Pennsylvania, Rhode Island, South Carolina, Virginia, and Washington—and Puerto Rico did not receive funding through equity adjustments in fiscal year 1995, as highway apportionments for each of these jurisdictions met all of ISTEA’s stated equity criteria on the basis of the funding for the programs alone. For the other 41 states and the District of Columbia, the total amount of federal highway funding apportioned in fiscal year 1995 was ultimately increased by equity adjustments. Funding for demonstration projects is distinct from apportionments to the states in that the authorized funding for such projects is not distributed by formula. Rather, the Congress directs how certain funds are to be distributed by requiring that particular projects receive a specified amount of funding. Funding for such projects is authorized by the congressional committees with jurisdiction over highway appropriations and authorizations. The amount of federal funds authorized for demonstration projects has grown since 1982. ISTEA alone authorized over $6.2 billion over 6 years for 539 demonstration projects. While some demonstration projects address critical transportation problems and can be considered nationally significant, authorizing a large number of such projects could prove troublesome. 
As we noted in a 1991 report and testimony in 1993 and 1995 before the Subcommittee on Transportation, House Committee on Appropriations, demonstration projects often cost more than expected. In our 1991 report, we found that for 66 projects reviewed, the federal funding and state matching funds together accounted for only 37 percent of the projects’ total anticipated costs. Future finances could be drained if extra federal funds are needed to cover the cost of completing the projects. Demonstration projects can also yield a low payoff for a variety of reasons, including the fact that they frequently are not aligned with the states’ transportation priorities, can languish in the early stages of project development, or may never get started at all. For instance, in our 1991 report, we found that for 22 of the 66 projects reviewed, none of the authorized funds ($92 million) had been obligated, even though the projects had been authorized 4 years earlier. Figure 2.3 depicts the funding to each state for highway programs, and, if applicable, any modifications to that funding realized through either equity adjustments or funding for demonstration projects provided under ISTEA in fiscal year 1995. ISTEA authorized approximately $120 billion for highway construction and repair and related activities over 6 years, emphasized quality-of-life and intermodal objectives, revamped major highway programs, and offered states and localities unprecedented opportunities to use federal highway and mass transit capital funds across modal lines. But the factors underlying the distribution of funds for two of the largest highway programs—the NHS and STP—essentially remained the same, since each state’s funding was to be based on the historical share of funds the state received from major programs before ISTEA was enacted. Locking in the status quo on the basis of historical funding averages has also been supported through two other funding avenues. First, a state’s total funding share for the four largest programs is fixed over the life of ISTEA. Second, the Hold Harmless equity adjustment category serves to raise the states’ ultimate level of annual funding to a predetermined percentage share of the total funding available. These percentage figures, which are spelled out in ISTEA and remain fixed for the act’s duration, were derived primarily from historical averages rather than current circumstances. For major highway programs, the data underlying the distribution of highway funds to the states are generally outdated, unresponsive to changing conditions, and often not reflective of the nation’s highway system or its usage. Furthermore, as mentioned above, because the percentage share is fixed for the four largest programs, any updated data that are factored into the calculation for two of these programs are negated. On the basis of our analysis and discussions with federal and state transportation officials, ISTEA’s myriad objectives for highways can be placed into four overarching categories: (1) maintaining and improving the highway infrastructure, (2) returning the majority of funds to the state where the revenue was generated, (3) fostering social benefits, and (4) safeguarding the states’ historical funding shares. The first two objectives translate into formula components that are at the core of the distribution process. 
Addressing the states' highway needs, such as the miles of highway in need of repair and the deterioration of the highway associated with traffic loads, is a primary objective in the distribution of highway funds. The second component, calling for a return of funds to the state in which they were generated, supports a congressional objective of having the states receive a substantial return on the federal fuel and other tax receipts that they generate and contribute to the Highway Trust Fund. The third and fourth objectives discussed in this chapter could be met through formula components that would distribute funds set aside from the regular apportionment process. A portion of formula funding could be devoted to social goals by, for example, directing a portion of funding to selected purposes such as improving air quality and conserving energy. Finally, a share of funding could be set aside and used to protect the states' historical funding shares. The formula objectives may be used singly or in combination and may further be targeted to specific program categories—such as the NHS—that are deemed to merit special attention. While we focus in this report on the existing, overarching highway objectives, new components could be added to recognize the states' capacity to fund highway needs from state resources, the states' level of effort (LOE) in meeting their own needs, and geographic differences in the cost of maintaining existing highway networks. Although similar factors have been applied in other programs, they have not been applied to highway programs in the past. But FHWA did provide a report to the Congress in 1994 that addresses measures for assessing how much of their available resources the states or local areas devote to surface transportation. (App. III provides additional details on FHWA's study.) The task of revising the formula for distributing highway funds will be difficult because needs vary across the country and objectives conflict with one another. For example, relieving congestion is more pressing in the country's urban areas, whereas maintaining connections across long distances is more pressing in sparsely populated rural areas. The analysis presented in this report is intended to provide the Congress with formula alternatives that reflect the key objectives governing the current federal-aid highway formula. Many individual factors making up a formula are capable of supporting the principle of distributing funds on the basis of the states' relative needs. One possibility would be to use factors that relate to the states' actual needs, such as the states' miles of poor pavement or number of deficient bridges. In this approach, the states with the poorest highway conditions would be granted a larger share of the funds than the states with better highway and bridge conditions. However, a formula based on direct measures of need could prove problematic. The use of actual needs can foster a perverse incentive by potentially encouraging the states to permit their highway infrastructure to worsen in order to capture a greater share of federal highway funds. Moreover, this approach would reward the states with the poorest highway and bridge conditions while penalizing the states that have maintained these structures. In addition, the condition of highways and bridges varies considerably among the states. For instance, as of December 1993 the percentage of deficient bridges on the NHS ranged from a low of 8 percent in North Dakota to a high of 64 percent in Massachusetts.
The disadvantages of basing a formula on actual needs can be remedied through the use of proxies of need, such as those reflecting the extent or usage of a highway system, or more highway-neutral measures such as population. Such proxies have the advantage of being relatively objective and neutral. However, there is debate among the states and other transportation experts on what factors can appropriately serve as proxies for distributing highway funds. Some insight can be gained from the Faucett study performed for FHWA in 1986 and discussed in chapter 2. This study indicated a strong correlation, particularly for repairs of the Interstate, between highway needs and lane miles and vehicle miles traveled. The following sections discuss the advantages and disadvantages of certain proxies in more detail. The primary measures of the extent of the federal-aid highway network are center-line miles and lane miles. Center-line miles reflect the length of the system, whereas lane miles represent the number of lanes per section multiplied by the actual length of the section. For example, a four-lane section that is 2 miles long would equal 2 center-line miles or 8 lane miles. Some states believe that center-line miles, not lane miles, are a more appropriate factor for distributing highway funds. For instance, transportation officials from Idaho, Montana, North Dakota, South Dakota, and Wyoming told us that to the extent that road mileage is considered in a formula, center-line miles more accurately reflect interconnectivity on a national and regional basis. However, while center-line miles accurately depict overall connections through a linear measurement of highways, this measure does not capture any information on the various widths of highways, because a two-lane highway and an eight-lane highway are considered equal under this measure. The width and length of highways are reflected in lane miles, and as we noted in our 1986 report, lane miles are a good measure of the extent of the highway system (capital stock) to be preserved. In addition, using lane miles as a factor for apportioning highway funds was endorsed by a Policy Review Committee of the American Association of State Highway and Transportation Officials (AASHTO) in fiscal year 1991. As the committee noted, lane miles are a direct measure of the extent of public roads in both rural and urban areas. The committee further noted that a measure of lane miles is probably the simplest and most efficient potential apportionment factor on which to obtain accurate information and that annual data are generally available within 6 to 9 months of the close of the calendar year. Regardless of whether center-line miles or lane miles are used to indicate the extent of a system, some observers criticize the use of mileage for apportioning future highway funds because such usage could reward expansion of the system. Thus, this type of apportionment factor would tend to encourage more highway construction, to the possible detriment of adequately preserving the existing network and of considering air quality. Several actions could be taken to counterbalance such tendencies. First, as part of the third component of the formula framework discussed later in this chapter, set-asides could be established to reward those states that meet certain preservation or maintenance goals. Second, greater use of performance measures geared to preserving the existing infrastructure would help FHWA ensure that the states do not neglect needed preservation and maintenance.
As we noted in our July 1994 testimony before the Senate Committee on Environment and Public Works, performance expectations need to be established for preservation and maintenance and other important goals for the NHS. A well-maintained system is the necessary foundation for pursuing the myriad goals for the system, which include economic development, enhanced mobility, and improved air quality. Without such a foundation, system enhancements such as alleviating congestion and improving the efficient movement of goods may not be fully realized. While measures of a system’s extent provide part of the story on highway needs, the condition of the road is also an important element. Condition can be captured by measures of the use of a system, as distinct from the extent of the system. A system’s usage is typically gauged using factors such as vehicle miles traveled or consumption of motor fuel. One advantage of using data on the vehicle miles traveled as a formula factor is that they tend to be quite reliable. The AASHTO Policy Review Committee observed that data on vehicle miles traveled have been statistically designed for a high level of measurable accuracy and are relevant as an indicator of both capital and system preservation needs. Also, in the Faucett study, vehicle miles traveled garnered one of the highest correlation values, 0.913, of all the factors related to Interstate repair needs. That is, a state with a high number of vehicle miles traveled would also likely have high needs for repair of the Interstate. Another proxy of system use is motor fuel consumption. Motor fuel consumption reflects travel on all roads, not just on the federal-aid system or on roads under a state’s jurisdiction. Therefore, it would not be a precise measure for apportioning funds to specific groups of roads. These data are reported by states monthly and adjusted at year’s end. Annual data are generally available within 6 to 9 months of the close of the calendar year. Fuel consumption patterns may differ across states because of the urban-rural population mix, the amount of travel done under congested conditions, differences in physical terrain, and fuel purchases by transients in those states with lower fuel taxes, among other things. While vehicle miles traveled and motor fuel consumption correlate well with system usage, they do have some drawbacks. For example, vehicle miles traveled measure the vehicles moved rather than the people and do not account for different vehicle classifications. Moreover, both factors are largely at odds with air quality objectives, and the principle of rewarding motor fuel consumption with more highway funding also conflicts with the goal of encouraging energy conservation. New Jersey transportation officials, for instance, noted that such factors reward energy consumption and air pollution and penalize those who successfully enact measures to reduce the use of single-occupant vehicles. Similarly, transportation officials from several other states noted that the Congress has previously rejected the notion of giving vehicle miles traveled greater weight in apportioning funds, in part because of the strong environmental objections raised. As in the case of the factors related to the system’s extent, the disadvantages associated with measures of the system’s usage could be at least partially counteracted by building incentives into the formula or by creating appropriate performance standards. 
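Correlation analyses of the kind Faucett performed, including the 0.913 value for vehicle miles traveled cited above, can be reproduced with standard statistical tools. A minimal sketch using Python's statistics module (version 3.10 or later) and hypothetical state-level observations, not the actual Faucett inputs:

    from statistics import correlation  # Pearson's r

    # Hypothetical state-level observations (not the Faucett data).
    vmt_interstate = [12.1, 30.4, 8.7, 55.2, 21.9]  # vehicle miles traveled (billions)
    repair_needs   = [0.9,  2.4,  0.6,  4.1,  1.8]  # Interstate repair needs ($ billions)

    r = correlation(vmt_interstate, repair_needs)
    print(f"Pearson correlation: {r:.3f}")
    # Values close to 1 indicate a factor that tracks repair needs closely;
    # values near 0 indicate a factor with little relationship to those needs.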
A host of other factors—such as population, climatic conditions (daily mean temperature, annual snowfall, and annual precipitation), and per capita income—could also be used to determine how highway funds are distributed. Yet, as the Faucett study demonstrated, a low correlation exists between highway needs as reported by FHWA and both climatic variables and per capita income. The Executive Director of the Surface Transportation Policy Project supports the use of population levels for distributing highway funds. The Executive Director stated that to the extent that the formula uses factors such as vehicle miles traveled, lane miles, and fuel consumption, it encourages behavior that runs counter to the objectives of reducing congestion and improving air quality. In his view, population and population density would be preferable alternatives as proxies. These proxies were recommended because they were perceived as avoiding the perverse effects tied to a system's extent and usage, and because the data are sound. Population data, however, also have limitations. As noted by the Executive Director of AASHTO, the link between population and the states' highway needs is questionable. First, while funds would be targeted to congested urban areas, the approach would do little to accommodate the needs of rural areas. Second, the approach does not recognize that goods produced in sparsely populated areas ultimately must be transported to dense areas. And some state transportation officials from sparsely populated states believe that much of the traffic that occurs in densely populated areas is local. Officials from these states maintain that the promotion of interstate commerce should be a principal objective of the federal-aid highway program and that federal funds should target the highways that tend to carry national, not local, traffic. As we reported in March 1986, factors reflecting a system's extent and use in isolation do not provide a complete picture of the states' needs. Combining such factors helps to round out the formula's capacity to reflect the states' total needs. Introducing neutral factors, such as population, into the formula further diversifies the mix of factors and alters the amounts the states receive. The analysis that follows focuses on two possible blends of proxies for need. Table 3.1 provides an outline of the factors to be considered in the two alternatives. The first alternative assumes that 100 percent of total highway funds are distributed to the states based equally on total lane miles and total vehicle miles traveled. Under this alternative, 13 percent of the overall highway funds would be redistributed. Twenty-three states and Puerto Rico would receive more funds than they were apportioned in fiscal year 1995. The average dollar gain would be $102 million; $643 million would be the high end of the range (California), and $7 million would be the low end (North Carolina). Twenty-seven states and the District of Columbia would receive less funding. For these recipients, the average loss would be $87 million; the greatest loss would be $417 million (Pennsylvania), and the smallest loss would be $5 million (Oklahoma). (State-by-state details are provided in app. IV.) The second alternative assumes that 100 percent of the total highway funds are returned to the states based equally on total lane miles, vehicle miles traveled on the Interstate, and state population. Under this approach, 10 percent of overall highway funds would be redistributed.
Twenty-six states and Puerto Rico would receive more funds than they were apportioned in fiscal year 1995. The average dollar gain would be $73 million; the high end of the range would be $366 million (California), and the low end would be $2 million (Nevada). Twenty-four states and the District of Columbia would receive less funding. For these states, the average loss would be $79 million; the greatest loss would be $359 million (Pennsylvania), and the smallest loss would be $1.6 million (Alabama). (State-by-state details are provided in app. V.) One indicator of need that we intentionally omitted from the above discussion is that of the states’ contributions to the federal Highway Trust Fund. As a formula factor, these contributions have a special status because they align with two key objectives of the highway program. Not only do contributions to the Trust Fund correlate strongly with highway needs, particularly for major highways, but the states’ returns on these contributions have also been considered a key measure of equity. For years, the highway apportionment formula has endorsed, through one or more equity adjustments, the principle that the states ought to receive back a substantial portion of what they deposit into the Trust Fund. If the formula were restructured to encompass a pure return-to-origin approach, each state’s contribution to the Trust Fund would simply be returned to that state. This does not currently occur. FHWA’s data indicate that in 1993, federal highway apportionments as a percentage of the states’ contributions to the Highway Trust Fund’s highway account ranged from 83 percent for South Carolina to 707 percent for Hawaii. FHWA estimates the states’ contributions to the Trust Fund, which derive from various federal excise taxes such as the gasoline and diesel tax. Because the majority of revenues credited to the Trust Fund derive from the federal fuel tax, the states’ contributions to the Trust Fund tend to be quite closely linked with fuel consumption. As a potential formula factor, these contributions therefore offer the same kinds of advantages and disadvantages as fuel consumption does. Returning the states’ contributions to the Highway Trust Fund to their source is a relatively simple and direct way of distributing these funds. Some state transportation officials could be expected to support this approach because it would guarantee that all or a substantial amount of the revenues collected in their states would be returned to them. An advantage of returning funds to their source is that, as the 1986 Faucett study shows, contributions to the Highway Trust Fund tend to correlate highly with highway needs, particularly for major highways. However, the return-to-origin approach would not be universally attractive, as a number of states would lose funds. For instance, those states whose fuel usage is low relative to their land area and extent of highway network would be financially hurt. A prime argument made by officials from these states is that the national interest requires highways to span the wide expanses of large, sparsely populated states that are the source of goods for citizens in the population centers, but the financial resources of those states are often insufficient to construct, maintain, and operate such networks. Two additional arguments are made against the return-to-origin approach. 
First, as New York transportation officials noted, formulas based on returning contributions to the Trust Fund to the state where they are raised meet neither federal nor state transportation goals, nor national policy as set forth in ISTEA. If the primary goal of federal apportionment formulas is to return revenues from motor fuel taxes to the place they were earned, these officials questioned whether there was a need for a federal program. Second, state officials have questioned the wisdom of selecting a formula factor that is geared predominantly to fuel use. They argue that such an approach rewards greater use of motor fuel and as such contradicts federal goals of improving air quality and conserving energy. Finally, this approach would not necessarily preclude congressional direction of the use of those funds. Legislation could still specify that the returned funds be used in certain proportions for certain programs, such as the NHS. Moreover, the return could function as (1) a simple return of funds, in which states would be exempt from any or most federal oversight, or (2) a distribution of funds, in which FHWA would oversee the programs for which the funds were returned. Under a return-to-origin approach, we considered three different alternatives, which are summarized in table 3.2. Under the first alternative, $17.8 billion of the total $19.1 billion would be returned to the source. This amount would represent all the funds (including ISTEA's funds for demonstration projects) distributed to the states in fiscal year 1995, except funds for Interstate Construction. These funds were excluded since the Interstate Construction program's final apportionment was made at the beginning of fiscal year 1995, and only 14 states and the District of Columbia received funds in the program's last year. Under this alternative, 24 states would receive more funds than they were apportioned in fiscal year 1995, while the remaining states, along with the District of Columbia and Puerto Rico, would lose funds. The average dollar gain would be $67 million; the average loss would be $58 million. (State-by-state details are presented in app. VI.) Under the second alternative, the total amount of funds ($18.1 billion) apportioned to the states in fiscal year 1995 would be returned, including Interstate Construction funds but excluding those for demonstration projects. Funds for demonstration projects are excluded from this analysis because these funds are not distributed by formula. Rather, the Congress directs how certain funds are to be distributed by requiring that particular projects receive a specified amount of funding. Under this alternative, 27 states would receive more funds than they were apportioned in fiscal year 1995. The average dollar gain would be about $68 million; 23 states, along with the District of Columbia and Puerto Rico, would receive less funding. For these recipients, the average loss would be $73 million. (State-by-state details are provided in app. VII.) Under the third alternative, all funds would be returned to the states, including funds for Interstate Construction and demonstration projects along with other program funding. Thus, this alternative recognizes the full $19.1 billion distributed to the states in fiscal year 1995. Under this alternative, 24 states would gain an average of $86 million, while 26 states along with the District of Columbia and Puerto Rico would lose an average of $74 million. (State-by-state details are provided in app. VIII.)
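The share and redistribution arithmetic common to the needs-proxy and return-to-origin alternatives above follows one pattern: compute each state's share of one or more factors, average the shares with equal weights, and measure how much funding changes hands relative to fiscal year 1995. A minimal sketch with hypothetical data for three states (the state names, factor values, and fiscal year 1995 shares are illustrative only):

    def blended_shares(factors):
        # Equal weighting: a state's share is the simple average of its
        # share of each factor (e.g., lane miles and vehicle miles traveled).
        shares = {}
        for state in factors[0]:
            shares[state] = sum(f[state] / sum(f.values()) for f in factors) / len(factors)
        return shares

    def pct_redistributed(new, old):
        # The portion of total funds that changes hands between states.
        return sum(max(new[s] - old[s], 0.0) for s in new)

    lane_miles = {"X": 200_000, "Y": 120_000, "Z": 80_000}  # hypothetical
    vmt        = {"X": 150e9,   "Y": 180e9,   "Z": 70e9}    # hypothetical
    fy95       = {"X": 0.45,    "Y": 0.35,    "Z": 0.20}    # hypothetical FY 1995 shares

    new = blended_shares([lane_miles, vmt])
    print({s: f"{v:.1%}" for s, v in new.items()},
          f"redistributed: {pct_redistributed(new, fy95):.1%}")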
As mentioned previously, the first two formula components discussed above—based on needs and based on returning funds to the source—can be combined. A significant advantage of blending these components is that programs of particular concern (notably, the NHS) could receive special attention through the use of carefully targeted formula factors. In contrast, a return-to-origin approach might be more appropriate for the STP, which already has characteristics that resemble those of a block grant program and which would thus lend itself well to an approach under which funds are returned to the states. For purposes of illustration, the following two hypothetical distributions blend needs-based and return-to-origin approaches along the existing split between the STP and two other primary highway programs—Interstate Maintenance and the NHS. The current funding level for the STP represents about 40 percent of the total funds authorized for these programs. The two alternatives outlined in table 3.3 and described below maintain this distribution of funding. Under the first alternative, 33 states would receive an average of $64 million more than they were apportioned in fiscal year 1995, while 17 states, along with the District of Columbia and Puerto Rico, would lose $111 million on average. Overall, 11 percent of highway funds would be redistributed. (State-by-state details are presented in app. IX.) Under the second alternative, a slightly different redistribution pattern would emerge. The average dollar gain for 30 states would be $60 million; 20 states, along with the District of Columbia and Puerto Rico, would receive less funding than they did in fiscal year 1995. For these recipients, the average loss would be $82 million. In total, about 9 percent of the highway funds would be redistributed. (State-by-state details are presented in app. X.) While many social objectives are probably best addressed through means other than the highway apportionment formula, a portion of highway funds might nonetheless be retained to advance specific objectives and/or to counterbalance some of the potential disadvantages of the principal formula factors. For instance, a certain percentage of funds—10 percent, for example—could be set aside before the remaining funds were distributed to the states. Payments drawn from this set-aside could be used to provide bonuses to advance quality-of-life objectives, to reward improvements in the condition of highway infrastructure above a certain defined floor, and to advance highway safety. These and similar objectives are all laudable; however, restraint in selecting the objectives may be warranted to prevent the dilution of funds that could result from attempting to meet numerous objectives. One approach to distributing the set-aside moneys would be to direct set-aside funds to those states suffering from unique or concentrated needs in certain areas. A prime example of this approach is the existing Congestion Mitigation and Air Quality Improvement (CMAQ) program, which directs funds to states with particularly severe problems in air quality. ISTEA provided CMAQ with a $6 billion authorization—approximately $1 billion annually for 6 years. CMAQ is focused on investment in air quality improvements and provides funds for projects that expand or initiate transportation services that benefit air quality.
It is directed to those states that are classified as nonattainment areas for ozone and carbon monoxide (although every state, regardless of its air quality status, is guaranteed an annual minimum apportionment of 0.5 percent of the program's total funding). The advantage of such a program is that it focuses funding on precisely those areas with the greatest needs. The disadvantage is that, as occurs with the needs-based formula factors discussed earlier in this chapter, directing funding to states with specific needs can foster a perverse incentive. In the case of the CMAQ program, questions have been raised about the wisdom of essentially rewarding states for their nonattainment status, particularly given that a state loses CMAQ funding if it makes "too much progress" in improving air quality. A second approach to directing set-aside funding towards specific goals is to treat the funds as incentive payments. Incentive payments, as the name implies, do not redress shortcomings, but instead reward desired behaviors or accomplishments. For example, set-aside funding could be used to reward states that make notable and measurable improvements in the percentage of the state's pavement rated as "good" under FHWA's classification system. To emphasize the condition of the nation's most heavily traveled highways, such rewards could be further refined to focus on improvements in the condition of the NHS. One concern with providing incentives for improvements in highway conditions, however, is that the data from the states on the condition and performance of their roadways are not always reliable, making it more difficult to equitably distribute such incentive payments. In subcommittee hearings for the House Committee on Appropriations in fiscal year 1994, FHWA was questioned on significant swings in the percentage of Interstate pavement rated in poor condition, as illustrated by table 3.4. FHWA explained that the data on the condition of the pavement were based on the use of an index, referred to as the Present Serviceability Index. This index, however, represents a subjective measure of the pavement's ride quality and can be arrived at by a variety of procedures. Furthermore, FHWA noted that from time to time the states have attempted to improve their estimation of this measure, thus invalidating comparisons with data from previous years. As a result, until the reliability of these data is improved, their use as an indicator for distributing federal highway funds would be suspect and arbitrary. An alternative measure, the International Roughness Index, is a more objective measure of pavement condition (roughness), and FHWA expects this data source to play a more prominent role in the future. In 1993, the most recent year for which data are available on pavement condition, 37 states used the International Roughness Index to measure pavement condition on Interstate highways, while the remaining 13 states continued to rely on the Present Serviceability Index. For other major highways, the proportion of states using the International Roughness Index dropped to about half the states. An FHWA official noted that some states do not use the International Roughness Index because they do not have the money to purchase the necessary equipment.
Another impediment to using the International Roughness Index is that the equipment must be operated at a speed of 35-55 miles per hour, thus making it infeasible for use on certain major highways in urban areas because of the presence of other traffic, traffic signals, and other disruptions. Altering the existing formula would undoubtedly cause shifts in the states' relative shares of annual highway funding. Under a number of the scenarios presented in this chapter, more states would gain funds than would lose funds, but this overall result would be of little comfort to the states whose relative position worsened. Some individual states such as Alaska and Hawaii could lose 50 percent or more of their highway funds under any of the scenarios derived from the approaches based on needs and return to origin. Sudden and significant losses would likely play havoc with the states' planning processes and programs, and it is doubtful that the affected states would be prepared to cope with losses of this magnitude. In addition, the effect that a change in the formula would have on any state would depend on the percentage of the state's highway revenue provided by federal funds. Figure 3.1 depicts federal funds as a percentage of the states' total highway revenue. To help temper the effects of changes in the formula, any new formula might include a component designed to place a cap on the maximum percentage of loss that any individual state would be expected to bear as a result of the changes. For example, a maximum-loss cap of 20 percent might be established. Thus, if a new formula calculation caused a given state's funding to fall by 50 percent from the level it would otherwise be, the cap would come into effect and funding for the state in question would be reinstated to 80 percent of what the state would otherwise have received. The cap could be either permanent or established for a set period of time during the transition to a new funding amount. Finding the funds to shield the states from severe losses might not be as difficult as it would first appear. If the existing, intricate equity adjustments were replaced with a single, simple cap, the funds devoted to these equity adjustments in fiscal year 1995—$2.8 billion—would more than offset the states' combined losses in that year under all of the scenarios discussed in this report. The scenario resulting in the greatest adverse impact on the states—alternative 1 of the needs proxy approach—produced a combined loss of $2.4 billion. Alternatively, other categories of funding, such as those supporting highway demonstration projects (currently commanding about $1 billion per year), could be redirected to provide safeguards against sudden losses. Reauthorization of the federal-aid highway program presents the Congress with the opportunity to review the objectives associated with providing federal highway funds and the accompanying formula for distributing the funds. A review of the program's objectives could be structured to recognize differences among highways and the federal role associated with important highways, such as those included in the NHS. There are no perfect factors that embrace the breadth of ISTEA's diverse objectives as well as the states' different needs. Regardless of the factors chosen, some states will experience disadvantages that the construction of a formula may not be able to compensate for. Which states are negatively affected changes with the factors chosen and the percentage weights assigned to various factors.
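The contrast drawn here between the intricate equity adjustments and a single, simple cap can be made concrete in code. The sketch below implements a maximum-loss cap alongside the successive leveling logic of the Donor State Bonus described in chapter 2; the state figures, pool size, and function names are hypothetical illustrations rather than FHWA's actual procedures:

    def apply_loss_cap(new_amt, old_amt, max_loss=0.20):
        # A state keeps at least (1 - max_loss) of its prior funding, so a
        # 50-percent formula loss is reinstated to 80 percent of the old level.
        return max(new_amt, (1.0 - max_loss) * old_amt)

    def donor_state_bonus(apport, contrib, pool):
        # Starting with the lowest-return state, raise states to the next
        # highest return ratio until the bonus pool is exhausted.
        bonus = {s: 0.0 for s in apport}
        ratio = lambda s: (apport[s] + bonus[s]) / contrib[s]
        order = sorted(apport, key=ratio)
        group = [order[0]]
        for nxt in order[1:]:
            target = ratio(nxt)
            need = sum(target * contrib[s] - apport[s] - bonus[s] for s in group)
            if need >= pool:  # pool cannot reach the next level; split what remains
                target = ((pool + sum(apport[s] + bonus[s] for s in group))
                          / sum(contrib[s] for s in group))
            for s in group:
                add = target * contrib[s] - apport[s] - bonus[s]
                bonus[s] += add
                pool -= add
            if pool <= 1e-9:
                break
            group.append(nxt)
        return bonus

    print(apply_loss_cap(new_amt=250.0, old_amt=500.0))  # -> 400.0 ($ millions)
    print(donor_state_bonus({"A": 80.0, "B": 95.0, "C": 120.0},   # apportionments
                            {"A": 100.0, "B": 100.0, "C": 100.0}, # contributions
                            pool=10.0))  # -> {'A': 10.0, 'B': 0.0, 'C': 0.0}

The cap is a single comparison per state; the bonus requires an iterative procedure whose outcome depends on every state's position, which is part of why a single cap could be simpler to administer.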
Moreover, as noted in chapter 2, DOT has proposed changes in the system for delivering grants. Whether DOT's proposed changes are adopted or other scenarios for delivering grants are developed, the Congress will have to reach a consensus on the national objective(s) that are critical for the highway program to address; decide whether a formula is the appropriate vehicle for addressing these objective(s); and for those objectives that the formula can best address, determine the most representative factors and corresponding weight to be assigned to those factors. If an alternative formula is adopted for distributing highway funds in the future and this formula would result in dramatic funding losses for certain states, ways could be considered to reduce the magnitude of the losses—by, for example, providing for a cap on the maximum percentage of loss that any one state would be expected to bear.
DOD reported on the potential threats that insiders could pose in April 2000 when the department issued an integrated process team report with 59 recommendations for action to mitigate insider threats to DOD information systems. After the unauthorized, massive disclosures of classified information in 2010, Congress required the Secretary of Defense to establish a program for information sharing protection and insider-threat mitigation for DOD information systems. Additionally, the President in October 2011 ordered structural reforms to safeguard classified information and improve security of classified networks that were to be consistent with appropriate protections for privacy and civil liberties. E.O. 13587, among other things, established an interagency Insider Threat Task Force, known as the National Insider Threat Task Force, discussed below. In November 2012, the President issued the National Insider Threat Policy and Minimum Standards for Executive Branch Insider Threat Programs, which identified six minimum standards that executive-branch agencies were required to include in their insider-threat programs. These standards include (1) designation of senior official(s); (2) information integration, analysis, and response; (3) insider-threat program personnel; (4) access to information; (5) monitoring user activity on networks; and (6) employee training and awareness. Each minimum standard has multiple associated tasks. For more information on these minimum standards and associated tasks, see appendix II. As part of the minimum standards, departments and agencies were required to issue their own insider-threat policies and plans. DOD issued its insider-threat program policy (DOD Directive 5205.16, The DOD Insider Threat Program) in September 2014. The insider-threat program policy requires each of the department's components to issue respective insider-threat policies and implementation plans. Figure 1 shows the relationship between the White House, DOD, and DOD component actions to issue policies or plans. The National Insider Threat Task Force assists departments and agencies as they establish insider-threat programs and tailor programs to meet their particular needs. According to task-force officials, the task force conducts independent assessments of agency programs as required by E.O. 13587. The Senior Information Sharing and Safeguarding Steering Committee (co-chaired by the National Security Staff and the Office of Management and Budget and including representatives from executive departments and agencies, including DOD) is to coordinate priorities for sharing and safeguarding classified information on computer networks. According to E.O. 13587, the committee is to receive copies of the self-assessments that each agency is to conduct—commonly referred to as the Key Information Sharing and Safeguarding Indicators assessment—and copies of the independent assessments that the National Insider Threat Task Force and National Security Agency are to conduct. The National Security Agency, as co-Executive Agent for Safeguarding Classified Information on Computer Networks, is to conduct independent assessments of agency compliance with safeguarding policies and standards as required by E.O. 13587. Departments and agencies, including DOD, are to establish insider-threat programs and perform self-assessments of compliance with established standards and priorities. Various DOD organizations, as described in table 1, have responsibilities related to insider threats, specifically the protection of DOD classified information and systems.
DOD has structured its insider-threat program to include four broad types of insider threats, including cyber threats. According to an OUSD (Intelligence) insider-threat program briefing, the DOD organizations responsible for each of these threat areas are to share information to help prevent and mitigate insider threats (see fig. 2). DOD and the six selected components we reviewed have begun incorporating the minimum standards called for in E.O. 13587 into insider-threat programs to varying degrees to protect classified information and systems. Specifically, two components have established insider-threat programs that incorporate all six of the minimum standards. Conversely, the other components have taken action but have not addressed all tasks associated with the six minimum standards. For example, one insider-threat program has addressed six of the seven tasks associated with the minimum standard of "Designation of Senior Official(s)." However, that program has not completed the task that requires its senior official to submit to the agency head an implementation plan and an annual report that identifies annual accomplishments, resources allocated, insider-threat risks to the agency, recommendations and goals for program improvement, and major impediments or challenges. Similarly, all of the components we reviewed reported that they had addressed the task included in the "Monitoring User Activity on Networks" standard that states that insider-threat programs should include the technical capability to monitor user activity on classified networks. However, the means by which the selected components addressed this task varied. Specifically, according to component officials, one component was conducting more enhanced user activity monitoring for a small pilot group, and two components were conducting widespread enhanced monitoring of user activity. Two components reported that they were using an application that provides network activity information to inform user activity monitoring. According to the National Insider Threat Task Force, this application contributes to insider-threat programs but does not provide full user activity-monitoring capability. Table 2 describes our evaluation of the extent to which DOD and the six selected components had incorporated minimum standards into insider-threat programs as of January 2015. As of January 2015, DOD officials indicated that the selected components continue to take steps to develop their programs and incorporate the minimum standards. For example, DOD has drafted an implementation plan—a task in the "Designation of Senior Official(s)" minimum standard—that identifies the key milestones to incorporate the minimum standards into the department's insider-threat program. The implementation plan also requires the components to issue their own implementation plans as they establish insider-threat programs that incorporate all minimum standards in accordance with DOD's insider-threat program directive. According to DOD officials, DOD plans to issue the department's implementation plan in spring 2015. Additionally, according to National Insider Threat Task Force officials, the Senior Information Sharing and Safeguarding Steering Committee has decided to adopt a risk-based approach to how departments and agencies incorporate the minimum standards. Lower-risk organizations, which could include some DOD components, will not be required to incorporate the minimum standards to the same extent as higher-risk organizations.
The officials told us that they have not yet determined which DOD components might be characterized as lower-risk, and the committee is continuing to study the standards to determine what will be required of lower-risk organizations. In addition to the minimum standards issued by the President, DOD guidance and reports identify elements that could enhance DOD's efforts to protect classified information and systems. These elements—which are required to support DOD's broader efforts in areas such as cybersecurity, counterintelligence, and information security—are also identified in executive-branch policy and recommended in DOD and independent studies related to insider threats. For example, DOD Instruction 5240.26, DOD's 2000 insider-threat mitigation report, and Carnegie Mellon Software Engineering Institute's insider-threat guide state that DOD components should develop a baseline of normal users' activities. Also, Carnegie Mellon Software Engineering Institute and a White House review group—both of whom have recommended actions to address insider threats—stated that agencies, such as DOD, should develop risk-based analytics to detect insider-threat activity. As shown in figure 3, we developed a framework of these key elements by program phase based on our analysis of the minimum standards, DOD guidance, executive-branch policy and reports, and other guidance. DOD and the six components we reviewed have incorporated some of the 25 recommended key elements we identified from DOD guidance and reports and independent studies to mitigate insider threats. Specifically, we found that some components have incorporated key elements such as conducting internal spot checks; instituting internal controls and security controls; performing risk-based analytics; and taking personnel action. However, DOD and the six components have not incorporated all of the 25 key elements, and for the ones they have incorporated, they have not done so consistently. For example: Institute and communicate consequences. DOD Instruction 8500.01 directs DOD components to ensure personnel are considered for sanctions if they compromise, damage, or place at risk DOD information. Additionally, Carnegie Mellon Software Engineering Institute's insider-threat guide states that agencies should have policies and procedures in place that specify the consequences of particular policy violations. We found that one component published a table of penalties, which is a guide for assessing the appropriate penalty for misconduct. A second component's policy had procedures for communicating the consequences of disciplinary actions to insider-threat personnel; however, the other components we reviewed did not have similar information in their insider-threat program policies. Further, two components reported that their program processes and procedures were not fully documented, and officials from another component cited an example of component officials not instituting consequences when an incident occurred. Develop a baseline of normal activity. DOD Instruction 5240.26 directs DOD components to report anomalies, such as changes in user behavior. DOD's 2000 insider-threat mitigation report recommended that DOD create a list of system and user behavior attributes to develop a baseline of normal activity patterns. A baseline of normal activity identifies a user's normal network activity.
Additionally, according to Carnegie Mellon Software Engineering Institute’s insider-threat guide, to detect anomalies in network activity, an organization must first create a baseline of normal network activity. Three components have taken action to identify a baseline of normal user activity, but the others have not.

Share information as appropriate. E.O. 13587 states that agencies should provide policies for sharing information both within and outside of the federal government. Component officials stated there are informal processes for sharing information within DOD; however, the component officials stated that they were unaware of a process for sharing information outside of DOD.

Develop, disseminate, and incorporate best practices and lessons learned. DOD Instruction 5240.26 calls for the identification and dissemination of best practices across DOD in support of DOD insider-threat programs. Additionally, DOD’s 2000 insider-threat mitigation report recommended that DOD develop a database of lessons learned from insider-threat incidents. The report stated that not having such information severely hampers understanding of the magnitude of the insider-threat problem and the development of solution strategies. Officials at five components stated that while they sometimes develop and share best practices and lessons learned as a matter of practice, they do not have or use a formalized process of developing, disseminating, and incorporating best practices and lessons learned, such as solutions to vulnerabilities, in their insider-threat programs.
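The guidance above calls for baselining normal activity but does not prescribe a technique. As a minimal sketch of the concept only, the following example builds a per-user statistical baseline from daily activity counts and flags days that deviate sharply from it; the data, field names, and three-sigma threshold are hypothetical, and an operational capability would draw on far richer user activity-monitoring data.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical records: (user, day, daily event count), where a count might
# be files copied, print jobs, or logon hours captured by monitoring tools.
def build_baselines(records):
    """Compute each user's mean and standard deviation of daily activity."""
    per_user = defaultdict(list)
    for user, _day, count in records:
        per_user[user].append(count)
    return {user: (mean(counts), stdev(counts))
            for user, counts in per_user.items()
            if len(counts) >= 2}  # need two or more observations for a spread

def flag_anomalies(records, baselines, sigmas=3.0):
    """Flag days on which activity exceeds the user's baseline by > sigmas."""
    flagged = []
    for user, day, count in records:
        if user in baselines:
            avg, sd = baselines[user]
            if sd > 0 and (count - avg) / sd > sigmas:
                flagged.append((user, day, count))
    return flagged

history = [("user_a", day, count)
           for day, count in enumerate([10, 12, 9, 11, 10, 240])]
baselines = build_baselines(history[:-1])  # baseline from normal days only
print(flag_anomalies(history, baselines))  # [('user_a', 5, 240)]
```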
When we discussed the key elements framework with DOD officials, researchers specializing in insider threats, and a private-sector insider-threat program official, they agreed that it identified elements that would help DOD components develop and strengthen their insider-threat programs. However, DOD officials stated that they would need supplemental planning guidance that helps them identify actions, such as the key elements, beyond the minimum standards that they should take to enhance their insider-threat programs. The current DOD directive does not contain additional guidance for implementing key elements of an insider-threat program beyond the minimum standards. According to DOD component officials, the directive repeats the minimum standards but does not provide them with sufficient guidance for incorporating recommended key elements to enhance their insider-threat programs. Additionally, the draft DOD implementation plan provides guidance on the minimum standards but not recommended key elements. In January 2015, DOD officials stated that they planned to issue supplemental guidance to assist components in implementing insider-threat programs. Issuing such guidance would be consistent with federal standards for internal control, which state that organizations need information to achieve objectives, and that information should be communicated to those who need it within a time frame that enables them to carry out their responsibilities. Guidance identifying actions beyond the minimum standards could assist components in strengthening their insider-threat programs and further the department’s efforts to protect its classified information and systems.

DOD has conducted self-assessments of its insider-threat program; additionally, independent entities have assessed DOD components’ compliance with relevant policies and standards. E.O. 13587 and the national insider-threat policy require agencies to perform self-assessments that evaluate their level of organizational compliance with the national insider-threat policy and minimum standards. To meet this requirement, DOD conducts quarterly self-assessments—commonly referred to as the Key Information Sharing and Safeguarding Indicators assessment—and evaluates the extent to which the department is addressing 63 key performance indicators. These 63 key performance indicators address topics such as the implementation of the department’s insider-threat program, the management and monitoring of removable media, and the implementation of a public-key infrastructure to reduce user anonymity on classified networks. In its February 2015 quarterly self-assessment, DOD reported that it addressed all of the management and monitoring indicators for removable media. For example, DOD reported that it monitors computer systems and uses a tool to alert appropriate officials when individuals try to write to removable media such as CDs or USB devices. However, DOD also reported that it had not fully addressed other indicators, including those associated with the department’s insider-threat program. For example, DOD reported that it had not issued its program-implementation plan. DOD officials acknowledged that the department had not completed the tasks associated with the 63 key performance indicators and told us that the department will continue to focus on these efforts until they have been addressed.
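DOD’s self-assessment does not identify the removable-media monitoring tool it uses. Purely to illustrate the kind of rule such a tool applies when alerting officials to removable-media writes, here is a minimal sketch; the event layout, device classes, and authorization list are all hypothetical.

```python
REMOVABLE_CLASSES = {"usb_storage", "optical_disc"}  # USB devices, CDs/DVDs

def check_media_event(event, authorized_users, notify):
    """Alert on a write attempt to removable media by an unauthorized user.

    `event` is a hypothetical record such as:
    {"user": "jdoe", "action": "write", "device_class": "usb_storage",
     "host": "ws-114", "file": "report.docx"}
    """
    if (event.get("action") == "write"
            and event.get("device_class") in REMOVABLE_CLASSES
            and event.get("user") not in authorized_users):
        notify(f"ALERT: {event['user']} attempted removable-media write "
               f"of {event['file']} on {event['host']}")
        return True
    return False

check_media_event(
    {"user": "jdoe", "action": "write", "device_class": "usb_storage",
     "host": "ws-114", "file": "report.docx"},
    authorized_users={"transfer_agent"},  # e.g., documented mission need
    notify=print,
)
```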
DOD has conducted these self-assessments for the department, as required. However, we found that these assessments reflect either the department’s overall progress or limited information regarding actions taken by individual DOD components. This information is limited because the current assessments do not reflect the extent to which the components have accomplished tasks associated with the 63 key performance indicators. According to the draft DOD insider-threat program implementation plan, DOD components will be expected to submit self-assessments to the Under Secretary of Defense for Intelligence in 2015. In addition to its self-assessments, in 2013 DOD updated its Command Cyber Readiness Inspections to evaluate whether units had incorporated insider-threat security measures identified in a 2013 U.S. Cyber Command tasking order. U.S. Cyber Command officials indicated that the command selects units for inspection according to risk factors such as threat information and inspection histories. As of July 2014, DOD had inspected one of the six components included in the scope of our review. According to the inspection report, this component was complying with the security measures cited in the 2013 tasking order. U.S. Cyber Command officials stated that DOD intends to update the inspections in 2015 to include additional security measures developed in response to a 2014 U.S. Cyber Command tasking order. In addition to DOD’s internal assessments, the National Security Agency and the National Insider Threat Task Force separately conduct independent assessments of DOD’s protection of classified information and systems, as required by E.O. 13587. As of January 2015, the National Security Agency had assessed one DOD component since E.O. 13587 was issued in 2011. The focus of the assessment was to identify vulnerabilities, assess compliance, and assist the component with the implementation of safeguarding policies and standards in support of E.O. 13587. The assessment report identified best practices, vulnerabilities, and recommendations to resolve technical security issues. The National Security Agency conducts these assessments in its independent role as co-executive agent for safeguarding classified information on computer networks. In accordance with the executive order, the National Insider Threat Task Force has assessed four DOD components’ compliance with insider-threat policies and minimum standards. In these assessments, the task force compares the component’s policies and practices with the minimum standards. The assessments note where the component has taken action to address minimum standards and associated tasks, and also make recommendations to help the components develop their programs and address the standards. For example, in its assessment of one component, the National Insider Threat Task Force complimented the component’s system to centralize access to unclassified employee records, but recommended that the component begin issuing an annual report to its director, which is a task associated with the “Designation of Senior Official(s)” standard. Section 922 of the National Defense Authorization Act for Fiscal Year 2012 requires that DOD complete a continuing analysis of gaps in security measures and of technology, policies, and processes that are needed to increase the capability of its insider-threat program to address these gaps, and that DOD report to Congress on implementation of the requirement. Although DOD reported to Congress in March 2013 that OUSD (Intelligence) was conducting a survey to serve as a baseline foundation for a continuing analysis of gaps, in October 2014 DOD officials told us that they suspended this baseline survey and did not otherwise complete a continuing analysis of gaps. Such an analysis would have allowed DOD to define existing insider-threat program capabilities; identify gaps in security measures; and advocate for the technology, policies, and processes necessary to increase capabilities in the future. According to the officials, after consulting DOD’s Cost Assessment and Program Evaluation office about the process for conducting such a survey across the department, the department believed such an effort would not be feasible due to financial and personnel limitations. The department has not taken action to fulfill this statutory requirement since then. OUSD (Intelligence) officials stated that they believe the department has addressed the intent of the statutory requirement through the previously discussed assessments—DOD’s quarterly self-assessments, DOD’s Command Cyber Readiness Inspections, and the National Security Agency’s independent assessments. However, DOD has not evaluated and documented the extent to which these assessments define existing insider-threat program capabilities; identify gaps in security measures; and advocate for the technology, policies, and processes necessary to increase capabilities in the future, as is required by law. Similarly, DOD officials stated that the department has not informed Congress that it did not complete the actions identified in its 2013 report to Congress, because they believed the legislation required only the 2013 report.
Further, officials from OUSD (Intelligence)—which supports DOD’s senior official overseeing insider-threat programs—told us they do not review the results of the National Security Agency assessments or Command Cyber Readiness Inspection reports, though DOD Directive 5205.16 directs the senior official to monitor insider-threat program implementation progress. Without evaluating and documenting the extent to which current assessments provide a continuing analysis of gaps, reporting to Congress on the results of this evaluation, and OUSD (Intelligence) reviewing the overall results of these self- and independent assessments, the department will not know whether its capabilities for insider-threat detection and analysis are adequate and fully address the statutory requirements. National-level security guidance states that agencies should assess their risk posture as a part of their insider-threat programs. For example, the National Insider Threat Task Force’s guide states that agencies should identify their critical assets and then assess the risk to those assets. Also, the Committee on National Security Systems’ Directive on Protecting National Security Systems from Insider Threat requires the alignment of departments and agencies’ cybersecurity protections—which are part of an insider-threat program’s protective capabilities—with the assets, threats, and vulnerability assessments as determined by risk assessments. We found that DOD has not incorporated risk assessments into its insider-threat programs. DOD officials stated that they include insider threats in other risk assessments; however, these assessments are technical in nature and focus on the vulnerabilities of individual systems. These individual system risk assessments do not provide insider-threat program officials with complete information to make informed risk and resource decisions about how to align cybersecurity protections. For example, the individual system risk assessments do not identify or consider the different types of insider threats (e.g., foreign intelligence collection, individuals with a personal agenda, or unintentional actions); insider-threat vulnerabilities; or different levels of consequence that each component or organization could suffer if an insider were to exploit the vulnerability; nor do they address the overall risk to the insider-threat program. Rather than conducting a formal risk assessment for the insider-threat program, DOD CIO officials stated that they reach out to DOD component officials in an effort to maintain awareness of the department’s overall insider-threat capabilities. We found that this communication provides OUSD (Intelligence) and DOD CIO a status update of the components’ progress in achieving key performance indicators for the insider-threat program but does not include identification of components’ critical assets and risks to them, as described in the National Insider Threat Task Force’s guide. (The guide describes critical assets as elements of an agency’s mission that are essential to national security and that, if damaged, stolen, or otherwise exploited, would have a damaging effect on the agency, its mission, and national security. See National Insider Threat Task Force, 2014 Guide to Accompany the National Insider Threat Task Force Policy and Minimum Standards.)
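The task force guide does not mandate a particular scoring method. As an illustrative sketch only, a component could combine the threat types, vulnerabilities, and consequence levels discussed above into a simple per-asset risk ranking; the assets, likelihoods, and consequence values below are invented for the example.

```python
# Hypothetical critical assets, with an estimated likelihood (0-1) that a
# given insider-threat type exploits a vulnerability and the consequence
# (1-5) to the mission if it does. Risk score = likelihood x consequence.
assets = {
    "classified_network": {
        "foreign_collection": (0.30, 5),
        "personal_agenda":    (0.10, 4),
        "unintentional":      (0.40, 2),
    },
    "personnel_records": {
        "foreign_collection": (0.15, 3),
        "personal_agenda":    (0.20, 3),
        "unintentional":      (0.25, 2),
    },
}

def risk_profile(assets):
    """Rank (asset, threat type) pairs by risk score, highest first."""
    scores = [(asset, threat, likelihood * consequence)
              for asset, threats in assets.items()
              for threat, (likelihood, consequence) in threats.items()]
    return sorted(scores, key=lambda item: item[2], reverse=True)

for asset, threat, score in risk_profile(assets):
    print(f"{asset:20} {threat:20} risk={score:.2f}")
```

A ranking like this, rolled up across components, is the kind of input that could inform the alignment of security measures with threats that the national security systems directive requires.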
DOD officials stated that they believe the department has addressed the intent of a risk assessment by other means, including the Command Cyber Readiness Inspections and the National Security Agency’s independent assessments of DOD components. Officials of the Command Cyber Readiness Inspection program told us that the inspection process currently includes threat assessments, a risk-indicator matrix, and a risk assessment to prompt organizations to consider threats and risk to their missions and operations resulting from vulnerabilities found on their networks. However, these inspections do not focus on the overall component but rather on specific units within a component. Additionally, the National Security Agency told us that its independent assessments would not include all information needed for a true risk assessment. Finally, OUSD (Intelligence) officials stated that they do not currently review the results of the National Security Agency assessments or Command Cyber Readiness Inspection reports, as previously discussed. Therefore, the senior-level official does not know which specific types of risk the department is incurring. DOD officials stated that the department and its components have not incorporated risk assessments as part of their insider-threat programs in part because they have not fully implemented the department’s insider-threat program. We also found that the DOD components we reviewed have not assessed risks because DOD has not provided guidance directing components to incorporate risk assessments into their respective insider-threat programs. Until DOD provides supplemental guidance directing components to incorporate risk assessments into their insider-threat programs, components may not assess risk and DOD will not be able to determine whether current security measures are adequate or whether proposed security measures would address a component’s level of risk. Also, if DOD and its components do not align insider-threat security measures with threats, as required by the directive on national security systems, decision makers may lack information needed to make informed judgments. To help protect classified information and systems from future insider threats, in the technical area, officials from three of the six DOD components we reviewed told us that they are hoping to obtain or improve analytic tools that allow the component to identify anomalous behavior that could indicate insider-threat activities. These analytic tools would obtain data through monitoring of user activity. Specifically, officials from two DOD components told us that they currently do not have these tools, but hope to obtain them in the future. Officials from another DOD component that does have such a tool told us that the component hopes to obtain an enhanced version that will allow the tool to analyze user behavior across systems of different classification levels (i.e., across the unclassified network, secret network, and top-secret network). According to National Insider Threat Task Force officials, these tools can also merge user activity-monitoring data with other sources of data to provide analysts with additional information.
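The merging the task force officials describe amounts to joining user activity-monitoring events with other records so analysts see them in context. A minimal sketch, using invented record layouts, correlates network events with physical badge records to surface activity on days when a user was not badged into the facility:

```python
# Hypothetical feeds: user activity-monitoring (UAM) events and a set of
# (user, date) pairs showing when each user was badged into the facility.
uam_events = [
    {"user": "jdoe",   "date": "2015-01-12", "event": "bulk_download"},
    {"user": "asmith", "date": "2015-01-12", "event": "logon"},
]
badge_days = {("asmith", "2015-01-12")}

def enrich(events, badge_days):
    """Attach a badged-in indicator to each UAM event for analyst review."""
    enriched = []
    for event in events:
        record = dict(event)
        record["badged_in"] = (event["user"], event["date"]) in badge_days
        enriched.append(record)
    return enriched

for record in enrich(uam_events, badge_days):
    note = "" if record["badged_in"] else "  <- review: no badge record"
    print(record["user"], record["date"], record["event"], note)
```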
In the policy area, component officials we interviewed also identified several actions to better protect against insider threats.

DOD Insider Threat Management and Analysis Center. Officials from three of the six DOD components we reviewed told us that they need DOD to make additional decisions regarding the proposed Defense Insider Threat Management and Analysis Center. According to an OUSD (Intelligence) briefing, DOD developed the concept for such a center based on a common recommendation that was identified in a 2012 Defense Science Board report and a 2013 Washington Navy Yard shooting after-action report; a similar recommendation was also identified in a 2010 Fort Hood shooting after-action report. According to DOD’s Washington Navy Yard Task Force Implementation Plan, the center will consist of cross-functional representatives that assess risk, recommend intervention or mitigation, and oversee the completion of case action on threats that insiders may pose to DOD personnel, DOD missions and resources, or both. While this implementation plan identifies general efforts that the center could take, DOD has not issued a concept of operations and other planning documents that identify the center’s actual functions; its scope; the levels of involvement expected from the components and from DOD; the depth of analysis to be completed at the center; and the relationship between the center and the services’ existing threat-analysis centers.

Information sharing. Officials from two of the six components we reviewed cited the need for clear policies on when and how components can share information about individuals who are suspected or confirmed insider threats. Similarly, officials said that the components need clear policy about sharing information on suspicious activity that could be occurring across DOD components and other federal agencies.

Continuous evaluation. Officials from one of the six components we reviewed told us that the components need policy that addresses continuous evaluation. Continuous evaluation is the practice of reviewing background information at any time during an individual’s period of eligibility for access to classified information to determine whether the individual continues to meet the requirement for eligibility. According to DOD’s Washington Navy Yard Task Force Implementation Plan, continuous evaluation will leverage automated records checks of personnel with access to DOD facilities or classified information. These automated records checks of authoritative commercial and government data sources (e.g., criminal, financial, or credit records) will flag issues of personnel security concern. These checks are to supplement existing security processes, such as self-reporting, to more quickly identify and prioritize information of adjudicative relevance or adverse events that occur between periodic reinvestigations. According to DOD’s draft insider-threat program implementation plan, as of October 2014 DOD was still defining the organizational construct and concept of operations for continuous evaluation. DOD plans to provide this information in 2015.
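The implementation plan describes continuous evaluation only at the level of automated records checks. To make that concrete, the sketch below runs a cleared individual's record through a set of hypothetical data-source checks and collects anything flagged for adjudicative review; the sources, fields, and thresholds are illustrative, not DOD's actual criteria.

```python
# Hypothetical data-source checks; in practice these would query
# authoritative criminal, financial, or credit feeds.
def criminal_check(person):
    return [f"arrest record: {a}" for a in person.get("arrests", [])]

def financial_check(person, debt_threshold=10_000):
    debt = person.get("delinquent_debt", 0)
    return [f"delinquent debt: ${debt:,}"] if debt > debt_threshold else []

def continuous_evaluation(person, checks):
    """Run each automated records check and collect flagged issues."""
    issues = []
    for check in checks:
        issues.extend(check(person))
    return issues

person = {"name": "J. Doe", "arrests": [], "delinquent_debt": 25_000}
flags = continuous_evaluation(person, [criminal_check, financial_check])
if flags:
    # Flagged issues are prioritized for review rather than waiting for
    # the next periodic reinvestigation.
    print(f"{person['name']}: refer for review -> {flags}")
```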
DOD is not consistently collecting the information to manage and oversee insider-threat programs that could assist the Under Secretary of Defense for Intelligence in providing oversight and making recommendations to counter insider threats, such as the technical and policy changes identified above. DOD’s insider-threat program directive requires that the Under Secretary of Defense for Intelligence provide management, accountability, and oversight of the department’s insider-threat program, which includes the components’ programs. As part of these responsibilities, the Under Secretary of Defense for Intelligence is to oversee departmental capabilities and resources to counter insider threats, and make recommendations on program improvements and resources. Additionally, DOD’s defense security enterprise directive requires that the Under Secretary of Defense for Intelligence coordinate with the DOD CIO to establish enterprise investment goals informed by security-related efforts such as insider-threat initiatives. OUSD (Intelligence) officials stated that they reach out to components on an as-needed basis to obtain information about insider-threat resources. However, according to the officials, they do not have a process to consistently collect information that identifies components’ prioritized needs, such as technical and policy needs for the future, and as a result face difficulties identifying component needs and comparing them against overall goals and strategy. Without collecting information from DOD components, the Under Secretary of Defense for Intelligence may face challenges fulfilling these management responsibilities. OUSD (Intelligence) and DOD CIO officials acknowledged that information from the components about technical and policy needs would help the Under Secretary of Defense for Intelligence establish investment goals and make recommendations on program improvements and resources. According to OUSD (Intelligence) officials, they do not have a process to collect information from the components to support management and oversight duties and inform resource recommendations and investment goals because DOD has not dedicated a program office that is focused on oversight of the insider-threat program. Specifically, while DOD has designated the Under Secretary of Defense for Intelligence as the department’s senior insider-threat program official, officials stated that DOD has not identified a program office to execute the day-to-day responsibilities associated with this position, and the program is instead currently supported within an office whose mission is policy, rather than management, oriented. Identification of a program office is consistent with federal standards for internal control and Office of the Director of National Intelligence guidance. For example, federal standards for internal control call for an organizational structure that provides a framework to achieve agency objectives, including delegation of authority and responsibility for operating activities. Director of National Intelligence guidance states that fully functional headquarters-level counterintelligence programs should include at least a program manager and supporting program staff. Without identifying a program office to support the Under Secretary of Defense for Intelligence’s responsibilities in managing and overseeing DOD and components’ insider-threat programs, DOD may not be able to collect all information about DOD components’ technical and policy needs and could face challenges in establishing goals, and recommending resources and improvements to address insider threats. The recent disclosures of classified information by insiders have damaged national security, potentially placed the lives of military service members at risk, and highlighted the importance of preventing or mitigating future threats to DOD’s classified information and systems. DOD’s April 2015 cyber strategy reflects the importance of mitigating insider threats to achieve the department’s goal of defending DOD’s information network, securing DOD data, and mitigating the risk to DOD missions. DOD and its components are taking steps to address these threats by implementing programs that incorporate minimum standards.
However, DOD components have not taken action to incorporate other key elements into their insider-threat programs because DOD has not issued guidance that identifies actions beyond the minimum standards that components should take to enhance their insider-threat programs. Such guidance would assist components in developing and strengthening insider-threat programs and better position the department to safeguard classified information and systems. Gap and risk assessments allow DOD components to regularly assess the dynamic threat, vulnerability, and consequences associated with protecting classified information and systems from insider threats. While DOD has assessed aspects of its insider-threat program, it has not evaluated or documented the extent to which these assessments provide a continuing analysis of gaps as required by statute and has not incorporated risk assessments into insider-threat programs; nor have the results of the existing assessments been provided to DOD’s senior insider-threat official. Without such an analysis of gaps and risk assessments—and without the Under Secretary of Defense for Intelligence reviewing the results—DOD will face challenges understanding the extent to which its mitigations address current and evolving threats that insiders pose, and will be hampered in making more-informed management and resource decisions. In addition, as the threat evolves, DOD will need to address future technical and policy changes. However, DOD is not consistently collecting information about future technical and policy changes because it has not established an insider-threat program office. Without designating a program office dedicated to the oversight role, DOD may not ensure the collection of all information about components’ needs and could face challenges in establishing goals, and recommending resources and improvements to address insider threats.

To further enhance the department’s efforts to protect its classified information and systems from insider threats, we recommend that the Secretary of Defense take the following four actions.

We recommend that the Secretary of Defense direct the Under Secretary of Defense for Intelligence to take the following actions:

In the planned supplemental guidance to be developed, identify actions beyond the minimum standards that components should take to enhance their insider-threat programs.

Evaluate and document the extent to which current assessments provide a continuing analysis of gaps for all DOD components; report to Congress on the results of this evaluation; and direct that the overall results of these self- and independent assessments be reviewed by the Office of the Under Secretary of Defense for Intelligence.

Provide DOD components supplemental guidance that directs them to incorporate risk assessments into their insider-threat programs.

We also recommend that the Secretary of Defense take action to do the following:

Identify an insider-threat program office to support the Under Secretary of Defense for Intelligence’s responsibilities in managing and overseeing DOD and components’ insider-threat programs.

DOD provided written comments on a draft of this report, and these comments are reproduced in appendix IV. DOD concurred or partially concurred with all four of our recommendations. The Departments of Homeland Security and Justice and the Office of the Director of National Intelligence reviewed a draft of this report but did not provide any comments.
DOD agreed with our recommendation to identify in supplemental guidance actions beyond the minimum standards that components should take to enhance their insider-threat programs. DOD stated that it will publish a detailed implementation plan in 2015 to assist components in implementing multiple actions required in all insider-threat programs. Issuing an implementation plan is a positive step and one required by the minimum standards. However, as stated in our report, the draft implementation plan that we reviewed focused on actions that DOD would take to implement the minimum standards and did not provide DOD components additional information about other key elements that component officials told us would be helpful. We therefore believe that DOD needs to update its draft implementation plan before it is issued to include guidance beyond the minimum standards, or issue this guidance in another form. This will ensure that DOD components will be better positioned to enhance their insider-threat programs and the department will be better positioned to protect its classified information and systems. DOD partially agreed with our recommendation to evaluate and document the extent to which current assessments provide a continuing analysis of gaps for all DOD components, report to Congress on the results of this evaluation, and direct that the overall results of these assessments be reviewed by OUSD (Intelligence). In its comments, DOD first stated that it analyzes security gaps each quarter through its self-assessments, which identify gaps in program capabilities. While these assessments can provide DOD and its components information required under E.O. 13587, DOD did not indicate whether it would evaluate and document whether those assessments provide a continuing analysis of gaps as identified in Section 922 of the National Defense Authorization Act for Fiscal Year 2012. Such an evaluation is necessary to determine whether DOD is meeting the statutory requirement to complete a continuing analysis of gaps in security measures and of technology, policies, and processes that are needed to increase the capability of DOD’s insider-threat program to address these gaps. We believe such an evaluation is prudent since, as we stated in the report, the information from the self-assessments and independent assessments cited by DOD is sometimes limited. Therefore, we continue to believe that DOD should take steps to evaluate and document the extent to which these current assessments provide the same information as the statutorily-required analysis of gaps in order to determine the adequacy of DOD’s insider-threat detection and analysis capabilities. Second, DOD stated that it met the congressional reporting requirement with its 2013 report, which did not require additional reporting. However, as we stated in the report, DOD did not complete the actions it described in the 2013 report and thus has not provided Congress with current information that would assist it in making informed decisions about funding to address gaps in security measures. Therefore, we continue to believe that the department should report to Congress on the results of its evaluation of current assessments, which identify gaps in security measures under its program for information-sharing protection and insider-threat mitigation. 
DOD also stated that the self-assessments and independent assessments of component insider-threat programs have begun, and agreed that these assessments will be provided to OUSD (Intelligence) for review upon completion. DOD agreed with our recommendation to provide components with supplemental guidance directing them to incorporate risk assessments into their insider-threat programs. DOD stated that its forthcoming implementation plan will require components to employ a process to identify critical assets and assess the components’ risk posture. DOD also stated that other risk assessments will be considered and integrated with insider-threat risks. We agree that incorporating risk assessments will assist component leadership in making informed judgments and better enable them to align security measures with threats. DOD partially agreed with our recommendation to identify an insider-threat program office to support the Under Secretary of Defense for Intelligence’s responsibilities in managing and overseeing insider-threat programs. DOD stated that it has chartered a study to examine the feasibility and associated requirements for establishing a separate DOD insider-threat program office. DOD expects to complete this study by July 2016. Our recommendation does not state that DOD needs to establish a separate program office, but rather that DOD should identify a program office to support the Under Secretary’s responsibilities. Therefore, we would hope that, as part of its study, DOD will assign responsibility for this oversight to a program office. In its comments, DOD also referred to steps it has taken to establish the Defense Insider Threat Management and Analysis Center and described some of the center’s future capabilities. However, as we note in our report, DOD components described their need for policy about the center, and DOD has not yet issued a concept of operations and other planning documents that identify the center’s actual functions, scope, and relationships with existing service threat-analysis centers. Once DOD implements our recommendation and identifies an insider-threat program office, the Under Secretary of Defense for Intelligence will be better positioned to collect information from the components about their prioritized technical and policy needs for the future, such as policy regarding the Defense Insider Threat Management and Analysis Center. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense and Homeland Security; the Attorney General of the United States; and the Director of National Intelligence. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Joseph W. Kirschbaum at (202) 512-9971 or [email protected] or Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To evaluate the extent to which the Department of Defense (DOD) has implemented an insider-threat program that incorporates minimum standards and key elements to protect classified information and systems, we evaluated initiatives that DOD had established and policy and guidance that identify responsibilities within the department to address the threat that insiders pose to classified information and systems.
We selected a nonprobability sample of six DOD components to assess implementation efforts at the component level. These six components include three combat support agencies, one military service, one combatant command, and one service sub-command. We selected these six components based on several factors, including their specific roles in supporting DOD networks, prior insider-threat incidents, and reported progress in implementing insider-threat programs. In order to avoid duplication with an ongoing DOD Inspector General evaluation, we included only one military service. While not generalizable, the information we obtained from these selected components provided insight about steps components are taking and challenges they are encountering. We developed a questionnaire based on our research objectives, the six minimum standards issued in 2012 by the President, and industry leading practices, and solicited responses from the six selected components. We administered the questionnaire and collected responses from all six selected components and the Office of the Under Secretary of Defense for Intelligence (OUSD (Intelligence)), and conducted follow-up meetings as needed. We also collected policies and guidance related to the responses and programs. We then used the questionnaire responses and information obtained from meetings and document reviews to assess each component’s insider-threat program implementation and content. We reviewed the questionnaire responses to ensure the responses were consistent with the information we obtained. Any discrepancies were documented, and follow-up was conducted as necessary. Using a scorecard methodology, we developed a rating system to assess the components against the minimum standards to determine the extent to which the minimum standards were incorporated into component insider-threat programs. We used three ratings for assessing the incorporation of each minimum standard: addressed all tasks associated with the minimum standard, addressed at least one task, and did not address tasks. We rated components that answered yes to all questions related to a minimum standard and its associated tasks as having addressed all tasks. We rated components that answered yes to one or more questions related to a minimum standard and its associated tasks as having addressed at least one task. We rated components that did not answer yes to at least one question related to a minimum standard and its associated tasks as having not addressed any of the tasks. Two analysts independently assessed and assigned a rating to each standard and then compared their independent ratings, discussed any differences, and determined a final rating. We then compiled the final ratings into a scorecard graphic. An independent analyst reviewed our analysis and ratings for accuracy and consistency.
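The rating rules just described map directly onto a small function. The sketch below applies the three ratings to hypothetical questionnaire answers; the component answers and standard names are illustrative, not our actual data.

```python
def rate_standard(answers):
    """Apply the scorecard rules to yes/no answers for one minimum standard.

    All yes -> addressed all tasks; at least one yes -> addressed at least
    one task; no yes answers -> did not address tasks.
    """
    if all(answers):
        return "addressed all tasks"
    if any(answers):
        return "addressed at least one task"
    return "did not address tasks"

# Hypothetical yes/no answers, one list per minimum standard's tasks.
component_answers = {
    "Designation of Senior Official(s)": [True] * 6 + [False],  # 6 of 7
    "Employee Training and Awareness":   [True, True, True],
    "Monitoring User Activity on Networks": [False] * 4,
}

for standard, answers in component_answers.items():
    print(f"{standard}: {rate_standard(answers)}")
```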
Additionally, to identify 25 key elements for a framework applicable to insider-threat programs, we analyzed Executive Order 13587 (E.O. 13587), the national insider-threat policy and minimum standards, DOD guidance and reports, Committee on National Security Systems guidance, a set of leading practices that the National Insider Threat Task Force recommends, practices that other federal agencies and private industry use, and a list of essential elements that a group of private-sector and U.S. government analysts created. We then organized this information into a framework of 25 key elements. (For a list of the resources we consulted by key element, see appendix III.) We based these elements upon the principles that we identified, but note that this framework is not necessarily a comprehensive list, since other principles that could benefit insider-threat programs may exist that did not surface in our inquiry. We discussed the framework with DOD and private-sector officials and incorporated comments and changes as appropriate. We also met with officials from the Department of Homeland Security and Department of Justice and obtained information about their insider-threat programs because E.O. 13587 assigns them roles for insider threats. While not generalizable, the information we obtained provided insight about the implementation of insider-threat programs at federal agencies other than DOD. To evaluate the extent to which DOD and others have assessed DOD’s insider-threat program to protect classified information and systems, we obtained copies of DOD’s quarterly self-assessments from December 2013 through February 2015, in which DOD reported its progress in complying with minimum standards. We compared the current DOD assessment efforts to those described in E.O. 13587 and national policy, and we interviewed officials from DOD and its components about their self-assessment process and results. We did not independently verify the accuracy of the self-assessments since it was beyond the scope of this review. We met with U.S. Cyber Command and obtained information about Command Cyber Readiness Inspections, including a list of organizations inspected and the overall results related to insider threat. We did not independently verify the accuracy of this information since it was beyond the scope of this review. We also met with officials from the National Insider Threat Task Force and National Security Agency who are involved in conducting independent assessments, confirmed that they have assessed some DOD components, and obtained and reviewed copies of the assessments. To determine the extent to which DOD conducted the continuing analysis of gaps in its insider-threat program required by the National Defense Authorization Act for Fiscal Year 2012, we obtained and reviewed DOD’s 2013 report to Congress in which it described its plan for conducting a continuing analysis, and interviewed officials about the current status of the analysis. To determine the extent to which DOD incorporated risk assessments in its insider-threat program, we reviewed DOD, Committee on National Security Systems, and National Insider Threat Task Force guidance related to the assessment of an agency’s risk posture. We interviewed OUSD (Intelligence) and DOD Chief Information Officer (DOD CIO) officials about the extent to which DOD conducted risk assessments related to insider-threat programs, and asked components about the extent to which they conducted risk assessments that would inform insider-threat programs. To evaluate the extent to which DOD has identified any technical or policy changes to protect its classified information and systems from insider threats in the future, we focused on initiatives to be implemented beginning in 2015 and those initiatives not included in DOD’s existing insider-threat guidance. We did not include initiatives that are being assessed in depth by a related GAO engagement. We collected information about initiatives through the questionnaire we developed for the six selected components, and through interviews with component officials.
The questionnaire and interviews were used to identify any future technical and policy changes to address threats to component information and information systems. We also asked component officials about their process for prioritizing and planning for initiatives. We then interviewed officials from OUSD (Intelligence) and DOD CIO about how the department is collecting information about these initiatives and using the information to inform resource recommendations and program improvements. We compared these responses to DOD guidance on responsibilities for insider-threat programs and the defense security enterprise, federal standards for internal control, and Office of the Director of National Intelligence guidance. We did not evaluate the initiatives themselves or assess each initiative’s relative priority or efficacy. We obtained relevant data and documentation and interviewed officials from components within the Department of Defense, Department of Homeland Security, Department of Justice, and the Office of the Director of National Intelligence’s National Insider Threat Task Force. We also met with representatives from the Carnegie Mellon University Software Engineering Institute – CERT Insider Threat Center and Lockheed Martin. We conducted this performance audit from May 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

DESIGNATION OF SENIOR OFFICIAL(S): Each agency head shall designate a senior official or officials, who shall be principally responsible for establishing a process to gather, integrate, centrally analyze, and respond to Counterintelligence (CI), Security, Information Assurance (IA), Human Resources (HR), Law Enforcement (LE), and other relevant information indicative of a potential insider threat. Senior Official(s) shall:
1. Provide management and oversight of the insider threat program and provide resource recommendations to the agency head.
2. Develop and promulgate a comprehensive agency insider threat policy to be approved by the agency head within 180 days of the effective date of the National Insider Threat Policy. Agency policies shall include internal guidelines and procedures for the implementation of the standards contained herein.
3. Submit to the agency head an implementation plan for establishing an insider threat program and annually thereafter a report regarding progress and/or status within that agency. At a minimum, the annual reports shall document annual accomplishments, resources allocated, insider threat risks to the agency, recommendations and goals for program improvement, and major impediments or challenges.
4. Ensure the agency’s insider threat program is developed and implemented in consultation with that agency’s Office of General Counsel and civil liberties and privacy officials so that all insider threat program activities, to include training, are conducted in accordance with applicable laws, whistleblower protections, and civil liberties and privacy policies.
5. Establish oversight mechanisms or procedures to ensure proper handling and use of records and data described below, and ensure that access to such records and data is restricted to insider threat personnel who require the information to perform their authorized functions.
6. Ensure the establishment of guidelines and procedures for the retention of records and documents necessary to complete assessments required by Executive Order 13587.
7. Facilitate oversight reviews by cleared officials designated by the agency head to ensure compliance with insider threat policy guidelines, as well as applicable legal, privacy and civil liberty protections.

1. Build and maintain an insider threat analytic and response capability to manually and/or electronically gather, integrate, review, assess, and respond to information derived from CI, Security, IA, HR, LE, the monitoring of user activity, and other sources as necessary and appropriate.
2. Establish procedures for insider threat response action(s), such as inquiries, to clarify or resolve insider threat matters while ensuring that such response action(s) are centrally managed by the insider threat program within the agency or one of its subordinate entities.
3. Develop guidelines and procedures for documenting each insider threat matter reported and response action(s) taken, and ensure the timely resolution of each matter.

INSIDER THREAT PROGRAM PERSONNEL: Agency heads shall ensure personnel assigned to the insider threat program are fully trained in:
1. Counterintelligence and security fundamentals to include applicable legal issues;
2. Agency procedures for conducting insider threat response action(s);
3. Applicable laws and regulations regarding the gathering, integration, retention, safeguarding, and use of records and data, including the consequences of misuse of such information;
4. Applicable civil liberties and privacy laws, regulations, and policies; and
5. Investigative referral requirements of Section 811 of the Intelligence Authorization Act for FY 1995, as well as other policy or statutory requirements that require referrals to an internal entity, such as a security office or Office of Inspector General, or external investigative entities such as the Federal Bureau of Investigation, the Department of Justice, or military investigative services.

ACCESS TO INFORMATION:
1. Direct CI, Security, IA, HR, and other relevant organizational components to securely provide insider threat program personnel regular, timely, and, if possible, electronic access to the information necessary to identify, analyze, and resolve insider threat matters. Such access and information includes, but is not limited to, the following:
a. Counterintelligence and Security. All relevant databases and files to include, but not limited to, personnel security files, polygraph examination reports, facility access records, security violation files, travel records, foreign contact reports, and financial disclosure filings.
b. Information Assurance. All relevant unclassified and classified network information generated by IA elements to include, but not limited to, personnel usernames and aliases, levels of network access, audit data, unauthorized use of removable media, print logs, and other data needed for clarification or resolution of an insider threat concern.
c. Human Resources.
All relevant HR databases and files to include, but not limited to, personnel files, payroll and voucher files, outside work and activities requests, disciplinary files, and personal contact records, as may be necessary for resolving or clarifying insider threat matters.
2. Establish procedures for access requests by the insider threat program involving particularly sensitive or protected information, such as information held by special access, law enforcement, inspector general, or other investigative sources or programs, which may require that access be obtained upon request of the Senior Official(s).
3. Establish reporting guidelines for CI, Security, IA, HR, and other relevant organizational components to refer relevant insider threat information directly to the insider threat program.
4. Ensure insider threat programs have timely access, as otherwise permitted, to available United States Government intelligence and counterintelligence reporting information and analytic products pertaining to adversarial threats.

MONITORING USER ACTIVITY ON NETWORKS: Agency heads shall ensure insider threat programs include:
1. Either internally or via agreement with external agencies, the technical capability, subject to appropriate approvals, to monitor user activity on all classified networks in order to detect activity indicative of insider threat behavior. When necessary, Service Level Agreements (SLAs) shall be executed with all other agencies that operate or provide classified network connectivity or systems. SLAs shall outline the capabilities the provider will employ to identify suspicious user behavior and how that information shall be reported to the subscriber’s insider threat personnel.
2. Policies and procedures for properly protecting, interpreting, storing, and limiting access to user activity monitoring methods and results to authorized personnel.
3. Agreements signed by all cleared employees acknowledging that their activity on any agency classified or unclassified network, to include portable electronic devices, is subject to monitoring and could be used against them in a criminal, security, or administrative proceeding. Agreement language shall be approved by the Senior Official(s) in consultation with legal counsel.
4. Classified and unclassified network banners informing users that their activity on the network is being monitored for lawful United States Government-authorized purposes and can result in criminal or administrative actions against the user. Banner language shall be approved by the Senior Official(s) in consultation with legal counsel.

EMPLOYEE TRAINING AND AWARENESS: Agency heads shall ensure insider threat programs:
1. Provide insider threat awareness training, either in-person or computer-based, to all cleared employees within 30 days of initial employment, entry-on-duty (EOD), or following the granting of access to classified information, and annually thereafter. Training shall address current and potential threats in the work and personal environment, and shall include, at a minimum, the following topics:
a. The importance of detecting potential insider threats by cleared employees and reporting suspected activity to insider threat personnel or other designated officials;
b. Methodologies of adversaries to recruit trusted insiders and collect classified information;
c. Indicators of insider threat behavior and procedures to report such behavior; and
d. Counterintelligence and security reporting requirements, as applicable.
2. Verify that all cleared employees have completed the required insider threat awareness training contained in these standards.
3. Establish and promote an internal network site accessible to all cleared employees to provide insider threat reference material, including indicators of insider threat behavior, applicable reporting requirements and procedures, and provide a secure electronic means of reporting matters to the insider threat program.

Sources: GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: Nov. 1, 1999); Office of the Director of National Intelligence, Office of the National Counterintelligence Executive, Protecting Key Assets: A Corporate Counterintelligence Guide; DOD 5200.8-R; DOD Instruction 2000.16; DOD Instruction 6055.17; Joint Publication 3-07.2; and Secretary of Defense, Final Recommendations of the Fort Hood Follow-on Review, memorandum (Aug. 18, 2010).

In addition to the individuals named above, Tommy Baril, Assistant Director; Jeffrey Knott, Assistant Director; Tracy Barnes; Lon Chin; Grace Coleman; Nicole Collier; Kristi Dorsey; Ashley Houston; Amie Lesser; Richard Powelson; Terry Richardson; Monica Savoy; and Jennifer Spence made key contributions to this report.

Defense Department Cyber Efforts: Definitions, Focal Point, and Methodology Needed for DOD to Develop Full-Spectrum Cyberspace Budget Estimates. GAO-11-695R. Washington, D.C.: July 29, 2011.
Defense Department Cyber Efforts: DOD Faces Challenges in Its Cyber Activities. GAO-11-75. Washington, D.C.: July 25, 2011.
Defense Department Cyber Efforts: More Detailed Guidance Needed to Ensure Military Services Develop Appropriate Cyberspace Capabilities. GAO-11-421. Washington, D.C.: May 20, 2011.
Federal Facility Cybersecurity: DHS and GSA Should Address Cyber Risk to Building and Access Control Systems. GAO-15-6. Washington, D.C.: December 12, 2014.
Information Security: Agencies Need to Improve Oversight of Contractor Controls. GAO-14-612. Washington, D.C.: August 8, 2014.
Information Security: Agencies Need to Improve Cyber Incident Response Practices. GAO-14-354. Washington, D.C.: April 30, 2014.
Critical Infrastructure Protection: More Comprehensive Planning Would Enhance the Cybersecurity of Public Safety Entities’ Emerging Technology. GAO-14-125. Washington, D.C.: January 28, 2014.
Information Security: Agency Responses to Breaches of Personally Identifiable Information Need to Be More Consistent. GAO-14-34. Washington, D.C.: December 9, 2013.
Federal Information Security: Mixed Progress in Implementing Program Components; Improved Metrics Needed to Measure Effectiveness. GAO-13-776. Washington, D.C.: September 26, 2013.
Cybersecurity: National Strategy, Roles, and Responsibilities Need to Be Better Defined and More Effectively Implemented. GAO-13-187. Washington, D.C.: February 14, 2013.
Information Security: Better Implementation of Controls for Mobile Devices Should Be Encouraged. GAO-12-757. Washington, D.C.: September 18, 2012.
Cybersecurity: Challenges in Securing the Electricity Grid. GAO-12-926T. Washington, D.C.: July 17, 2012.
Information Security: Cyber Threats Facilitate Ability to Commit Economic Espionage. GAO-12-876T. Washington, D.C.: June 28, 2012.
Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012.
Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011.
Personnel Security Clearances: Additional Guidance and Oversight Needed at DHS and DOD to Ensure Consistent Application of Revocation Process. GAO-14-640. Washington, D.C.: September 8, 2014.
Personnel Security Clearances: Opportunities Exist to Improve Quality Throughout the Process. GAO-14-186T. Washington, D.C.: November 13, 2013.
Personnel Security Clearances: Further Actions Needed to Improve the Process and Realize Efficiencies. GAO-13-728T. Washington, D.C.: June 20, 2013.
Security Clearances: Agencies Need Clearly Defined Policy for Determining Civilian Position Requirements. GAO-12-800. Washington, D.C.: July 12, 2012.
| Since 2010, the United States has suffered grave damage to national security and an increased risk to the lives of U.S. personnel due to unauthorized disclosures of classified information by individuals with authorized access to defense information systems. Congress and the President have issued requirements for structural reforms and a new program to address insider threats. A 2014 House Committee on Armed Services report included a provision that GAO assess DOD’s efforts to protect its information and systems. This report evaluates the extent to which (1) DOD has implemented an insider-threat program that incorporates minimum standards and key elements, (2) DOD and others have assessed DOD’s insider-threat program, and (3) DOD has identified any technical and policy changes needed to protect against future insider threats. GAO reviewed studies, guidance, and other documents, and interviewed officials regarding actions that DOD and a nonprobability sample of six DOD components have taken to address insider threats. The Department of Defense (DOD) components GAO selected for review have begun implementing insider-threat programs that incorporate the six minimum standards called for in Executive Order 13587 to protect classified information and systems. For example, the components have begun to provide insider-threat awareness training to all personnel with security clearances. In addition, the components have incorporated some of the actions associated with a framework of key elements that GAO developed from a White House report, an executive order, DOD guidance and reports, national security systems guidance, and leading practices recommended by the National Insider Threat Task Force. However, the components have not consistently incorporated all recommended key elements. For example, three of the six components have developed a baseline of normal activity—a key element that could mitigate insider threats. DOD components have not consistently incorporated these key elements because DOD has not issued guidance that identifies recommended actions beyond the minimum standards that components should take to enhance their insider-threat programs. Such guidance would assist DOD and its components in developing and strengthening insider-threat programs and better position the department to safeguard classified information and systems. DOD and others, such as the National Insider Threat Task Force, have assessed the department’s insider-threat program, but DOD has not analyzed gaps or incorporated risk assessments into the program. DOD officials believe that current assessments meet the intent of the statute that requires DOD to implement a continuing gap analysis. However, DOD has not evaluated and documented the extent to which the current assessments describe existing insider-threat program capabilities, as is required by the law.
Without such a documented evaluation, the department will not know whether its capabilities to address insider threats are adequate and address statutory requirements. Further, national-level security guidance states that agencies, including DOD, should assess risk posture as part of insider-threat programs. GAO found that DOD components had not incorporated risk assessments because DOD had not provided guidance on how to incorporate risk assessments into components' programs. Until DOD issues guidance on incorporating risk assessments, DOD components may not conduct such assessments and thus not be able to determine whether security measures are adequate. DOD components have identified technical and policy changes to help protect classified information and systems from insider threats in the future, but DOD is not consistently collecting this information to support management and oversight responsibilities. According to Office of the Under Secretary of Defense for Intelligence officials, they do not consistently collect this information because DOD has not identified a program office that is focused on overseeing the insider-threat program. Without an identified program office dedicated to oversight of insider-threat programs, DOD may not be able to ensure the collection of all needed information and could face challenges in establishing goals and in recommending resources and improvements to address insider threats. This is an unclassified version of a classified report GAO issued in April 2015. GAO recommends that DOD issue guidance to incorporate key elements into insider-threat programs, evaluate the extent to which programs address capability gaps, issue risk-assessment guidance, and identify a program office to manage and oversee insider-threat programs. DOD agreed or partially agreed with all of the recommendations, and described actions it plans to take. However, DOD's actions may not fully address the issues as discussed in the report. |
This work was done in conjunction with a separate review of the Port Security Grant Program; see GAO-12-47. Preparedness grants were previously administered by the DHS Office of Grants and Training. However, since its creation in April 2007, FEMA's GPD has been responsible for the program management of DHS's preparedness grants. GPD consolidated the grant business operations, systems, training, policy, and oversight of all FEMA grants and the program management of preparedness grants into a single entity. GPD works closely with other DHS entities to manage several grants, including the USCG for the PSGP and TSA for the TSGP. From fiscal years 2002 through 2011, DHS distributed approximately $20.3 billion through four grant programs: SHSP, UASI, PSGP, and TSGP. See table 1 for a breakdown of the funding for these programs. Federal grants, including SHSP, UASI, PSGP, and TSGP, generally follow the grant life cycle shown in figure 1: announcement, application, award, postaward, and closeout. A grant program may be established through legislation––which may specify particular objectives, eligibility, and other requirements––and a program may also be further defined by the grantor agency. For competitive grant programs, the public is notified of the grant opportunity through an announcement, and potential grantees must submit applications for agency review. In the application and award stages, the agency identifies successful applicants or legislatively defined grant recipients and awards funding to them. The postaward stage includes payment processing, agency monitoring, and grantee reporting, which may include financial and performance information. The closeout phase includes preparation of final reports and any required accounting for property. Audits may occur multiple times during the life cycle of the grant and after closeout. SHSP, UASI, PSGP, and TSGP are specific grant programs nested under a larger framework of national preparedness. The broader initiatives described below, some of which are in development, are intended to help determine preparedness goals and the capabilities necessary to achieve these goals. Grant programs such as the four we reviewed can then help facilitate specific investments to close identified capability gaps. The purpose and status of the larger preparedness framework affect SHSP, UASI, PSGP, and TSGP in a number of ways, including the development of grant performance metrics to assess the effectiveness of the programs. In December 2003, the President issued Homeland Security Presidential Directive-8 (HSPD-8), which called on the Secretary of Homeland Security to coordinate federal preparedness activities and coordinate support for the preparedness of state and local first responders, and directed DHS to establish measurable readiness priorities and targets. In October 2006, the Post-Katrina Emergency Management Reform Act was enacted, which requires FEMA to develop specific, flexible, and measurable guidelines to define risk-based target preparedness capabilities and to establish preparedness priorities that reflect an appropriate balance between the relative risks and resources associated with all hazards. In September 2007, DHS published the National Preparedness Guidelines.
The purposes of the guidelines are to: organize and synchronize national—including federal, state, local, tribal, and territorial—efforts to strengthen national preparedness; guide national investments in national preparedness; incorporate lessons learned from past disasters into national preparedness priorities; facilitate a capability-based and risk-based investment planning process; and establish readiness metrics to measure progress and a system for assessing the nation's overall preparedness capability to respond to major events, especially those involving acts of terrorism. Each of the grant programs in our review has specific strategies that are aligned with the overall federal national preparedness guidelines, as the following examples illustrate. State and Urban Area Homeland Security Strategies (all four grants): These strategies are designed to (1) provide a blueprint for comprehensive, enterprise-wide planning for homeland security efforts; and (2) provide a strategic plan for the use of related federal, state, local, and private resources within the state and/or urban area before, during, and after threatened or actual domestic terrorist attacks, major disasters, and other emergencies. State and urban area homeland security strategies are required by FEMA for receiving SHSP and UASI funding. Port-Wide Risk Mitigation Plan (PSGP): The primary goal of these plans is to provide a port area with a mechanism for considering its entire port system strategically as a whole, and to identify and execute a series of actions designed to effectively mitigate risks to the system's maritime critical infrastructure. FEMA requires a Port-Wide Risk Mitigation Plan for receiving PSGP funding for the high-risk ports, known as Groups I and II, as discussed in table 2. Regional Transit Security Strategy (TSGP): These strategies serve as the basis on which funding is allocated to address regional transit security priorities, and are the vehicles through which transit agencies may justify and access other funding and available resources. TSA requires a Regional Transit Security Strategy for receiving TSGP funding. On March 30, 2011, the President issued Presidential Policy Directive-8 (PPD-8), which directs the development of a national preparedness goal and the identification of the core capabilities necessary for preparedness. PPD-8 replaces HSPD-8. FEMA officials noted that the National Preparedness System affirms the all-hazards, risk-based approach to national preparedness. FEMA officials further noted that PPD-8 looks to build on the efforts already in place, including those that supported the Post-Katrina Emergency Management Reform Act and the 2009 National Infrastructure Protection Plan. PPD-8 has specific deadlines for deliverables: 180 days for the National Preparedness Goal, 240 days for a description of the National Preparedness System, and 1 year for a National Preparedness Report. The four grant programs in our review—SHSP, UASI, PSGP, and TSGP—have overlapping goals, project types, and funding jurisdictions, which increases the risk of duplication among the programs. Although the specifics of the four programs vary, they share the overarching goal of enhancing the capacity of state and local emergency responders to prevent, respond to, and recover from a terrorism incident involving chemical, biological, radiological, nuclear, or other explosive devices, or cyber attacks. More specifically, each program funds similar projects such as training, planning, equipment, and exercises.
For example, the four programs have overlapping lists of allowable costs, so certain types of equipment, such as communication radios, may be purchased through each grant program. Further, although the programs target different constituencies, such as states and counties, urban areas, and port or transit stakeholders, there is overlap across recipients. For example, each state and eligible territory receives a legislatively mandated minimum amount of SHSP funding to help ensure that all areas develop a basic level of preparedness, while UASI explicitly targets urban areas most at risk of terrorist attack. However, many jurisdictions within designated UASI areas also apply for and receive SHSP funding. Similarly, a port stakeholder in an urban area could receive funding for patrol boats through both PSGP and UASI funding streams, and a transit agency could purchase surveillance equipment with TSGP or UASI dollars. More broadly, any designated high-risk urban area located near major waterways can receive funding through SHSP, UASI, PSGP, and TSGP sources. In March 2011, we reported that overlap among government programs or activities can be a harbinger of unnecessary duplication (GAO-11-318SP). Further, we commented on FEMA's full suite of 17 fiscal year 2010 preparedness programs, including the four programs in this review, and noted that FEMA needed to improve oversight and coordination of its grant awards. It is therefore important to ensure that these four grant programs, which distributed over $20 billion in funding to grant recipients from fiscal years 2002 through 2011, are allocating resources effectively. Table 2 below describes the basic purposes, the types of projects funded, and the eligible applicants of the SHSP, UASI, PSGP, and TSGP programs. Urban Areas Security Initiative (UASI): UASI provides federal assistance to address the unique needs of high-threat, high-density urban areas, and assists them in building an enhanced and sustainable capacity to prevent, protect against, respond to, and recover from acts of terrorism. Port Security Grant Program (PSGP): PSGP provides federal assistance to strengthen the security of the nation's ports against risks associated with potential terrorist attacks by supporting increased port-wide risk management, enhanced domain awareness, training and exercises, and expanded port recovery capabilities. Transit Security Grant Program (TSGP): TSGP provides funds to owners and operators of transit systems (which include intracity bus, commuter bus, ferries, and all forms of passenger rail) to protect critical surface transportation infrastructure and the traveling public from acts of terrorism and to increase the resilience of transit infrastructure. Eligible applicants are: for SHSP, the SAAs of the 50 states, the District of Columbia, and the territories; for PSGP, port areas in Groups I and II (highest risk) and in Group III and "All Other Port Areas" (lower risk); and for TSGP, selected transit agencies and ferry systems within high-risk urban areas. SAAs may also allocate UASI funds to port and transit stakeholders. In fiscal year 2011, Tier I UASI areas included the 11 highest-risk urban areas and were allocated about 82 percent of the total UASI funding available; Tier II included the other 20 candidate areas and was allocated the remaining 18 percent. Tier I and II urban areas are determined using a DHS risk model that incorporates threat, vulnerability, and consequence; a simplified form of such a model is sketched below. A DHS risk model likewise determines the port areas at high risk of a terrorist attack, and DHS places them in either Group I (highest risk group), Group II (next highest risk group), or Group III.
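As a point of reference only, homeland security risk analyses are often described in a simplified multiplicative form combining the three factors named above. The formulation below is an illustrative assumption for exposition, not necessarily the exact model DHS used to set these tiers and groups.

% Illustrative, simplified risk formulation (an assumption, not DHS's published model):
\[ R = T \times V \times C \]
% where T = threat (relative likelihood that an attack is attempted),
%       V = vulnerability (likelihood that an attempted attack succeeds), and
%       C = consequence (expected loss if the attack succeeds).

Because the factors multiply, a low score on any one factor sharply reduces modeled risk, so a small number of areas scoring high on all three factors dominate the rankings, consistent with the concentration of UASI funding in the highest-risk tier described above.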
In fiscal year 2011, there were 7 port areas in Group I and 48 port areas in Group II. Port areas in Group I are considered to be the highest-risk port areas in the nation. Ports not identified in Group I, II, or III are eligible to apply for funding as part of the All Other Port Areas group. For additional information on the PSGP and port area groups, see GAO-12-47. FEMA's ability to track which projects receive funding among the four grant programs varies because the project-level information FEMA has available to make award decisions—including grant funding amounts, grant recipients, and grant funding purposes—also varies by program. This is due to differences in the grant programs' administrative processes. For example, in some cases, FEMA relies on stakeholders to review and recommend projects for grant funding—adding layers to the review process. Delegating administrative duties to stakeholders reduces FEMA's administrative burden, but also contributes to FEMA having less visibility over some grant applications, specifically those funded via SHSP and UASI. A combination of federal statutes and DHS policy determines specific grant allocation mechanisms and the federal partners involved in grants administration. Figure 2 below describes the federal agencies involved, the path of the grant funds to the final recipient, and the application and award process for each grant, as of fiscal year 2011. As depicted in figure 2, grant funding follows a different path to final recipients depending on the program's administrative process. For example, grant awards made under SHSP and UASI go through three steps before the money reaches the final grant recipient. First, DHS awards SHSP and UASI funds through FEMA to a designated SAA—typically a state homeland security or emergency management office. The SAA then passes funds to subrecipients, such as county or city governments or designated urban areas. These subrecipients/local governments may then further distribute SHSP and UASI funds to other entities, including individual law enforcement agencies. It is these other entities that will ultimately spend the grant funds to implement security projects. Because state governments are required by the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act) to have a role in the application process and distribution of SHSP and UASI funding, and because of the thousands of individual projects that comprise these programs, FEMA relies on the SAAs to administer the awards to smaller entities. In delegating significant grants administration duties to the SAA for the larger SHSP and UASI programs, FEMA officials recognized the trade-off: decreased visibility over grant funding, subrecipients, and specific project-level data in exchange for a reduced administrative burden. For these two programs, the SAA, as the official grant recipient, assumes responsibility for holding subrecipient entities accountable for their use of funding, including ensuring that recipients use grant funds to pay costs that are allowable (e.g., reasonable and necessary for proper performance of the award). States' capacities to effectively administer and coordinate their grants vary considerably. Among other requirements, grant funds may only be used for allowable costs.
Allowable costs are those that, among other things, are reasonable and necessary for proper and efficient performance and administration of federal awards; a cost is reasonable if, in its nature and amount, it does not exceed that which would be incurred by a prudent person under the circumstances prevailing at the time the decision was made to incur the cost. See 2 C.F.R. pt. 225. In this report, potential "overlap" and "duplication" generally refer to two or more SHSP, UASI, PSGP, or TSGP projects that address the same preparedness need and could be redundant or unnecessary if not coordinated. In contrast, FEMA receives far fewer applications for TSGP and PSGP funds and awards grant funding more directly to the final grant recipients, with one and two steps, respectively, rather than three steps. As a result, FEMA has a greater ability to track grant funding, specific funding recipients, and funding purposes for these two smaller grant programs. Beginning in fiscal year 2009, appropriations acts required FEMA to award TSGP funds directly to transit authorities instead of through SAAs. Per FEMA policy, the agency distributes PSGP funds to local FAs, who then distribute grants to local entities within the port area, but FEMA is directly involved in this process. Due to the legal requirements and departmental policies that establish a more direct award process for PSGP and TSGP, along with the smaller scope of those programs, FEMA has more information and is better able to track these grants through to the end user of the grant monies. Differences in administrative processes among the four grant programs also affect the extent to which federal, state, and local entities share responsibility for prioritizing and selecting the individual preparedness projects that will ultimately receive funding. Due to its greater involvement with PSGP and TSGP project selection at the local level, DHS generally has more information on specific PSGP and TSGP projects than on SHSP and UASI projects. For example, DHS components—USCG and TSA—are involved with the PSGP and TSGP selection processes, which provides DHS with additional information about the use of grant funds. For instance, TSGP projects from fiscal years 2007 through 2010 were selected by regional working groups. The regional groups based their project selection on Regional Transit Security Strategies that each transit region had developed. For this grant program, TSA had better information about the funding as well as influence over the project selection because TSA set the parameters for and approved the transit security strategies, and final project selection was based on TSA approval. The regional working groups choose projects for the highest-risk Tier 1 regions; projects in the remaining transit regions—Tier 2 regions—are fully competitive. Similarly, the USCG, and in particular the Captain of the Port, exerts influence over the PSGP project selection process, given the agency's maritime security expertise and role in the PSGP award process. PSGP project applications also undergo a second national review facilitated by FEMA that includes the USCG, the Department of Transportation's Maritime Administration, and other stakeholders. Along with federal stakeholders, numerous local stakeholders are involved with the PSGP selection process and in many locations are required to base their grant award decisions largely on FEMA-required port security mitigation strategies.
These strategies also require FEMA approval before PSGP grants can be awarded to port areas. Thus, for these projects, FEMA is more involved and has greater information on which to base award decisions. In contrast, local officials select SHSP and UASI projects with less federal involvement, although the projects must comport with various program rules, such as those related to allowable activities or costs, and address any funding priorities stipulated in the grant guidance. For SHSP, FEMA awards funds to states for certain broad purposes, such as interoperable communications, but federal law and DHS policies allow states to distribute these funds to individual projects or jurisdictions using different mechanisms, given different local conditions and needs. One state may choose to use a consensus-based approach to disburse the funds to counties, for example, while another may distribute funding equally to all of its jurisdictions. For example, in Washington State, SHSP grant applications are reviewed by four distinct entities––the state's homeland security committee, the all-hazards statewide emergency council, the state's domestic security executive group, and the governor's office––prior to the state making risk-informed allocation decisions. In contrast, one regional government council in Texas allocated SHSP funds equally to all eligible jurisdictions within its region regardless of their risk level. For UASI grants, FEMA requires each region to create its own urban area working group (UAWG), but does not participate in these groups. The UAWGs convene to select individual projects for UASI funding based on the FEMA-identified grant priorities for that grant year that are also consistent with the area's grant application and state and urban area strategic plans. For example, in 2009 the New York City UAWG identified protecting critical infrastructure and key resources as one of eight goals in its homeland security strategic plan, received UASI funding for this purpose, and selected and allocated funds to specific projects in the urban area related to this goal. FEMA approves all applications and strategic plans, which gives the agency a broad idea of what grant applicants intend to accomplish at the state and local level. However, selection of specific projects occurs through local-level working groups. As a result of the differing levels of DHS involvement in project selection for each of the grant programs, DHS generally has more project information for specific PSGP and TSGP projects than for SHSP and UASI projects. When making preparedness grant awards, FEMA bases its decisions on less specific project-level information for the SHSP and UASI programs than for PSGP and TSGP, which puts the agency at greater risk of funding unnecessarily duplicative projects across all programs. In our prior work on overlap and duplication, we identified challenges agencies face in collecting and analyzing the information needed to determine whether unnecessary duplication is occurring. For example, we identified 44 federal employment and training programs that overlap with at least one other program in that they provide at least one similar service to a similar population. However, our review of three of the largest of these 44 programs showed that the extent to which individuals actually receive the same services from these programs is unknown due to program data limitations. We found similar data limitations in this review, as FEMA bases its awards for SHSP, UASI, PSGP, and TSGP in part upon investment justifications (IJs), which contain limited information.
For the SHSP and UASI programs, states and eligible urban areas submit IJs for each program with up to 15 distinct investment descriptions that contain general proposals to address capability gaps in wide-ranging areas such as interoperable communications or critical infrastructure protection. Each IJ may encompass multiple specific projects for different jurisdictions or entities, but project-level information, such as a detailed list of subrecipients or equipment costs, is not required by FEMA. According to FEMA, data system limitations, the high volume of individual SHSP and UASI projects, and the desire to give states and urban areas increased flexibility to add or modify specific projects after the award period contributed to less detailed IJs. In contrast, FEMA makes PSGP and TSGP award decisions based on federal reviews of IJs that contain information about specific projects, providing FEMA officials with more detailed knowledge of what is being requested and what is being funded by these programs. Furthermore, before awards are made, FEMA directs PSGP and TSGP applicants to submit detailed budget summaries, but does not call for such information from SHSP and UASI applicants. The 9/11 Commission Act establishes minimum application requirements for SHSP and UASI, such as a description of how funds will be allocated, but the act does not call for specific project data. For example, with SHSP, the statute requires states to include in their grant applications the purpose for the grant funds, a description of how they plan to allocate funds to local governments and Indian tribes, and a budget showing how they intend to expend the funds. FEMA officials stated that the SHSP and UASI IJ format meets these statutory requirements, albeit at "a high summary level." To improve the level of information that FEMA has available for making grant award decisions, FEMA is considering collecting more detailed information on proposed grant projects. In May 2011, a FEMA report based on the work of a Reporting Requirements Working Group recommended collecting additional project information at the application stage. Specifically, the FEMA report recommended that the agency modify the IJ format for SHSP and UASI applications to include a detailed project list. This project list would contain information that is currently collected through the Biannual Strategy Implementation Report (BSIR) later in the grant cycle, after FEMA makes grant awards. If this recommendation is implemented, the policy of collecting additional information at the application stage could be initiated in the fiscal year 2013 grant cycle, according to FEMA. Although collecting this additional information may be useful to FEMA, we determined that the level of information contained in the BSIR alone would not provide sufficient project information to identify and prevent potentially unnecessary duplication within or across grant programs. To make this determination, we reviewed the type of information that FEMA would have available at the application stage if it implemented the report recommendation. Specifically, we reviewed IJ and BSIR information for the 1,957 grant projects awarded through the four grant programs to five urban areas––Houston, Jersey City/Newark, New York City, San Francisco, and Seattle––for fiscal years 2008 through 2010.
Our analysis determined that 140 of the projects––about $183 million, or 9.2 percent of the overall funding associated with these projects––lacked sufficient detail to determine whether the projects were unnecessarily duplicative or had involved coordination during the state's planning or selection processes to prevent any unnecessary duplication. Table 3 further illustrates the challenge that FEMA would face in identifying potential duplication using the BSIR data for SHSP and UASI as recommended by the report. For example, table 3 contains SHSP, UASI, and PSGP project information from a single jurisdiction in one of the five urban areas we reviewed and shows the level of detail that FEMA would have available to compare projects. The overlap in the descriptions of the project types and titles suggests that duplication could be occurring among three of the four grant programs, and warranted further analysis. After identifying the projects that appeared to be potentially duplicative, we contacted the SAA and FA for this state, and officials provided us with extended narratives, coordination details, and subrecipient lists. It was not until we reviewed this additional, more detailed information that we could ascertain that these four projects were not duplicative, but rather were part of a larger, coordinated public safety interoperability and video initiative taking place in the region. Table 4 below contains a second example of project data associated with BSIR and IJ information from a single jurisdiction in one of the five urban areas we reviewed. Again, we identified the potential for duplication because of the similarities in funded projects for both the SHSP and TSGP. Both of the projects identified below are related to the purchase of chemical, biological, radiological, nuclear, and explosives (CBRNE) detection equipment. However, upon examining additional state-provided information and the TSGP IJ, we had sufficient information to determine that these projects were distinct and involved separate equipment. Still, as with the previous example in table 3, FEMA would not be able to make these determinations using only BSIR data. Based on our analysis using BSIR and IJ project data, we were able to ascertain that over 90 percent of the projects we reviewed had sufficient detail to determine that the projects (1) were substantively different and not likely duplicative, or (2) involved coordination to prevent any unnecessary duplication. Furthermore, our subsequent analysis using additional information from state and local grant recipients indicated that none of these projects were duplicative. Nonetheless, we believe that more detailed project information could be of value to FEMA in its grant review process since, as demonstrated above, the information currently being considered does not always allow for the necessary differentiation between projects funded by the four grant programs. Moreover, FEMA––through its own internal analysis––and the OIG have both separately concluded in recent years that FEMA should use more specific project-level data in making grant award decisions, especially for SHSP and UASI, in order to identify and mitigate potential duplication. Specifically, in a March 2010 report, the OIG noted that the level of detail in IJs and in other grant program applications was not sufficient for FEMA to identify duplication and redundancy.
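The screening problem just described––comparing project titles and types across programs within the same jurisdiction to flag candidates for closer review––can be made concrete with a short sketch. The code below is a hypothetical illustration only: it is not FEMA's system or GAO's actual methodology, and the field names, sample records, and similarity threshold are all assumptions introduced for the example.

from difflib import SequenceMatcher
from itertools import combinations

# Minimal project records at roughly BSIR-level detail (hypothetical data).
projects = [
    {"program": "SHSP", "jurisdiction": "Metro County",
     "title": "Interoperable communications equipment"},
    {"program": "UASI", "jurisdiction": "Metro County",
     "title": "Interoperable communication radios"},
    {"program": "PSGP", "jurisdiction": "Metro County",
     "title": "Patrol boat acquisition"},
    {"program": "TSGP", "jurisdiction": "Harbor City",
     "title": "Transit surveillance cameras"},
]

def title_similarity(a, b):
    """Return a 0-to-1 ratio of how alike two project titles are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_candidates(records, threshold=0.6):
    """Yield cross-program pairs in the same jurisdiction whose titles
    exceed the similarity threshold; these are candidates for manual
    review, not confirmed duplicates."""
    for p, q in combinations(records, 2):
        if (p["jurisdiction"] == q["jurisdiction"]
                and p["program"] != q["program"]
                and title_similarity(p["title"], q["title"]) >= threshold):
            yield p, q

for p, q in flag_candidates(projects):
    print(f'{p["jurisdiction"]}: {p["program"]} "{p["title"]}" vs. '
          f'{q["program"]} "{q["title"]}"')

As the examples in tables 3 and 4 show, however, title-level similarity cannot distinguish a coordinated regional initiative from an unnecessary duplicate; a screen like this could only queue candidate pairs for the kind of detailed narrative and subrecipient review described above.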
In its written comments to the OIG, the DHS Office of Infrastructure Protection concurred with this assessment, noting that a SHSP IJ "was little more than a checklist of previous funding with a brief strategy narrative." Further, Standards for Internal Control in the Federal Government state that program managers need operational and financial data to determine whether they are meeting their goals for accountability for effective and efficient use of resources. FEMA has acknowledged the agency's difficulties in effectively using grants data and is taking steps to improve its data collection and utilization by filling key grants management personnel vacancies and by implementing a new data management system. As part of this effort, FEMA introduced a new non-disaster grants management system (ND Grants) for the fiscal year 2011 grant cycle, and the system is scheduled for completion by fiscal year 2014. This system will replace the 13 legacy grant data systems and other processes that FEMA inherited from agencies that previously administered homeland security preparedness grants, such as the Department of Justice. Agency officials stated that this system, once completed, will help FEMA to manage all of its preparedness grants, and has an explicit goal of enhancing project-level data collection. In addition, the ND Grants system is anticipated to consolidate data from multiple systems and facilitate greater utilization and sharing of information. However, according to FEMA documentation, FEMA has not yet determined all of its specific data needs for ND Grants. As FEMA continues to develop the ND Grants system, it will be important that it collect the level of data needed to compare projects across grant programs to limit the risk of funding duplicative projects. We believe that the recommendation of the FEMA report to make better use of more specific project-level data through the BSIR for the SHSP and UASI programs is a step in the right direction, although our analysis demonstrated that BSIR data alone do not include the detail needed to identify potential duplication. The Director of GPD's Preparedness Grants Division reported in September 2011 that the report recommendations were still under consideration and thus FEMA had not yet determined the specifics of future data requirements. Nevertheless, the agency's goal to improve data collection by collecting project-level information through its ND Grants system is a worthwhile action. This effort could provide the level of detail that FEMA needs to identify possible unnecessary duplication within and across all four grant programs. We recognize that collecting more detailed project information through ND Grants could involve additional costs. However, collecting information with this level of detail could help FEMA better position itself to assess applications and ensure that it is using its resources effectively. FEMA, as well as state and local stakeholders, have taken steps to improve coordination in selecting and administering the four grant programs, but additional FEMA action could help reduce the risk of duplication among these programs. Federal efforts to improve coordination range from improving visibility across grants to gathering additional information about grant spending. The Director of GPD's Preparedness Grants Division discussed multiple projects that FEMA had initiated to potentially improve coordination in the grants management area.
He told us that, at the federal level, there is an effort within FEMA to increase planning and training exercises in order to increase its ability to track which projects are being funded by which grants. He added that this FEMA-led initiative is currently assessing public information on grants to reduce the risk of duplication. FEMA has a variety of reporting tools and guidelines that FEMA personnel have recently been working with to improve coordination and linkages between programs. For example, FEMA has started using Threat and Hazard Identification and Risk Assessments (THIRA) as a way to increase FEMA's ability to link spending at the local and federal levels. The Director of GPD's Preparedness Grants Division said that the guidance for reporting this linkage at the local level is still being discussed, with FEMA's National Preparedness Directorate (NPD) taking the lead, as the linkage is currently only required at the state level. Officials in four of the five states we visited had taken steps to improve coordination across grant programs. State steps to improve coordination range from tracking equipment purchases to enhancing administrative tools. For example, in Texas, jurisdictions must register all deployable equipment purchased through a homeland security grant and costing more than $5,000 on a statewide registry known as the Texas Regional Response Network. The purpose of the network is to raise awareness about the assets that neighboring jurisdictions might have available for use by another jurisdiction during an emergency. According to a Texas official familiar with the initiative, the registry was established with the recognition that sharing deployable equipment would be cost-effective, since it would be difficult for every jurisdiction to maintain every piece of equipment that might be needed in an emergency. In New Jersey, the SAA's office developed a Grants Tracking System, a web-enabled application to capture and track each subgrantee's state-approved projects funded under the Homeland Security Grant Program, which includes SHSP and UASI. The Grants Tracking System is the state's primary oversight mechanism to monitor the progress of each county, city, and state agency toward completing or procuring their budgeted projects or equipment. The system permits the SAA to review every program that receives funding, which allows for increased coordination across grants and efficiencies in procurement and helps alleviate the risk of funding duplicative grants. The system was included as a best practice in the OIG's 2011 audit of New Jersey's grant programs. Officials in all five localities we visited commented that they rely on informal structures to coordinate or identify potential unnecessary duplication––such as having the USCG Captain of the Port involved in a UAWG committee. Additionally, officials from three locations we visited noted having tried to set up more formal coordination structures. For example, the UAWG in one Texas locality set up a peer-to-peer network with other UASI regions around the state to exchange information. A county official from a UAWG in Washington State reported that they have set up monthly small-group meetings with officials from surrounding counties who deal with SHSP and UASI in an effort to exchange information and improve coordination. While FEMA, states, and local governments have taken steps to improve coordination, our review of FEMA's internal coordination showed that the agency lacks a process to coordinate reviews across the four grant programs.
GPD has divided the administration of the grant programs into two separate branches: UASI and SHSP are administered by the Homeland Security Grant Program branch, while PSGP and TSGP are administered by the Transportation Infrastructure Security branch. The result of this structure is that grant applications are reviewed separately by program but are not compared with one another across programs to determine where possible unnecessary duplication may occur. As we noted earlier, each grant program we reviewed has similar goals and allowable costs and funds recipients in close geographic proximity. These four programs also share applicants, as state and local entities seek to maximize grant dollars for their projects. However, since the review process for grant applications falls within each separate branch and grant program––and since there is no process in place to ensure that grant information is exchanged in the review process––FEMA cannot identify whether grant monies are being used for unnecessarily duplicative purposes. Similarly, in 2010, the OIG noted that FEMA does not have an overarching policy to coordinate grant programs and outline roles and responsibilities for coordinating applications across grant programs. Standards for Internal Control in the Federal Government call for agencies to have the information necessary to achieve their objectives and determine whether they are meeting their agencies' strategic goals. FEMA's strategic goals for fiscal years 2009 through 2011 included teaming with internal and external stakeholders to build partnerships and increase communication, and streamlining, standardizing, and documenting key processes to promote collaboration and consistency across regions and programs. Because the four grant programs are being reviewed by two separate divisions, yet have similar allowable costs, coordinating the review of grant projects internally could give FEMA more complete information about grant applications across the four different programs. This is necessary to identify overlap and mitigate the risk of duplication across grant applications. One of FEMA's section chiefs noted that the primary reasons for the current lack of coordination across programs are the sheer volume of grant applications that need to be reviewed and FEMA's lack of resources to coordinate the grant review process. She added that FEMA reminds grantees not to duplicate grant projects; however, due to the volume and the number of activities associated with grant application reviews, FEMA lacks the capability to cross-check for unnecessary duplication. We recognize the challenges associated with reviewing a large volume of grant applications, but to help reduce the risk of funding duplicative projects, FEMA could benefit from exploring opportunities to enhance its coordination of project reviews while also taking into account the large volume of grant applications it must process. DHS implemented some performance measures for SHSP and UASI in the fiscal year 2011 grant guidance, but has not yet implemented comparable measures for PSGP and TSGP. Moreover, the types of measures DHS published in the SHSP and UASI guidance do not contribute to DHS's ability to assess the effectiveness of these grant programs, but instead provide DHS with information to help it measure completion of tasks or activities.
DHS has efforts underway to develop additional measures to help it assess grant program effectiveness; however, until these measures are implemented, it will be difficult for DHS to determine the effectiveness of grant-funded projects, which totaled $20.3 billion from fiscal years 2002 through 2011. As a part of its risk management framework, the National Infrastructure Protection Plan calls for agencies to measure progress in security improvements against sector goals using both output measures, which track the progression of tasks associated with a program or activity, and outcome measures, which help an agency evaluate the extent to which a program achieves sector goals and objectives—that is, its effectiveness. The measures that DHS implemented for SHSP and UASI through the fiscal year 2011 guidance are output measures. For example, some of the output measures implemented for SHSP and UASI include: (1) the percentage of fusion center analysts who require secret clearances and have them (or have submitted requests for them); (2) the percentage of SHSP- and UASI-funded personnel who are engaged in the Nationwide Suspicious Activity Reporting Initiative and have completed the training; and (3) the approval of a State Hazard Mitigation Plan that includes a THIRA that has been coordinated with the UASI area(s) located in the state. Implementing output measures for the SHSP and UASI grant programs provides value and is a step in the right direction because the measures allow FEMA to track grant-funded activities. However, outcome measures would be more useful to FEMA in determining the effectiveness of these grant programs. As of February 2012, DHS had not implemented outcome measures for any of the four grant programs in our review. Our previous work has underscored how the absence of outcome measures has negatively affected DHS's ability to assess the achievement of desired program outcomes that further homeland security preparedness goals (GAO, DHS Improved its Risk-Based Grant Programs' Allocation and Management Methods, But Measuring Programs' Impact on National Capabilities Remains a Challenge, GAO-08-488T (Washington, D.C.: Mar. 11, 2008)). Performance management principles also call for agencies to track progress toward strategic goals and objectives by measuring results or outcomes; aligning outcome measures to goals and objectives is the key to performance management. As shown in table 5 below, FEMA had efforts under way in 2010 and 2011 to develop outcome measures for the four grant programs in our review. Initiative description: The Redundancy Elimination and Enhanced Performance for Preparedness Grants Act directed the Administrator of FEMA to enter into a contract with the National Academy of Public Administration (NAPA) to assist the Administrator in studying, developing, and implementing performance measures to assess the effectiveness of SHSP and UASI, among other things. Expected result: Three to seven proposed measures and an implementation roadmap. Status: NAPA began work on this project in January 2011, with performance measure implementation scheduled for December 2011. In October 2011, NAPA provided FEMA with a copy of the final report, according to FEMA officials. As of December 2011, FEMA officials stated that the results of the NAPA study were under review within FEMA and no measures had been implemented. Initiative description: In January 2010, GPD formed a task force to develop measures to assess the effectiveness of PSGP and TSGP; in December 2010, this effort was transferred to NPD. Expected result: Development of program-specific performance measures for PSGP and TSGP. Status:
As of December 2011, the Director of the National Preparedness Assessment Division (NPAD) within NPD told us that NPD had developed draft performance measures for the PSGP and TSGP and that those measures were undergoing review within FEMA. As a result, the official told us that it was unclear whether FEMA would include these measures in its fiscal year 2012 grant guidance. For more information about FEMA's efforts to measure the effectiveness of the PSGP, see GAO-12-47. On February 17, 2012, FEMA released the fiscal year 2012 Funding Opportunity Announcement for the PSGP and TSGP; however, this guidance did not contain performance measures. FEMA has taken steps to develop outcome-based measures through these initiatives; however, as of February 2012, FEMA had not completed its efforts. According to FEMA officials, DHS leadership has identified performance measurement as a high-priority issue and is developing a more quantitative approach for using grant expenditure data to monitor program effectiveness. Further, senior FEMA officials have noted challenges to measuring preparedness. For example, they have noted that SHSP and UASI fund a wide range of different preparedness activities, which makes it difficult to devise applicable measures. Thus, if measures are too broad they are meaningless, and if too narrow they may not adequately capture the effectiveness of a range of activities. Senior FEMA officials noted another challenge in that grant program goals are purposefully broad to accommodate a broad constituency. For example, SHSP is administered in all states. However, the security conditions and preparedness needs of a state such as North Dakota are very different from those of New York, yet the grant goals, guidance, and measures would be the same for both locales. FEMA provided us with its Performance Measure Implementation Plan, an internal plan that FEMA uses for developing measures for all preparedness grants; however, this plan provides insufficient detail to guide these efforts. This plan identifies the output measures that were included in the fiscal year 2011 guidance for SHSP and UASI. Further, the plan notes that NPAD has developed new performance measures that seek to better capture the outcomes and overall effectiveness of preparedness grants, rather than the outputs captured by current measures; however, it does not specify what outcome measures were developed. Instead, the implementation plan provides a general approach to performance measurement as well as a list of key milestones to implement the new performance measures and refine existing measures. In addition, the implementation plan notes that it is NPAD's goal to develop one or two measures per grant program that are both output and outcome based. However, the associated activities and milestones listed in the plan do not reference specific grant programs or project details. As a result, it is unclear what grants, or what measures, are being addressed for each milestone. According to FEMA's current implementation plan, all performance measures should have been implemented in December 2011; however, FEMA officials reported in December 2011 that outcome measures for the four programs had not yet been implemented.
According to the Project Management Institute, best practices for project management call for a variety of inputs and outputs when developing a project schedule, including the basis for date estimates, a breakdown of the work to be conducted for each program, resource capabilities and availability, and external and internal dependencies. FEMA's implementation plan does not contain this level of detail, and as a result it remains unclear what measures will be implemented for each grant program and when this implementation will occur. Establishing performance measures for these four programs is important given their relatively large size and scope. We recognize the difficulties inherent in developing outcome-based performance measures to assess the effectiveness of these grant programs. However, DHS should continue to work toward the development of these measures to improve its ability to assess the effectiveness of these grant programs. Until DHS does so, it will be difficult for it to determine the extent to which its investment through these programs––$20.3 billion from 2002 through 2011––is effectively enhancing homeland security. A revised implementation plan that includes more specific project schedule information and accurate timelines for implementation could help guide efforts and keep the development of these measures on track for successful and timely implementation. Apart from developing performance measures for each grant program, DHS also has several initiatives under way to measure the collective effectiveness of its grant programs in achieving shared program goals, as shown in table 6 below. As shown above, FEMA's efforts to measure the collective effectiveness of its grant programs are recent and ongoing, and thus it is too soon to evaluate the extent to which these initiatives will provide FEMA with the information it needs to determine whether these grant programs are effectively improving the nation's security. While each grant program strives to identify and mitigate security concerns within its specific authority, improving the nation's overall preparedness is dependent upon collectively addressing capability and security gaps across all programs and authorities. Thus, it is important to evaluate effectiveness across the four grant programs to determine the extent to which the security of the nation as a whole has improved and to better ensure the effective use of scarce resources. From fiscal years 2002 through 2011, DHS distributed about $20.3 billion through four homeland security preparedness grant programs that specifically target state, urban, port, and transit security. We recognize that even when programs overlap, they may have meaningful differences in their eligibility criteria or objectives, or they may provide similar types of services in different ways. However, because the four DHS programs in our review have similar goals, fund similar types of projects, and are awarded in many of the same urban areas, it will be important for FEMA to take additional measures to help ensure that the risk of duplication is mitigated. FEMA has delegated significant administrative duties to the SAA for the larger SHSP and UASI programs, and FEMA officials recognize the trade-off between decreased visibility over these grants and the reduced administrative burden on FEMA. However, the limited project-level information on how funds are being used and the lack of coordinated reviews of grant applications across programs increase the risk that FEMA could fund duplicative projects.
Additional action could help mitigate this risk. For example, as FEMA develops the ND Grants system, it will be important for the agency to ensure that information collected for all grant programs provides enough detail to allow for project comparisons in order to identify any unnecessary duplication. In addition, while some steps have been taken at the federal, state, and local levels to improve coordination in administering the four grant programs, additional actions could also help reduce the risk of duplication. For example, without a process to coordinate reviews across the four grant programs, FEMA lacks the information necessary to identify whether grant monies are being used for duplicative purposes, especially since all four grant programs are being reviewed separately, yet have similar allowable costs. Thus, to reduce the risk of duplication, FEMA could benefit from exploring opportunities to enhance its coordination of project reviews across grant programs. Additionally, since DHS’s existing output-based performance measures for the SHSP and UASI programs do not provide DHS with the information it needs to assess grant effectiveness and FEMA has not yet implemented outcome-based performance measures for any of the four programs, it will be difficult for FEMA to fully assess the effectiveness of these grant programs. Because the project plan FEMA has in place to guide its efforts to develop measures does not provide adequate information to determine what measures will be implemented for each grant program and when this implementation will occur, FEMA does not have reasonable assurance that these measures will be implemented in a timely way to help assess the programs’ effectiveness. We are making three recommendations for the four grant programs. Two actions are recommended to help reduce the risk of duplication by strengthening DHS’s administration and oversight of these programs, and one action is recommended to better assess the effectiveness of these programs. To better identify and reduce the risk of duplication through improved data collection and coordination, we recommend that the FEMA Administrator: take steps, when developing ND Grants and responding to the May 2011 FEMA report recommendations on data requirements, to ensure that FEMA collects project information with the level of detail needed to better position the agency to identify any potential unnecessary duplication within and across the four grant programs, weighing any additional costs of collecting these data; and explore opportunities to enhance FEMA’s internal coordination and administration of the programs in order to identify and mitigate the potential for any unnecessary duplication. To better assess the effectiveness of these programs, we recommend that the FEMA Administrator: revise the agency’s Performance Measure Implementation Plan to include more specific project schedule information and accurate timelines in order to guide the timely completion of ongoing efforts to develop and implement outcome-based performance measures for the SHSP, UASI, PSGP, and TSGP grant programs. We provided a draft of this report to DHS for comment. We received written comments on the draft report, which are reprinted in appendix II. DHS concurred with all three recommendations, and requested that the first two recommendations be considered resolved and closed. While we believe that DHS’s planned actions, if implemented, address the intent of each recommendation, it is too soon to close any recommendation as implemented. 
Specifically: DHS agreed with the recommendation that FEMA take steps to ensure that it collects sufficient project information to better identify any potential unnecessary duplication, and asked that, based on actions currently under way and other proposed changes, the recommendation be closed. DHS cited the elimination of seven programs in fiscal year 2012 and the proposed restructuring of most programs under a single National Preparedness Grant Program in fiscal year 2013 as steps to eliminate unnecessary duplication. DHS also cited modifying one reporting requirement in fiscal year 2012 to better capture program-specific performance measures. While we agree that program restructuring and the cited reporting requirement change could offer FEMA the opportunity to improve its grants data and thus its visibility across programs and projects, it is too soon to assess any positive impact, especially given that the outcome of the proposed fiscal year 2013 program restructuring is uncertain and is reliant on future congressional action. Furthermore, consolidating programs alone will not guarantee that the level of project-level detail collected by FEMA will be sufficient to identify unnecessary duplication of similar efforts in the same geographic areas. We will review the status of these efforts and additional supporting evidence in the future before closing this recommendation. DHS agreed with the recommendation that FEMA explore opportunities to enhance internal coordination and administration of the programs to identify and mitigate the potential for any unnecessary duplication, and asked that, based on ongoing actions and plans, the recommendation be closed. For example, DHS stated that FEMA officials participate in an Intra-agency Grants Task Force to provide strategic links among FEMA grant programs, as well as a DHS-level task force to improve grants management across the department. DHS also stated that FEMA has formal memoranda of understanding with partner agencies/offices related to various grants administration roles and responsibilities, and continues to develop additional formal agreements. We view these as positive steps in coordinating grants administration within DHS and FEMA. However, it is not clear at this time that the various groups or formal agreements have specifically addressed preventing potential unnecessary duplication across programs or projects, or that this is a goal of the initiatives. We will review the status of these efforts and additional supporting evidence in the future before closing this recommendation. DHS agreed with the recommendation to revise the agency’s Performance Measure Implementation Plan and stated that new performance measures and a plan for data collection are in draft form. DHS also stated it will provide an update to the plan when decisions are finalized, and that these decisions will be informed by the outcome of the agency’s proposed changes to the fiscal year 2013 grant programs. DHS also provided technical comments which we incorporated into the report where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions concerning this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. Key contributors to this report are listed in appendix III.

Appendix I: FEMA Grants Portfolio
SHSP, UASI, Metropolitan Medical Response System, Operation Stonegarden, and Citizen Corps Program collectively make up what FEMA terms the Homeland Security Grant Program. These five interconnected programs shared the same grant guidance in fiscal year 2011, but each program had a separate funding allocation.

In addition to the contacts above, Dawn Hoff, Assistant Director, and Dan Klabunde, Analyst-in-Charge, managed this assignment. Chuck Bausell, Juli Digate, David Lutter, Sophia Payind, and Katy Trenholme made significant contributions to this report. David Alexander assisted with design, methodology, and data analysis. Linda Miller and Jessica Orr provided assistance with report development, Muriel Brown and Robert Robinson provided graphics support, and Tracey King provided legal assistance. | From fiscal years 2002 through 2011, the Department of Homeland Security's (DHS) Federal Emergency Management Agency (FEMA) distributed approximately $20.3 billion to four grant programs: the State Homeland Security Program, Urban Areas Security Initiative, Port Security Grant Program, and Transit Security Grant Program. These programs are intended to enhance the capacity of state and local first responders to prevent, respond to, and recover from a terrorism incident. GAO was asked to evaluate the extent to which: (1) overlap and other factors among these programs could impact the risk of duplication; (2) mechanisms exist that enhance coordination and reduce the risk of duplication and how they are being implemented; and (3) DHS has implemented performance measures to evaluate the effectiveness of these programs. To address these objectives, GAO reviewed grant guidance and funding allocation methodologies. GAO also interviewed DHS officials and grant administrators in five urban areas (selected because they receive funding from all four grant programs in this review) about grant processes and program challenges, among other things. Multiple factors contribute to the risk of duplication among the four FEMA grant programs that GAO studied: the State Homeland Security Program (SHSP), Urban Areas Security Initiative (UASI), Port Security Grant Program, and Transit Security Grant Program. Specifically, these programs share similar goals, fund similar projects, and provide funds in the same geographic regions. Further, DHS's ability to track grant funding, specific funding recipients, and funding purposes varies among the programs, giving FEMA less visibility over some grant programs. Finally, DHS's award process for some programs bases decisions on high-level, rather than specific, project information. Although GAO's analysis identified no cases of duplication among a sample of grant projects, the above factors collectively put FEMA at risk of funding duplicative projects. FEMA officials stated that there is a trade-off between enhancing management visibility and reducing administrative burden, but they also recognized that FEMA should use more specific project-level information for award decisions and have taken initial steps toward this goal.
For example, FEMA is considering how to better use existing grant information and has also begun to phase in a grants management system that includes an explicit goal of collecting project-level information. However, FEMA has not determined all of its specific data requirements. As FEMA determines these requirements, it will be important to collect the level of information needed to compare projects across grant programs. Given the limitations in currently collected information, FEMA would benefit from collecting more detailed information, which could help it better position itself to assess applications and ensure that it is using its resources effectively. FEMA, as well as state and local stakeholders, has taken steps to improve coordination in administering the four programs, but FEMA could take further action. For example, FEMA does not internally coordinate application reviews across the four programs. Specifically, the programs are managed by two separate FEMA divisions, which review grant applications for each program separately, and there is no process in place to ensure that application information is shared among the programs during this process. Thus, it is difficult for FEMA to identify whether grant monies are being used for the same or similar purposes. FEMA could benefit from further examining its internal grant coordination process, while considering the large volume of grant applications it must process. FEMA introduced some performance measures for the UASI and SHSP programs in 2011 that add value, but these measures do not assess program effectiveness. FEMA has efforts under way to develop outcome measures, which will focus on program effectiveness, for each of the four grant programs in this review, but it has not completed these efforts. Further, the FEMA project plan that guides these efforts does not provide information on what measures will be implemented for each grant program and when this will occur. A revised project plan that includes more specific schedule information and accurate implementation timelines could help guide these efforts. DHS also has several efforts under way to measure the collective effectiveness of its grant programs in achieving shared program goals, but these efforts are recent and ongoing. Thus, it is too soon to evaluate the extent to which these initiatives will provide FEMA with the information it needs to determine whether these grant programs are effectively improving the nation's security. GAO recommends that DHS: (1) collect project information with the level of detail needed to identify any unnecessary duplication; (2) explore opportunities for enhanced internal coordination in grant administration; and (3) revise its plan to ensure the timely implementation of performance measures to assess the effectiveness of these grants. DHS concurred with all recommendations. |
DOD has been unable to prepare auditable information for department-wide financial statements as required by the Government Management Reform Act of 1994. The National Defense Authorization Act (NDAA) for Fiscal Year 2010 requires that DOD develop and maintain the Financial Improvement and Audit Readiness (FIAR) Plan, which includes, among other things, the specific actions to be taken and costs associated with (1) correcting the financial management deficiencies that impair DOD's ability to prepare timely, reliable, and complete financial management information and (2) ensuring that DOD's financial statements are validated as ready for audit by September 30, 2017. The NDAA for Fiscal Year 2013 required that the FIAR Plan state the specific actions to be taken and the costs associated with validating the audit readiness of DOD's Statement of Budgetary Resources (SBR) no later than September 30, 2014. However, DOD acknowledged in its November 2014 FIAR Plan Status Report that it did not meet this date and, in response to difficulties in preparing for an SBR audit, reduced the scope of initial audits to focus only on current year budget activity to be reported on a Schedule of Budgetary Activity beginning in fiscal year 2015. This is an interim step toward achieving the audit of multiple-year budget activity required for an audit of the SBR. We have previously reported our concerns regarding DOD's emphasis on asserting audit readiness by a certain date rather than ensuring that effective processes, systems, and controls are in place to improve financial management information for day-to-day decision making. Further, in its May 2015 FIAR Plan Status Report, DOD acknowledged that even though military departments have asserted Schedule of Budgetary Activity audit readiness, they are continuing to strengthen controls and take other steps to improve their readiness. The report also indicated that although DOD does not expect to receive unmodified ("clean") opinions during the initial years, it determined that it is important to proceed with the audits as a means of uncovering any other remaining challenges. In August 2014, the Navy asserted audit readiness for its fiscal year 2015 Schedule of Budgetary Activity. Military pay activity represents a significant portion of obligations and outlays reported on the Navy's Schedule of Budgetary Activity and SBR. In addition to military pay, the Navy's Schedule of Budgetary Activity and SBR will include financial activity and balances associated with other business processes, such as civilian pay, contract pay, and reimbursable work orders. The Navy's mission is to maintain, train, and equip combat-ready naval forces capable of winning wars, deterring aggression, and maintaining freedom of the seas. In February 2015, the Navy reported that it had about 326,000 active and 58,000 reserve servicemembers. For fiscal year 2015, Congress appropriated approximately $29 billion for Navy military pay for active and reserve servicemembers and other personnel-related costs. Appropriations for Navy and Navy Reserve personnel are 1-year appropriations available for pay, benefits, incentives, allowances, housing, subsistence, and travel primarily for Navy servicemembers. DOD's FIAR Guidance sets forth the goals, priorities, strategy, and methodology for the Navy (as well as other DOD reporting entities and service providers) to become audit ready.
Based on the FIAR Guidance, the Navy has established separate assessable units for military pay and other business processes representing significant portions of budgetary resources and financial activity (e.g., obligations and outlays) reported in its SBR to help focus efforts to achieve audit readiness. Key stakeholders within the Navy with military pay audit readiness responsibilities include the Office of Financial Operations and the Bureau of Naval Personnel. In March 2013, the Navy asserted audit readiness of its military pay assessable unit based on its determination that sufficient evidence existed and certain controls were operating effectively to meet specified financial reporting objectives. According to its assertion, the Navy placed additional reliance on testing of supporting documentation to meet financial reporting objectives in instances where controls were not operating effectively. In addition, based on limited tests performed prior to its assertion, the Navy identified extensive control deficiencies associated with certain key systems. For purposes of asserting audit readiness, the Navy determined that risks associated with these deficiencies were mitigated based on the results of substantive tests, reconciliations, and other tests performed in connection with its assertion efforts. Accordingly, the Navy excluded assessment of these systems from the scope of the examination performed by an independent public accountant (IPA). The IPA performed a validation examination of the Navy's assertion and reported in January 2015 that, in its opinion, the Navy's assertion was fairly stated. As specified in its contract with the Navy, the IPA was required to perform an audit readiness validation examination to determine (1) whether adequate supporting documentation exists to address all relevant financial statement assertions for all material transactions and account balances reflected on the Navy's schedule of military pay for April 2013 and (2) whether business processes and internal control activities are designed and operating effectively to limit the risk of material misstatement of the financial statements by meeting applicable financial reporting objectives. With regard to the Navy's internal controls, the IPA assessed 34 controls supporting the Navy's assertion and determined that, although certain controls for each financial reporting objective were effective, 14 controls associated with these objectives were either not designed effectively or not operating effectively. As a result, the IPA relied primarily on substantive tests to assess whether amounts reported on the Navy's schedule of military pay activity for April 2013 were adequately supported by sufficient evidence. The Navy's schedule of military pay activity for April 2013 reflected obligations of $2.25 billion and outlays of $2.19 billion associated with activities included in the scope of the Navy's assertion; these obligations and outlays were recorded in the Navy's general ledger and included activity related to basic pay and entitlements for allowances for officers, enlisted personnel, and midshipmen, as well as certain other military personnel costs. The schedule of military pay activity included obligations associated with servicemembers' gross pay and related payroll taxes and associated outlays disbursed in April 2013. Obligations for reserve servicemembers' pay and related payroll taxes are recorded in the month payroll is processed, while outlays are recorded in the month they are paid.
In contrast, obligations and outlays for active servicemembers' pay are recorded in different months based on when each payroll is processed and when related outlays occur, as shown in figure 1. The Navy also relies on service providers to ensure the audit readiness of service provider systems and business processes that support services provided to the Navy and affect its Schedule of Budgetary Activity and SBR. For example, the Defense Finance and Accounting Service (DFAS) office in Cleveland, Ohio, is responsible for computing the Navy's military payroll using the Defense Joint Military Pay System-Active Component (DJMS-AC) and the Defense Joint Military Pay System-Reserve Component (DJMS-RC) for active and reserve servicemembers, respectively. In addition, DFAS provides significant financial reporting, disbursement, and other services to the Navy in support of the Navy's efforts to meet essential financial management responsibilities. The Navy also relies on a variety of personnel, accounting, disbursing, and budgeting systems to process and report its military payroll, as shown in figure 2. In addition to military pay, the Navy also recorded $225 million of obligations and related outlays in April 2013 for other activities associated with its military personnel appropriations, including items such as certain uniform allowances, subsistence-in-kind, permanent change of station travel, and personnel-related reimbursable work orders. The scope of the Navy's assertion excluded these activities as well as certain other processes involved in processing or reporting military pay financial activity or balances, including fund balance with the U.S. Department of the Treasury (Treasury); funds receipt and distribution; and certain financial reporting-related journal vouchers, adjustments, and beginning balances. According to Navy officials, readiness efforts associated with these areas were either included in the scope of other assessable units or will be assessed as part of the Navy's fiscal year 2015 Schedule of Budgetary Activity audit. Based on documentation provided by the Navy and the results of the IPA's audit procedures, the IPA concluded that information reported on the Navy's schedule of military pay activity for April 2013 (April 2013 schedule) reconciled to a complete and valid population of transactions. The IPA identified a total projected error of $6.8 million, which it determined to be immaterial. We reviewed selected IPA audit documentation and selected documentation provided by the Navy, and nothing came to our attention that raised concerns beyond those identified by the IPA regarding the adequacy of the Navy's documentation supporting the completeness and validity of the population of transactions reflected on its April 2013 schedule. The IPA concluded that the Navy provided sufficient documentation to support that amounts reported in the April 2013 schedule reconciled to a complete population of military pay transactions. The GAO/President's Council on Integrity and Efficiency Financial Audit Manual (FAM) and the FIAR Guidance recognize the importance of comparing and reconciling data produced by various systems and processes with reported amounts to provide assurance on the completeness of populations of transactions supporting them. The IPA's determination was based on Navy-provided documentation supporting the Navy's reconciliation of personnel and payroll data as well as documentation supporting other reconciliations independently performed by the IPA.
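At its core, the personnel and payroll reconciliation summarized below is a two-way matching exercise between two systems' records. The following is a minimal sketch of that matching step in Python; the function name, data structures, and values are hypothetical illustrations, not actual Navy system data or procedures.

    # Minimal sketch of a personnel-to-payroll matching step, assuming each
    # system can export the Social Security numbers (SSNs) of the
    # servicemembers it covers. All names and values here are hypothetical.
    def reconcile_ssns(personnel_ssns, payroll_ssns):
        """Return SSNs appearing in one system but not the other.

        Each difference would be researched, documented, and resolved to
        support the assertion that payroll was disbursed to valid personnel.
        """
        personnel = set(personnel_ssns)
        payroll = set(payroll_ssns)
        return {
            "paid_but_not_in_personnel": payroll - personnel,  # possible invalid payees
            "in_personnel_but_not_paid": personnel - payroll,  # possible missed pay
        }

    # Illustrative run with dummy SSNs.
    differences = reconcile_ssns(
        personnel_ssns={"111-11-1111", "222-22-2222", "333-33-3333"},
        payroll_ssns={"111-11-1111", "222-22-2222", "444-44-4444"},
    )
    for category, ssns in differences.items():
        print(category, sorted(ssns))

In practice, each unmatched record would be traced to supporting documentation rather than simply listed, but the set comparison above is the step that establishes which records require follow-up.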
Key reconciliations supporting the IPA's conclusion are summarized below:

Personnel and payroll reconciliation. The Navy compared Social Security numbers of servicemembers in the Navy personnel systems to Social Security numbers of servicemembers in the Navy payroll systems and identified, documented, and resolved differences between the systems to provide assurance that payroll amounts were disbursed to valid personnel.

Payroll and general ledger reconciliation. Although the Navy performs monthly reconciliations of payroll and general ledger activity, the IPA performed an independent reconciliation that compared obligations and outlays in the Navy payroll systems to the obligations and outlays recorded in the Navy general ledger system and accounted for any significant disparities between data from these systems and the obligation and outlay amounts reported in the April 2013 schedule to provide assurance that the reported amounts were complete and reasonably stated.

Disbursements reconciliation. The IPA independently compared disbursements from the Defense Cash Accountability System to outlays on the April 2013 schedule and to documentation supporting significant reconciling items to help ensure that reported disbursements were complete and reasonably stated. According to the IPA, the Navy performs similar reconciliations each month.

To assess the reliability of the documentation, the IPA performed walk-throughs of key processes, made inquiries of Navy personnel, and analyzed the documentation provided. In addition, the IPA considered the results of its tests of the Navy's controls for reconciling military pay activity each month, as discussed further below. Based on its evaluation of these reconciliations and documentation provided by the Navy, the IPA concluded that the Navy's population of military pay transactions was complete. We reviewed the results of the IPA procedures to assess these reconciliations; re-performed selected procedures, such as tracing selected amounts to supporting documentation and performing recalculations; and assessed the appropriateness of the reconciling items. Based on our review, nothing came to our attention that raised concerns regarding the adequacy of the Navy's documentation supporting the completeness of the population of transactions reflected on its April 2013 schedule. The IPA determined that the Navy provided sufficient documentation to support the accuracy and validity of selected basic pay and entitlement transactions the IPA tested. The IPA tested a statistical sample of 405 leave and earnings statements from the $1.8 billion of outlays associated with basic pay and entitlements reflected on the April 2013 schedule. The Navy provided more than 3,000 documents to support the sampled leave and earnings statements, such as orders, promotion messages, enlistment documents, and discharge documents. The IPA's evaluation of sample item tests resulted in a projected total error of $6.8 million, and based on its determination of tolerable misstatement and materiality, the IPA concluded that the estimate of errors was immaterial. In addition, the IPA performed analytical procedures to test $400 million of activity reflected in the Navy's April 2013 schedule, representing the Navy's portion of Federal Insurance Contributions Act (FICA) taxes and military retirement contributions for servicemembers.
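An analytical recomputation of this kind can be sketched as follows. The rates and tolerance shown are illustrative assumptions for the sketch only, not the actual mandated rates or thresholds the IPA applied; the employer FICA rates reflect general payroll-tax rates, and the retirement rate is purely hypothetical.

    # Minimal sketch of an analytical procedure: estimate payroll-tax and
    # retirement amounts by applying rates to basic pay, then compare the
    # estimate to the reported figure. Rates and tolerance are illustrative.
    def analytical_check(basic_pay, reported, rates, tolerance_pct=5.0):
        expected = basic_pay * sum(rates.values())
        diff_pct = abs(reported - expected) / expected * 100
        return expected, diff_pct, diff_pct <= tolerance_pct

    rates = {
        "fica_social_security": 0.0620,  # employer share of Social Security
        "fica_medicare": 0.0145,         # employer share of Medicare
        "retirement": 0.1500,            # assumed retirement contribution rate
    }
    expected, diff_pct, reasonable = analytical_check(
        basic_pay=1.8e9, reported=4.0e8, rates=rates)
    print(f"expected ${expected:,.0f}; difference {diff_pct:.1f}%; reasonable: {reasonable}")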
The IPA compared reported amounts to an estimate determined by multiplying basic pay included in the April 2013 schedule by applicable mandated rates and concluded that the reported amounts were reasonable. We reviewed the results of the IPA's tests of military pay transactions and re-performed selected procedures to assess the reasonableness of the IPA's conclusions. For example, we reviewed the IPA's sampling methodology and related error projection. We also re-performed tests performed by the IPA for a nongeneralizable random sample of 21 (5 percent) of its 405 leave and earnings statement sample items, such as recalculating payment amounts and reviewing documentation provided by the Navy to assess whether transactions were adequately supported. We also re-performed the IPA's analytical procedures related to the Navy's portion of FICA taxes and the military retirement contributions and reviewed the IPA's related conclusions. Based on our review, nothing came to our attention that raised concerns regarding the adequacy of the Navy's documentation supporting military pay transactions reflected in its April 2013 schedule beyond those identified by the IPA. While the IPA determined that the Navy provided sufficient documentation to support its April 2013 schedule, the IPA also assessed the effectiveness of 34 internal controls supporting the Navy's assertion and determined that 14 were either not designed effectively or not operating effectively. Effective internal controls and financial systems are essential for ensuring sound financial management and achieving sustainable financial statement auditability. However, deficiencies in military pay controls and selected systems identified through the Navy's assertion and validation efforts, including the IPA's examination, if not effectively addressed, present additional risks that could hamper the Navy's ability to ensure future auditability. Further, the audit readiness of certain activities beyond the scope of these efforts—such as financial reporting and other business processes—remains uncertain. We also found that the Navy's efforts to coordinate with key stakeholders and service providers responsible for performing important audit readiness tasks were not always effective. In connection with its audit readiness efforts, the Navy identified 34 control activities intended to reasonably assure the achievement of important financial reporting objectives, such as ensuring that military pay amounts do not exceed funding authority, obligations and disbursements are properly approved and recorded, and payroll is calculated and processed correctly. Based on its tests of these controls, the IPA determined that although certain controls associated with each of the Navy's military pay-related financial reporting objectives were effective, 14 controls associated with these objectives were either not designed effectively or not operating effectively. The Navy generally concurred with the IPA's conclusions. Examples of controls the IPA determined were operating effectively included the following:

the Navy monitors personnel to ensure that they are reporting for duty as assigned;

the Navy reconciles military pay obligations and outlays among its various systems each month;

the Navy records a monthly adjustment for pay earned in one month but disbursed in subsequent months; and

the Navy monitors military pay obligation amounts to verify that they do not exceed funding authority.
In contrast, examples of controls the IPA determined were not operating effectively included the following:

the Navy did not consistently perform supervisory review of data entry of personnel transactions for accuracy and timeliness before updating the pay system, increasing the risk of misstatements in payroll amounts for servicemembers; and

the Navy did not adequately perform the triannual review of dormant unliquidated obligations for timeliness, accuracy, and completeness, and as a result, over $5 million associated with these transactions was not deobligated in a timely manner.

According to the Navy, as of March 2015, its actions had addressed the IPA-identified deficiencies in the design of military pay controls, and it had completed four of six corrective action plans intended to address deficiencies in the operating effectiveness of controls. Navy documentation on the status of corrective actions at that time indicated that efforts to address remaining deficiencies were expected to be completed by April 2015. However, the effectiveness of actions taken has not been fully verified by the Navy or independently assessed by an IPA or the DOD Office of Inspector General (OIG). As a result, the extent to which these actions have resolved, or will resolve, underlying causes of the issues identified is unclear. According to Navy officials, the Navy is currently undergoing an IPA audit of its fiscal year 2015 Schedule of Budgetary Activity and expects the results of this audit to provide feedback on the effectiveness of the actions taken. The IPA also reported additional deficiencies in a separate management report resulting from its detailed tests of servicemembers' leave and earnings statements. Examples of these deficiencies included the following:

The Navy's document retention policies are inadequate in that certain personnel action documentation is required to be retained for only 1 year. As a result, the Navy was unable to provide adequate documentation supporting selected members' rank or other qualifications affecting pay.

A servicemember received duplicate pay for 2-1/2 months because Navy personnel processed a transaction to correct a previous, incorrectly processed transaction to extend his service end date, and the correcting transaction also erroneously created a system-generated debt owed by the servicemember to the Navy. Navy officials indicated that the correcting entry had to be processed in this manner because of a DJMS limitation.

A member with over 3 years of service was incorrectly paid at the lower rate associated with over 2 years of service because of a lack of payroll system controls to prevent such errors.

The Navy established nine corrective action plans to address these additional deficiencies, and Navy documentation on the status of corrective actions as of March 2015 indicated that steps for seven of these plans had been completed and that remaining actions were expected to be completed by June 2015. However, the Navy's ability to fully resolve these findings in the short term was not always clear. For example, although the Navy developed functional requirements for a new system to help address identified document retention issues, it had not yet determined the timeline associated with developing and implementing this system. Until these deficiencies are fully resolved, the Navy will continue to face risks that could affect its ability to achieve future financial statement auditability.
Effective information system controls are also essential for ensuring the integrity of information contained in and processed by these systems. For example, the Navy’s personnel systems contain rank information and other key data used to determine entitlements (e.g., basic pay and allowances) and process servicemembers’ pay. As part of its audit readiness assertion efforts, the Navy performed internal assessments of six key military payroll financial systems using the Federal Information System Controls Audit Manual and initially identified over 500 control deficiencies. As a result, the Navy excluded assessments of these systems from the scope of the IPA’s examination. These deficiencies involved multiple types of controls—such as access, interface, and business process controls—increasing the risk that financial activity may not be properly processed, recorded, or secured. For example, without effective access controls, unauthorized individuals can make undetected changes or deletions to data and authorized users can intentionally or unintentionally read, add, delete, or modify data or can execute changes outside their span of authority. Also, without effective interface controls, inaccurate or incomplete data may be shared among related systems, such as those used to process and record military payroll activity. Similarly, business process application controls are important for ensuring the completeness, accuracy, validity, and confidentiality of transaction data input, processing, and output. According to briefing slides from the Navy’s November 2014 information technology update presentation, challenges affecting the audit readiness of key systems include a lack of guidance across all systems; deficiencies in high-risk areas, such as interface control, access controls, and configuration management; and staff and resource limitations. According to this update, financial systems-related deficiencies ultimately degrade the integrity and auditability of those systems and may result in an increased effort on the part of the Navy to substantiate its activity and balances. Further, the Navy also acknowledged that ineffective information system controls could result in auditors needing to perform additional substantive tests and review more documentation to gain assurance on reported amounts rather than relying on effective controls. According to documentation on the status of corrective actions provided in February 2015, the Navy established 81 corrective action plans to address control deficiencies identified for the six military pay financial systems it assessed. Further, this documentation indicated that 23 corrective action plans had been completed. However, efforts to address remaining deficiencies for two of these systems are not expected to be completed until September 2015, and the Navy has not determined when certain access controls for one system will be resolved. Navy officials also stated that the Navy has not completed internal assessments to evaluate the effectiveness of actions taken thus far and that it plans to assess additional controls associated with these systems in the future. Until the Navy has assurance regarding the effectiveness of key financial system controls, it will continue to face risks that could affect its ability to ensure future financial statement auditability. According to its assertion, the Navy plans to prepare a schedule of military pay activity in support of its fiscal year 2015 Schedule of Budgetary Activity audit. 
However, because the Navy limited the scope of the IPA examination to focus on a 1-month schedule of activity, its ability to achieve auditability of a full year of activity, as required in future Schedules of Budgetary Activity and SBRs, was not fully assessed. Rather than extending the examination to a longer period of activity to more closely approximate future audits of Schedules of Budgetary Activity, Navy officials told us that they limited the scope to 1 month to provide the time necessary to address any deficiencies that might be identified by the IPA prior to the fiscal year 2015 Schedule of Budgetary Activity examination, which began in December 2014. However, according to a FIAR Directorate official, assertion examinations should generally cover at least a 3-month period, which would provide a greater level of audit readiness assurance than an examination focused on a 1-month period of activity. Further, because of the volume of transactions during a 12-month period of activity, obtaining supporting documentation may be more challenging than supporting transactions limited to a 1-month period. Accordingly, the Navy may be at risk concerning its ability to provide documentation supporting transactions included in the scope of future Schedule of Budgetary Activity and SBR audits that cover a full year of activity. In addition, preparing Schedules of Budgetary Activity or SBRs that contain military pay balances and activity—as well as other amounts associated with Military Personnel appropriations—involves other processes not included in the scope of the Navy's military pay assertion and related IPA examination. For example, preparing financial statements involves additional financial reporting processes, such as reconciling the Navy's Fund Balance with Treasury, performing funds receipt and distribution activities, and recording certain journal vouchers. However, the Navy's assurance regarding the audit readiness of these other key processes is limited. For example, although the Navy asserted the audit readiness of its Fund Balance with Treasury reconciliation process in April 2013, the DOD OIG reported that the Navy's process did not provide reasonable assurance of the accuracy, timeliness, and completeness needed to support the account's auditability. The DOD OIG identified issues affecting the Navy's audit readiness, including the lack of detailed support for amounts used in reconciling the Navy's Fund Balance with Treasury account with Treasury's accounts and significant information system control deficiencies. Additionally, the Navy has not asserted or undergone an examination of the audit readiness of its financial statement compilation activities. In December 2014, the Navy reported extensive deficiencies in controls associated with these activities. For example, key reconciliations were not always adequately performed or documented to identify and resolve differences between balances recorded in the Navy's general ledger system and other Navy financial systems and balances recorded in the Defense Departmental Reporting System (DDRS). Further, amounts in DDRS are adjusted to resolve out-of-balance amounts imported from the Navy's systems through the creation of journal vouchers without transaction-level documentation to support them.
Although the Navy indicated that corrective action plans have been established to address these deficiencies, it could not provide us with information on when efforts to resolve them were expected to be completed. In addition, according to its military pay reconciliations, the Navy incurred obligations of $225 million in April 2013 associated with its military personnel appropriations that were excluded from its military pay assessable unit. Approximately $203 million, or 90 percent, of these obligations were associated with three types of activity: permanent change of station travel ($104 million), supply-related requisitions ($52 million), and reimbursable work orders ($47 million). However, independent examinations to assess these activities either have not been performed or have identified significant concerns about the audit readiness of such activities. Specifically, Navy officials stated that permanent change of station travel was not addressed in other audit readiness assertions or IPA examinations and, as a result, will be tested during the Navy's fiscal year 2015 Schedule of Budgetary Activity audit. In addition, although the Navy asserted audit readiness of its supply requisition-related activity in December 2013, officials indicated that a separate independent examination to validate this assertion will not be performed. Further, in April 2013, the Navy asserted that its reimbursable work orders assessable unit was audit ready. However, an IPA reported in January 2015 that the Navy's schedules of reimbursable work order activity examined by the IPA were materially misstated because the Navy was unable to isolate reimbursable work order transactions from the rest of its financial transactions. Although the IPA completed its fieldwork in September 2014, its examination report was not issued until January 2015, in part because of the lack of effective coordination between the Navy and the IPA to ensure that adequate management representations were provided to the IPA in a timely manner. Specifically, the Navy did not provide a signed management representation letter consistent with auditing standards until more than 3 months after the IPA requested it. The FAM requires auditors to obtain representation letters from appropriate management officials. In addition, auditing standards list the specific representations that management must make, which include, for example, management's responsibility for providing to the auditor all financial records and related data and its responsibility for disclosing to the auditors any known or suspected fraud. Such representations are considered part of the evidence supporting the auditor's opinion and are to be provided prior to issuance of the examination report. Further, the DOD Financial Management Regulation (FMR) requires responsible senior managers to prepare and submit management representation letters to auditors prior to the conclusion of audits and requires that the date of the letter generally be the issuance date of the audit report. The DOD FMR further states that coordinating these dates is essential and that active cooperation and interaction between auditors and management is expected so that the management representation letter reaches the auditors in a timely manner. However, according to Navy officials, the Navy does not have a policy to ensure that these requirements are met. Rather, it relied on its standard process for the review and approval of official correspondence, which requires a minimum of 21 days to complete.
In addition, the lack of effective coordination concerning the date of this letter contributed significantly to the amount of time required to provide the IPA with acceptable management representations. Specifically, although the IPA provided a properly dated draft management representation letter (dated to coincide with the end of its fieldwork on September 26, 2014), the Navy incorrectly dated its letter November 19, 2014. After noting this discrepancy, the IPA requested that the Navy revise the date on the letter to September 26, 2014, as originally indicated on the draft letter provided to the Navy. As shown in figure 3, this resulted in another delay, and a total of over 3 months elapsed before the Navy provided an accurate, signed management representation letter to the IPA. Without a policy that addresses the need to effectively coordinate with auditors concerning the dates of management representation letters and when they need to be provided to the auditors, the Navy remains at risk that appropriate management representations required for future audits may not be provided in a timely manner. This in turn could delay the issuance of audit reports and hamper the Navy's ability to submit audited financial statements within required time frames. Processes to obtain and assess classified documentation supporting certain military pay transactions selected by the IPA for testing require coordination among the IPA, the Navy, and the DOD OIG. However, because of the lack of effective procedures for coordinating these efforts and other factors, the IPA was unable to conclude on the accuracy and validity of selected sample transactions involving classified supporting documentation totaling about $1,500. Specifically, the Navy did not effectively coordinate with the IPA to identify sample leave and earnings statements requiring classified documentation support within the time frame needed for the DOD OIG and the IPA to coordinate appropriate security procedures to access and assess the documentation in affected classified environments prior to the IPA's cutoff date for testing. Navy officials acknowledged that challenges affecting the Navy's ability to obtain and assess such documentation could be a significant roadblock in future audits. To address this issue, the Navy developed a corrective action plan to establish an effective process for coordinating audits of classified documentation and is working to develop new procedures requiring budget submitting offices to coordinate with the DOD OIG and the IPA to determine appropriate security procedures for accessing and obtaining required documentation supporting military pay transactions selected for testing. DFAS processes military pay for the Navy and other DOD reporting entities and has established control objectives and control activities to provide reasonable assurance on the effectiveness of its military pay operations. In addition, DFAS identified certain complementary controls that the Navy and other user entities—that is, those that use DFAS's services—should also establish to provide reasonable assurance that control objectives are achieved. DOD's FIAR Guidance requires that user entities coordinate with service providers, such as DFAS, to understand service provider user control assumptions and test those controls to ensure that they are operating effectively.
The Navy linked many of the complementary controls identified by DFAS to relevant Navy control activities; however, it determined that it was not responsible for the effectiveness of certain complementary controls for approving and monitoring Navy personnel access to the Defense Joint Military Pay System (DJMS) because DFAS subsequently approves and grants DJMS access and Navy personnel have "view only" access. Although Navy personnel access to DJMS may be limited, "view only" access enables users to see and produce documentation from DJMS containing certain personally identifiable information associated with Navy servicemembers. According to Navy officials, in making its determination, the Navy had not considered how such access increases the risk of unauthorized disclosure of such information or could potentially lead to its use for unintended purposes. We shared these concerns with Navy officials, and they agreed that additional efforts were needed to further assess these risks and the need for complementary controls to address them. They also indicated that consideration of these controls will be incorporated into their strategy to work with relevant Navy and DFAS stakeholders to assess the controls and to develop and implement corrective action plans to address any identified deficiencies. However, the Navy had not established milestones or set a date for completing these efforts. As part of its examination of the Navy's military pay audit readiness assertion, the IPA determined that the Navy provided sufficient documentation to support its schedule of military pay activity for April 2013. However, given the limited scope of the IPA's examination (the Navy limited it to 1 month of activity and excluded key areas), questions concerning the Navy's future audit readiness remain. Although the Navy has taken steps to improve military pay auditability, the extent to which it has effectively addressed deficiencies associated with key controls and selected financial systems identified through its assertion and validation efforts, including the IPA examination, remains unclear. Without effective internal controls and systems, auditors will likely be required to perform additional, more costly procedures to obtain required assurance in future financial statement audits, and the Navy's ability to consistently produce timely, reliable financial information will remain at risk. In addition, the Navy's effective coordination with the IPA, the DOD OIG, and service providers is essential for achieving audit readiness. However, the Navy did not effectively coordinate certain important tasks—such as providing its management representation letter to the IPA in a timely manner and assessing and implementing certain complementary controls identified by DFAS—and these shortfalls could hamper its ability to meet this goal. An audit of the Navy's fiscal year 2015 Schedule of Budgetary Activity, for which military pay activity represents a significant portion of obligations and outlays, is currently under way and is expected to provide feedback on efforts necessary to achieve DOD's goal of financial statement auditability department-wide by September 30, 2017. While this audit provides a milestone for measuring progress, we continue to stress the importance of addressing fundamental systems and control deficiencies, which will lead to lasting financial management improvements and, as a result, provide greater assurance of future audit readiness.
To help improve the Navy's audit readiness efforts for future Schedule of Budgetary Activity audits, we are making the following two recommendations. We recommend that the Secretary of the Navy direct the Assistant Secretary of the Navy (Financial Management and Comptroller) to:

establish a policy for coordinating with auditors, consistent with DOD FMR requirements, on the dating of management representation letters and on when they need to be provided to auditors in future audits; and

establish milestones for assessing and effectively implementing certain complementary controls identified by DFAS to help the Navy achieve its military pay-related control objectives.

We provided a draft of this report to the Navy for review and comment. In written comments, reprinted in appendix II, the Navy concurred with our two recommendations and agreed that continued efforts are needed to address identified findings and complete corrective actions. The Navy also described actions it has taken or has under way in response to our recommendations, including actions to ensure that management representation letters are provided within required time frames and to establish milestones to ensure that military pay-related control objectives are strengthened. In its comments, the Navy recommended a modification to the title of our report to reflect the Navy's ability to support its schedule of military pay activity for April 2013 with complete populations and sufficient documentation. In our report, we acknowledge that the Navy has taken steps to improve military pay auditability. However, significant concerns regarding its future audit readiness remain because of limitations the Navy placed on the scope of the IPA's examination, such as the exclusion of selected systems the Navy relies on to process and report military pay activity, as well as uncertainty regarding the extent to which the Navy has effectively addressed deficiencies identified through its assertion and validation efforts. Based on our findings, we continue to stress the importance of addressing fundamental systems and control deficiencies, which will lead to lasting financial management improvements and, as a by-product, provide greater assurance of future audit readiness. As a result, we believe the title of our report appropriately focuses on efforts needed to address the deficiencies discussed in this report and to improve Navy financial management, and consequently we have not revised the title as suggested by the Navy. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Defense, the Under Secretary of Defense (Comptroller)/Chief Financial Officer, the Secretary of the Navy, the Assistant Secretary of the Navy (Financial Management and Comptroller), the Director of the Defense Finance and Accounting Service, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III.
The objectives of our review were to determine the extent to which (1) the Navy was able to provide sufficient documentation to support a complete and valid population of detailed transactions reconcilable to its schedule of military pay activity for April 2013 and (2) the Navy's military pay assertion and validation efforts, which include the IPA's examination, contribute to future audit readiness. To address our first objective, we analyzed Navy and Defense Finance and Accounting Service (DFAS) documentation to gain an understanding of the Navy's military pay audit readiness assertion and the processes, procedures, and systems used to process, report, and document Navy military pay. We also analyzed IPA and Navy documentation supporting the Navy's schedule of military pay activity for April 2013, including reconciliations of reported amounts to relevant documentation produced by payroll, personnel, and disbursement systems and the results of IPA procedures, to assess the sufficiency of the evidence the Navy provided the IPA to demonstrate a complete and valid population of military payroll transactions supporting its schedule of military pay activity for April 2013. In addition, we recalculated the reconciliations, traced selected amounts to supporting documentation, and assessed the appropriateness of selected reconciling items. We also reviewed the IPA's plan, testing, projections, and conclusions regarding whether the Navy provided sufficient documentation to support the accuracy of transactions reflected in its schedule of military pay activity for April 2013. The IPA selected a sample of 405 leave and earnings statements for testing using a stratified random sample, with the stratification based on transaction dollar amount. We randomly selected a 5 percent nongeneralizable sample of 21 leave and earnings statements from the IPA's sample of 405 and re-performed the IPA procedures to assess the reliability of its work. We examined Navy documentation supporting pay and entitlement transactions included on the 21 leave and earnings statement sample items we tested and re-performed the pay and entitlement calculations. We also re-performed the IPA's error projection calculations. We used the Department of Defense (DOD) Financial Improvement and Audit Readiness guidance to assess whether documentation supporting pay and entitlement transactions met established requirements. We also interviewed IPA officials to understand test case results. We also reviewed the results of the IPA's analytical review procedures to assess the Navy's portion of Federal Insurance Contributions Act and retirement pay contribution amounts reflected in the Navy's schedule of military pay activity for April 2013 and recalculated and re-performed other selected IPA procedures.
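The error-projection step described above can be illustrated with a minimal mean-per-unit sketch for a stratified sample. The stratum sizes, sample sizes, and error amounts below are hypothetical, and the IPA's actual estimation method may have differed (for example, it may have used ratio estimation or different stratum boundaries).

    # Minimal sketch of projecting sample errors to a population under a
    # stratified random sample. For each stratum: N = population count,
    # n = sample count, errors = misstatements found among the n sampled
    # items (sampled items with no error contribute zero). All figures
    # are hypothetical.
    def project_total_error(strata):
        total = 0.0
        for s in strata:
            mean_error = sum(s["errors"]) / s["n"]  # average error per sampled item
            total += mean_error * s["N"]            # scale to the stratum population
        return total

    strata = [
        {"N": 250_000, "n": 300, "errors": [12.50, 48.75]},  # low-dollar stratum
        {"N": 40_000, "n": 105, "errors": [310.00]},         # high-dollar stratum
    ]
    print(f"projected total error: ${project_total_error(strata):,.2f}")

The projected total would then be compared with the tolerable misstatement set for the engagement to decide whether the estimated errors are material.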
To address our second objective, we reviewed documentation provided by the Navy concerning (1) the Navy's internal controls supporting its military pay audit readiness assertion; (2) the status of the Navy's actions to address identified military pay-related internal control and information technology system deficiencies; (3) audit readiness efforts of selected military pay-related areas the Navy identified as outside the scope of its military pay audit readiness assertion; and (4) the Navy's coordination with the IPA, the DOD Office of Inspector General, and service providers on selected matters, such as the management representation letter and complementary controls, to identify areas where further improvements could be made to contribute to future audit readiness. We also reviewed documentation provided by the IPA, including reports related to its examination of the Navy's military pay audit readiness assertion, results of internal control test procedures, and other audit documentation, to determine the nature and extent of findings and recommendations identified by the IPA based on its examination. To assess the reasonableness of the IPA's internal control testing, we re-performed the IPA procedures for (1) a nongeneralizable sample of 10 of the 20 controls the IPA concluded were effective and (2) the 14 controls the IPA identified as not designed or operating effectively. We also interviewed officials from DOD, the IPA, DFAS, and the Navy's Office of Financial Operations and Bureau of Naval Personnel to obtain explanations and clarifications associated with our evaluation of the documentation. In accordance with the relevant sections of the Financial Audit Manual on relying on the work of others, we obtained and reviewed the IPA's most recent peer review, the IPA's statement of independence, and the qualifications of key IPA personnel. We reviewed the contract, contract modifications, and statements of work for the IPA's examination of the Navy's military pay assertion. We attended key meetings between the IPA and the Navy related to the IPA's examination of the Navy's military pay assertion. The IPA performed its work from September 2013 through September 2014 and issued its opinion report in January 2015. We conducted this performance audit from April 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, James Kernen (Assistant Director), Carl Barden, Robert Dacey, Francine DelVecchio, Wilfred Holloway, Jason Kelly, Gregory Loendorf, Sheila Miller, Jared Minsk, Marc Oestreicher, Matthew Ward, and Matthew Zaun made key contributions to this report. | DOD continues to work toward achieving auditability of its financial statements. As part of that effort, the Navy in March 2013 asserted audit readiness of its military payroll activity, which represents a significant portion of its expenditures. Based on its examination, an independent public accountant (IPA) found that the Navy's assertion, which focused in part on a 1-month schedule of military pay activity, was fairly stated. GAO was asked to assess the Navy's military pay audit readiness efforts.
This report examines the extent to which (1) the Navy was able to provide sufficient documentation to support a complete and valid population of detailed transactions reconcilable to its schedule of military pay activity for April 2013 and (2) the Navy's military pay assertion and validation efforts contribute to future audit readiness. GAO reviewed the IPA's audit documentation and analyzed documentation that the Navy provided to the IPA; reviewed documentation on identified military pay control deficiencies and the status of the Navy's actions to address them; and interviewed Navy, IPA, and DFAS officials. Based on documentation provided by the Navy and the results of audit procedures, the IPA concluded that information reported on the Navy's schedule of military pay activity for April 2013 reconciled to a complete population of pay transactions that were adequately supported and valid. GAO reviewed the Navy's documentation and the IPA's related audit documentation. Nothing came to GAO's attention that raised concerns regarding the adequacy of the Navy's documentation beyond those the IPA identified and determined to be immaterial. Both the IPA's examination and the Navy's assertion and validation efforts identified additional risks to the Navy's future audit readiness. For example, the IPA found that 14 of 34 military pay controls it examined were either not designed effectively or not operating effectively. Further, the Navy limited the scope of the IPA's examination to focus on 1 month of activity so that it would have time to address any deficiencies identified prior to the audit of its fiscal year 2015 Schedule of Budgetary Activity, which is currently under way. However, because of the volume of transactions during a 12-month period, obtaining supporting documentation may be more challenging than supporting transactions limited to a 1-month period. In addition, the Navy identified extensive deficiencies in six personnel and other key systems it relies on to process and report military pay activity. Navy officials acknowledge that additional efforts are needed to fully address these deficiencies. Questions also exist regarding the audit readiness of certain related activities beyond the scope of the Navy's military pay activities—such as financial reporting controls related to reconciling the Navy's Fund Balance with Treasury—because of extensive deficiencies or because they have not been independently examined. Achieving audit readiness also requires coordination with the IPA, the Department of Defense (DOD) Office of Inspector General, and service providers; however, the Navy did not always effectively coordinate these activities. For example, GAO found that the Navy did not (1) establish milestones to assess the effectiveness of certain of its controls associated with payroll services provided by the Defense Finance and Accounting Service (DFAS) and (2) effectively coordinate efforts to ensure that the required management representation letter was provided to the IPA in a timely manner. The audit of the Navy's fiscal year 2015 Schedule of Budgetary Activity, for which military pay activity represents a significant portion of reported obligations and outlays, is intended to help identify areas for additional focus and facilitate efforts to achieve DOD's goal of financial statement auditability department-wide by September 30, 2017.
However, without reliable controls and systems, auditors will likely need to perform additional, more costly procedures to obtain assurance in future audits, and the reliability of financial information for day-to-day decision making will remain at risk. GAO continues to stress the importance of addressing fundamental systems and control deficiencies, which will lead to lasting financial management improvements and, as a result, provide greater assurance of future audit readiness. GAO recommends that the Navy establish (1) milestones for assessing and implementing certain controls associated with payroll services provided by DFAS and (2) a policy to coordinate with auditors on providing required management representation letters in a timely manner. The Navy agreed with GAO's recommendations and described actions taken or under way to address them.
It is perfectly legal for U.S. persons to hold money offshore. Taxpayers may hold foreign accounts and credit cards for a number of legitimate reasons. For example, taxpayers may have worked or traveled overseas extensively or inherited money from a foreign relative. As shown in figure 1, although holding money offshore is legal, taxpayers must generally report their control over accounts valued at more than $10,000. Taxpayers must also report income, whether earned in the United States or offshore. The type and extent of individual taxpayers' illegal offshore activity varies. In 2004, we reviewed the Offshore Voluntary Compliance Initiative (OVCI) to provide information to Congress on the characteristics of taxpayers who came forward regarding their noncompliant offshore activities, and to understand how those taxpayers became noncompliant. According to IRS data, OVCI applicants were a diverse group, with, for instance, wide variations in income and occupation. In each of the 3 years of OVCI we reviewed, at least 10 percent of the OVCI applicants had original adjusted gross incomes (AGI) of more than half a million dollars, while the median original AGI of applicants ranged from $39,000 in tax year 2001 to $52,000 in tax year 2000. Applicants listed over 200 occupations on their federal tax returns, including accountants, members of the clergy, builders, physicians, and teachers. Some OVCI applicants' noncompliance appeared to be intentional, while others' appeared to be inadvertent. Those applicants who had hidden money offshore through fairly elaborate schemes involving, for instance, multiple offshore bank accounts, appeared to be deliberately noncompliant. Other applicants appeared to have fallen into noncompliance inadvertently, for example, by inheriting money held in a foreign bank account and not realizing that income earned on the account had to be reported to IRS on their tax returns. OVCI applicants' median adjustment to taxes due was relatively modest. For tax year 2001, the median additional taxes owed were $4,401, median penalties assessed were $657, and median interest owed was $301. However, other examples of offshore evasion have involved very substantial sums, complex structures, and clear nefarious intent. For example, in 2006, Congress found several cases involving taxpayers with relatively large sums in abusive offshore transactions, including a U.S. businessman who, with the guidance of a prominent offshore promoter, moved between $400,000 and $500,000 in untaxed business income offshore. In another case, in 2006 a wealthy American pleaded guilty to tax evasion accomplished by creating offshore corporations and trusts, and then using a series of assignments, sales, and transfers to place about $450 million in cash and stock offshore. According to the indictment, the businessman used these methods to evade more than $200 million in federal and District of Columbia income taxes. Limited transparency regarding U.S. persons' financial activities in foreign jurisdictions contributes to the risk that some persons may use offshore entities to hide illegal activity from U.S. regulators and enforcement officials. For instance, individuals can sometimes use corporate entities to disguise ownership or income. Abusive offshore schemes are often accomplished through the use of limited liability companies (LLC), limited liability partnerships, and international business corporations, as well as trusts, foreign financial accounts, debit or credit cards, and other similar instruments.
According to IRS, offshore schemes can be complex, often involving multiple layers and multiple transactions used to hide the true nature and ownership of the assets or income that the taxpayer is attempting to conceal from IRS. In addition, creation of offshore entities and structures can be relatively easy and inexpensive. For example, establishing a Cayman Islands exempted company can be accomplished for less than $600 (not taking into account service providers' fees), and the company is not required to maintain its register of shareholders in the Cayman Islands or hold an annual shareholders meeting. Other offshore jurisdictions provide similar services to those wishing to set up offshore entities. Another factor that makes it easier for individuals to avoid paying taxes through the use of offshore jurisdictions is that taxpayers' compliance is largely based on voluntary self-reporting. When reporting is entirely voluntary, compliance can suffer. IRS has found that when there is little or no reporting of taxpayers' income by third parties to taxpayers and IRS, taxpayers include less than half of that income on their tax returns. One way that taxpayers are required to self-report foreign holdings is through the Report of Foreign Bank and Financial Accounts (FBAR) form. Citizens, residents, or persons doing business in the United States with authority over a financial account or accounts in another country exceeding $10,000 in value at any time during the year are to report the account to the Department of the Treasury (Treasury). U.S. persons transferring assets to or receiving distributions from a foreign trust are required to report the activity to IRS on Form 3520, Annual Return to Report Transactions With Foreign Trusts and Receipt of Certain Foreign Gifts. From 2000 through 2007, the number of FBARs received by Treasury increased by nearly 85 percent, according to IRS. In 2008, IRS also said that, despite the significant increase in filings, concern remains about the degree of reporting compliance for those who are required to file FBARs. Also in 2008, the congressional Joint Committee on Taxation (JCT) reported that three categories of U.S. persons are potentially not filing FBARs and Forms 3520 as required by law: taxpayers who are unaware of or confused about filing requirements, taxpayers who are concealing criminal activity, and taxpayers who are structuring transactions to avoid triggering the filing requirements. Our 2004 review of applicants who came forward to declare offshore income under OVCI also suggested a high level of FBAR nonreporting, even by those individuals who reported all of their income to IRS. For instance, for each year covered by OVCI, more than half of the applicants had generally reported all of their income and paid taxes due—even on their offshore income—but had failed to disclose the existence of their foreign bank accounts as required by Treasury. Finally, financial advisors often facilitate abusive transactions by enabling taxpayers' offshore schemes. We have reported that most possible offshore tax evasion cases are discovered through IRS's investigations of promoters of offshore schemes. During our 2004 review of OVCI, we examined Web sites promoting offshore investments and found that most provided off-the-shelf offshore companies or package deals, including the ability to incorporate offshore within the next day by buying an off-the-shelf company at a cost of $1,500.
These promoters provided taxpayers a way to quickly and easily move money offshore and repatriate it without reporting that money to IRS. Congress also has found promoters behind several offshore evasion schemes, such as the Equity Development Group (EDG), an offshore promoter based in Dallas that recruited clients through the Internet and helped them create offshore structures. With few resources and no employees, EDG enabled clients to move assets offshore, maintain control of them, obscure their ownership, and conceal their existence from family, courts, creditors, IRS, and other government agencies. In another case, a Seattle-based securities firm, Quellos Group, LLC, designed, promoted, and implemented securities transactions to shelter over $2 billion in capital gains from U.S. taxes, relying in part on offshore secrecy to shield its workings from U.S. law enforcement. This scheme was estimated to cost the U.S. Treasury about $300 million in lost revenue. Large financial firms also have been found to have advised U.S. clients on the use of offshore structures to hide assets and evade U.S. taxes. For example, in 2008 the IRS announced that Liechtenstein Global Trust Group (LGT), a leading Liechtenstein financial institution, had assisted U.S. citizens in evading taxes. In another case, in June 2008, Bradley Birkenfeld, a former employee of Swiss bank UBS AG, pleaded guilty in federal district court to conspiring with an American billionaire real estate developer, Swiss bankers, and his co-defendant, Mario Staggl, to help the developer evade paying $7.2 million in taxes by assisting in concealing $200 million of assets in Switzerland and Liechtenstein. Birkenfeld admitted that from 2001 through 2006 he routinely traveled to and had contacts within the United States to help wealthy Americans conceal their ownership of assets held offshore and evade paying taxes on the income generated from those assets. In February 2009, the Department of Justice announced that UBS entered into a deferred prosecution agreement for conspiring to defraud the U.S. government by helping U.S. citizens to conceal assets through UBS accounts held in the names of nominees and/or sham entities. In announcing the deferred prosecution agreement, the Department of Justice alleged that Swiss bankers routinely traveled to the United States to market Swiss bank secrecy to U.S. clients interested in attempting to evade U.S. income taxes. Court documents assert that, in 2004 alone, Swiss bankers allegedly traveled to the United States approximately 3,800 times to discuss their clients' Swiss bank accounts. UBS agreed to pay $780 million in fines, penalties, interest, and restitution for its actions. IRS has several initiatives that target offshore tax evasion, but tax evasion and crimes involving offshore entities are difficult to detect and prosecute. We have reported that offshore activity presents challenges related to oversight and enforcement, such as issues involved in self-reporting, the complexity of offshore financial transactions and relationships among entities, the lengthy processes involved with completing offshore examinations, the lack of jurisdictional authority to pursue information, the specificity required by information-sharing agreements, and issues with third-party financial institution reporting. As noted earlier, individual U.S. taxpayers and corporations generally are required to self-report their foreign taxable income to IRS. Self-reporting is inherently unreliable for several reasons.
Because financial activity carried out in foreign jurisdictions often is not subject to third-party reporting requirements, in many cases persons who intend to evade U.S. taxes are better able to avoid detection. For example, foreign corporations with no trade or business in the United States are not generally required to report to IRS any dividend payments they make to shareholders, even if those payments go to U.S. taxpayers. Therefore, a U.S. shareholder could fail to report the dividend payment with little chance of IRS detection. In addition, when self-reporting does occur, the completeness and accuracy of reported information is not easily verified. In addition, the complexity of offshore financial transactions can complicate IRS investigation and examination efforts. Specifically, offshore schemes can involve multiple entities and accounts established in different jurisdictions in an attempt to conceal income and the identity of the beneficial owners. For instance, we have previously reported on offshore schemes involving “tiered” structures of foreign corporations and domestic and foreign trusts in jurisdictions that allowed individuals to hide taxable income or make false deductions, such as in the case of United States v. Taylor. The defendants in United States v. Taylor and United States v. Petersen pleaded guilty in U.S. District Court to crimes related to an illegal tax evasion scheme involving offshore entities. As part of the scheme, the defendants participated in establishing a “web” of domestic and offshore entities that was used to conceal the beneficial owners of assets, and to conduct fictitious business activity that created false business losses, and thus false tax deductions, for clients. Given the characteristics of offshore evasion, IRS examinations that include offshore tax issues for an individual can take much longer than other examinations. Specifically, our past work has shown that from 2002 through 2005, IRS examinations involving offshore tax evasion took a median of 500 more calendar days to develop and examine than other examinations. The amount of time required to complete offshore examinations is lengthy for several reasons, such as technical complexity and the difficulty of obtaining information from foreign sources. For instance, many abusive offshore transactions are identified through IRS examination of promoters, and IRS officials have said that it can take years to get a client list from a promoter and, even with a client list, there is still much work that IRS needs to do before the participants of the offshore schemes can be audited. Because of the 3-year statute of limitations on assessments, the additional time needed to complete an offshore examination means that IRS sometimes has to prematurely end offshore examinations and sometimes chooses not to open them at all, despite evidence of likely noncompliance. We said that to provide IRS with additional flexibility in combating offshore tax evasion schemes, Congress should make an exception to the 3-year civil statute of limitations assessment period for taxpayers involved in offshore financial activity. IRS agreed that this would be useful. In testimony before Congress, the Commissioner of Internal Revenue has said that in cases involving offshore bank and investment accounts in bank secrecy jurisdictions, it would be helpful for Congress to extend the time for assessing a tax liability with respect to offshore issues from 3 to 6 years. 
Legislation was introduced in 2007, but not enacted, to increase the statute of limitations from 3 to 6 years for examinations of returns that involve offshore activity in financial secrecy jurisdictions. At a more fundamental level, jurisdictional limitations also make it difficult for IRS to identify potential noncompliance associated with offshore activity. Money is mobile, and once it has moved offshore, the U.S. government generally does not have the authority to require foreign governments or foreign financial institutions to help IRS collect tax on income generated from that money. In prior work we have reported that a Deputy Commissioner of IRS's Large and Midsized Business Division said that a primary challenge related to U.S. persons' uses of offshore jurisdictions is simply that when a foreign corporation is encountered or involved, IRS has difficulty pursuing beneficial ownership any further because of a lack of jurisdiction. IRS officials told us that IRS does not have jurisdiction over foreign entities whose incomes are not effectively connected with a trade or business in the United States. Thus, if a noncompliant U.S. person established a foreign entity to carry out non-U.S. business, it would be difficult for IRS to identify that person as the beneficial owner. In addition, while the U.S. government has useful information-sharing agreements in place to facilitate the exchange of information on possible noncompliance by U.S. persons with offshore jurisdictions, agreements involving the exchange of information on request generally require IRS to know a substantial amount about the noncompliance before other nations will provide information. For example, the U.S. government uses Tax Information Exchange Agreements (TIEA) as the dedicated channel for exchange of tax information, while Mutual Legal Assistance Treaties (MLAT) remain the channel for exchanging information for offenses involving nontax criminal violations. Nevertheless, the Commissioner of Internal Revenue recently said that in some instances the process to obtain names of account holders is inefficient, and IRS must rely on other legal and investigative techniques. As we have reported previously with regard to the use of these channels with the Cayman Islands government, neither TIEAs nor MLATs allow for "fishing expeditions," or general inquiries about a large group of accounts or entities. Rather, as is standard with arrangements providing for exchange of information on request, each request must involve a particular target. For example, IRS cannot send a request for information on all corporations established in the Cayman Islands over the past year. The request must be specific enough to identify the taxpayer and the tax purpose for which the information is sought, as well as state the reasonable grounds for believing that the information is in the territory of the other party. One program IRS established to help ensure compliance when offshore transactions occur is the qualified intermediary (QI) program. Under the QI program, foreign financial institutions voluntarily report to IRS income earned and taxes withheld on U.S. source income, providing some assurance that taxes on U.S. source income sent offshore are properly withheld and income is properly reported. However, significant gaps exist in the information available to IRS about the owners of offshore accounts. Perhaps most important, a low percentage of U.S. source income sent offshore flows through QIs. For tax year 2003, about 12.5 percent of $293 billion in U.S. income flowed through QIs.
The rest, or about $256 billion, flowed through U.S. withholding agents. While QIs are required to verify account owners' identities, U.S. withholding agents can accept owners' self-certification of their identities at face value. Reliance on self-certification leads to a greater potential for improper withholding because of misinformation or fraud. IRS does not measure the extent to which U.S. withholding agents rely on self-certifications. In our 2007 report, we recommended that IRS perform this measurement and use these data in its compliance efforts. For instance, IRS could increase oversight for U.S. withholding agents who primarily rely on self-certifications in determining whether withholding should occur. IRS has taken some steps to measure such reliance, but IRS's approach thus far has not been systematic and also does not address improving the efficiency of its compliance efforts. The previously discussed case of Swiss bank UBS provides a stark example of the QI program's vulnerabilities. In February 2009, UBS entered into a deferred prosecution agreement with Justice and agreed to pay $780 million in fines, penalties, interest, and restitution for defrauding the U.S. government by helping U.S. taxpayers hide assets through UBS accounts held in the names of nominees and/or sham entities. UBS entered into a QI program agreement with IRS in 2001 and was required to report U.S. citizens' income to IRS during the time that it conspired to defraud the U.S. government. We also recommended that IRS require the QI program's external auditors to report on any indications of fraud or illegal acts that could significantly affect the results of their reviews of the QIs' compliance with their agreements. However, it should be noted that we cannot say that having this reporting requirement in place would have forestalled UBS's efforts to defraud the United States or detected them earlier. IRS has proposed some amendments to the QI program that would somewhat enhance QI auditors' responsibilities in this area. In our 2007 report on the QI program, we also recommended that IRS determine why U.S. withholding agents and QIs report billions of dollars in funds flowing to unknown jurisdictions and unidentified recipients, and recover any withholding taxes that should have been paid. IRS has taken steps toward implementing this recommendation. We also recommended that IRS modify QI contracts to require electronic filing of forms and invest the funds necessary to perfect the data. IRS is including an application for filing information returns electronically in all QI applications and renewals but has not measured whether including the forms in the applications has had an impact on the number of electronic filers. In our 2004 review of OVCI, we noted that the diverse types of individuals involved in offshore noncompliance may require multiple compliance strategies on the part of IRS. The limited transparency involved in U.S. persons' activities in offshore jurisdictions also presents several challenges to IRS and Treasury. As Commissioner of Internal Revenue Shulman recently commented, "There is general agreement in the tax administration community that there is no 'silver bullet' or one strategy that will alone solve the problems of offshore tax avoidance." Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other members of the committee may have at this time.
For further information regarding this testimony, please contact Michael Brostek, Director, Strategic Issues, at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include David Lewis, Assistant Director; S. Mike Davis; Jonda VanPelt; Elwood White; and A.J. Stephens. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Much offshore financial activity by individual U.S. taxpayers is not illegal, but numerous schemes have been devised to hide the true ownership of funds held offshore and income moving between the United States and offshore jurisdictions. In recent years, GAO has reported on several aspects of offshore financial activity and the tax compliance and tax administration challenges such activity raises for the Internal Revenue Service (IRS). To assist the Congress in understanding these issues and to support Congress's consideration of possible legislative changes, GAO was asked to summarize its recent work describing individual offshore tax noncompliance, factors that enable offshore noncompliance, and the challenges that U.S. taxpayers' financial activity in offshore jurisdictions poses for IRS. This statement was primarily drawn from previously issued GAO products. Individual U.S. taxpayers engage in financial activity involving offshore jurisdictions for a variety of reasons. When they do, they are obligated to report any income earned in the course of those activities. They are also required to report when they control more than $10,000 in assets outside the country. However, much of this required reporting depends on taxpayers knowing their reporting obligations and voluntarily complying. Some taxpayers do not comply with their income and asset reporting obligations. Limited transparency, the relative ease and low cost of establishing offshore entities, and an array of financial advisors can facilitate tax evasion. IRS's Qualified Intermediary (QI) program has helped IRS obtain information about U.S. taxpayers' offshore financial activity, but as the recent case against the large Swiss bank UBS AG underscores, the program alone is insufficient to address all offshore tax evasion. Earlier, GAO had recommended changes to improve QI reporting, make better use of reports, and enhance assurance that any fraudulent QI activity is detected. IRS examinations that include offshore tax issues can take much longer than other examinations. GAO's past work has shown that from 2002 through 2005, IRS examinations involving offshore tax evasion took a median of 500 more calendar days to develop and examine than other examinations. The amount of time required to complete offshore examinations is lengthy for several reasons, such as technical complexity and the difficulty of obtaining information from foreign sources. However, the same statute of limitations preventing IRS from assessing taxes or penalties more than 3 years after a return is filed applies to both domestic and offshore financial activity.
The additional time needed to complete an offshore examination means that IRS sometimes has to prematurely end offshore examinations and sometimes chooses not to open them at all, despite evidence of likely noncompliance. In testimony before Congress, the Commissioner of Internal Revenue has said that in cases involving offshore bank and investment accounts in bank secrecy jurisdictions, it would be helpful for Congress to extend the time to assess a tax liability with respect to offshore issues from 3 to 6 years.
The federal government lacks a clear picture of the volume of discrimination and whistleblowing reprisal cases involving federal employees. The lack of a complete accounting of cases is in part a by-product of the complexity of the redress system for federal employees and the different ways in which case data are reported. The Notification and Federal Employee Antidiscrimination and Retaliation Act of 2001 (NoFEAR Act) would require agencies to report the number of discrimination and whistleblower reprisal cases. Executive branch civil servants are afforded opportunities for redress of complaints of discrimination or retaliation for whistleblowing at three levels: first, within their employing agencies; next, at one of the administrative bodies with sometimes overlapping jurisdictions that investigate or adjudicate their complaints; and, finally, in the federal courts. Where discrimination is alleged, the Equal Employment Opportunity Commission (EEOC) hears complaints employees file with their agencies and reviews agencies' decisions on these complaints. In a case in which an employee alleges that discrimination was the motive for serious personnel actions, such as dismissal or suspension for more than 14 days, the employee can request a hearing before the Merit Systems Protection Board (MSPB). MSPB's decisions on such cases can then be reviewed by EEOC. For federal employees who believe that they have been subject to whistleblower reprisal, the Office of Special Counsel (OSC) will investigate their complaints and seek corrective action when a complaint is valid. When agencies fail to take corrective action, OSC or the employee can take the case to MSPB for resolution. Alternatively, an employee can file a whistleblower reprisal complaint directly with MSPB, if the personnel action taken against the person is itself appealable to MSPB. In addition, under certain environmental laws and the Energy Reorganization Act, employees can ask the Department of Labor (DOL) and the Nuclear Regulatory Commission to investigate their complaints. Employees who belong to collective bargaining units represented by unions can also file grievances over discrimination and reprisal allegations under the terms of collective bargaining agreements. In those situations, the employee must choose to seek relief either under the statutory procedure discussed above or under the negotiated grievance procedure, but not both. If an employee files a grievance alleging discrimination under the negotiated grievance procedure, the Federal Labor Relations Authority (FLRA) can review any resulting arbitrator's decision. A grievant may appeal the final decision of the agency, the arbitrator, or FLRA to EEOC. A complainant dissatisfied with the outcome of his or her whistleblower reprisal case can file an appeal to have the case reviewed by a federal appeals court. An employee with a discrimination complaint who is dissatisfied with a decision by MSPB or EEOC, however, can file a lawsuit in a federal district court and seek a de novo trial. With reporting requirements and procedures varying among the administrative agencies and the courts, data on the number of discrimination and whistleblower reprisal cases are not readily available to form a clear and reliable picture of overall case activity. However, available data do provide some insights about caseloads and trends.
These data and our prior work show that most discrimination and whistleblower reprisal cases involving federal employees are handled under EEOC, MSPB, and OSC processes, with complaints filed under EEOC's process by far accounting for the largest volume of cases. In fiscal year 2000, federal employees filed 24,524 discrimination complaints against their agencies under EEOC's process. In the same year, MSPB received 991 appeals of personnel actions that alleged discrimination. MSPB also received 414 appeals alleging whistleblower reprisal in fiscal year 2000, while OSC received 773 complaints of whistleblower reprisal. There are two caveats I need to offer about these statistics. The first is that because of jurisdictional overlap among the three agencies, the statistics cannot be added together to give a total number of discrimination and whistleblower reprisal complaints. The second caveat is that in our past work, we found some problems with the reliability and accuracy of data reported by EEOC. Notwithstanding these caveats, the available data also show that the last decade saw an overall increase in the number of cases, particularly discrimination complaints under EEOC's jurisdiction. The number of cases under EEOC's jurisdiction, which stood at 17,696 in fiscal year 1991, showed a fairly steady upward trend, peaking at 28,947 in fiscal year 1997. Although the number of new cases each year has declined since fiscal year 1997, the number of cases in fiscal year 2000—24,524—is almost 40 percent greater than in fiscal year 1991, despite a smaller federal workforce. Caseload data can be a starting point for agency managers to understand the nature and scope of issues in the workplace involving discrimination, reprisal, and other conflicts and problems, and can help in developing strategies for dealing with these issues. However, caseload data can only be a starting point because they obviously do not capture any discrimination or reprisal that is not reported. As I discussed above, most discrimination complaints are handled within the process under EEOC's jurisdiction. However, we have found in our past work that EEOC does not collect data in the form that decisionmakers and program managers need to discern trends in workplace issues represented by discrimination complaints, understand the issues underlying these complaints, and plan corrective actions. Although EEOC has initiatives under way to deal with data shortcomings, relevant information is still lacking on such matters as (1) the statutory basis (e.g., race, sex, or disability discrimination) under which employees filed complaints and (2) the kinds of issues, such as nonselection for promotion or harassment, that were cited in the complaints. The NoFEAR Act would also require agencies to report the status or disposition of discrimination and whistleblower reprisal cases. The available data show that most allegations of discrimination and reprisal for whistleblowing are dismissed, withdrawn by the complainant, or closed without a finding of discrimination. However, many other cases are settled. In fiscal year 2000, of the 27,176 discrimination cases closed within EEOC's jurisdiction, 5,794 (21.3 percent) were closed through a settlement. At MSPB, 279 (28.5 percent) of the 980 appeals that alleged discrimination were settled. With regard to the 440 whistleblower cases at MSPB, 93 (21 percent) were settled.
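These disposition figures are simple ratios of settled cases to closed cases. As a purely illustrative check (the counts below are the ones cited above; the snippet itself is an editorial aid, not part of any agency's reporting), the settlement rates can be recomputed as follows:

```python
# Illustrative recomputation of the fiscal year 2000 settlement rates cited
# above. The counts are those reported in this statement; the dictionary
# structure is hypothetical and for illustration only.
dispositions = {
    "EEOC discrimination cases closed": (5794, 27176),  # (settled, total closed)
    "MSPB discrimination appeals": (279, 980),
    "MSPB whistleblower cases": (93, 440),
}

for label, (settled, total) in dispositions.items():
    print(f"{label}: {settled:,} of {total:,} settled ({settled / total:.1%})")
```

Run as written, the sketch reproduces the rates cited above: 21.3 percent, 28.5 percent, and 21.1 percent (rounded to 21 percent in the text).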
While settlements are made when evidence may point to discrimination or reprisal, at other times an agency may make a business decision and settle for a variety of reasons, including that pursuing a case may be too costly, even if the agency believes it would have ultimately prevailed. Finally, in some cases, discrimination or reprisal is found. Of the 27,176 cases within the discrimination complaint process under EEOC's jurisdiction that were closed in fiscal year 2000, 325 (about 1 percent) contained a finding of discrimination. At MSPB, of the 980 cases alleging discrimination, discrimination was found in 4 (four-tenths of a percent). In 440 cases alleging whistleblower reprisal it reviewed, MSPB found that a prohibited personnel practice occurred in 2 (five-tenths of a percent) of the cases. At OSC, favorable actions were obtained in 47 of 671 (7 percent) whistleblower reprisal matters closed in fiscal year 2000. It is important to note that agencies have responded to the rise in the number of complaints and the costs associated with them by adopting alternative dispute resolution (ADR). Using ADR processes, such as mediation, agencies intervene in the early stages of conflicts in an attempt to resolve or settle them before positions harden, workplace relationships deteriorate, and resolution becomes more difficult and costly. A premise behind a requirement EEOC put in place in 1999 that agencies make ADR available was that the complaint system was burdened with many cases that reflected basic workplace communications problems and not necessarily discrimination. Some agencies, most notably the Postal Service, have reported reductions in discrimination complaint caseloads through the use of ADR. In fact, the Postal Service, from fiscal year 1997 through fiscal year 2000, saw a 26 percent decline in the number of discrimination complaints that the agency largely attributes to its mediation program. Because ADR prevents some disputes from rising to formal complaints, a reduction in the number of formal complaints should not necessarily be viewed as a reduction in workplace conflict, but it can indicate that an agency is more effectively dealing with workplace conflict. Meaningful data along the lines I discussed earlier are useful in helping to measure an agency's success in adhering to merit system principles, treating its people in a fair and equitable way, and achieving a diverse and inclusive workforce. We encourage such assessments of agencies' workplaces and human capital systems to help them align their people policies to support organizational performance goals. In addition, data foster transparency, which in turn provides an incentive to improve performance and enhance the image of the agency in the eyes of both its employees and the public. Another possible means of promoting accountability might be to have organizations bear more fully the costs of payments to complainants and their lawyers made in resolving cases of discrimination and reprisal for whistleblowing. Currently, federal agencies do not always bear the costs of settlements or judgments in discrimination or reprisal complaints. Agencies will pay these costs when a complaint is resolved by administrative procedures, such as the discrimination complaint process. However, when a lawsuit is filed, any subsequent monetary relief is generally paid by the Judgment Fund. (One exception is the Postal Service, which is responsible for settlement and judgment costs.)
The Judgment Fund provides a permanent indefinite appropriation to pay settlements and judgments against the federal government. Congress created the Judgment Fund to avoid the need for a specific congressional appropriation for settlement and judgment costs and to allow for prompter payments. The NoFEAR Act would require that agencies reimburse the Judgment Fund for payments made for discrimination and whistleblower reprisal cases. Table 1 below shows payments made by agencies for discrimination complaint cases processed under administrative procedures within EEOC's jurisdiction and payments from the Judgment Fund for employment discrimination lawsuits (these were the only readily available data). In addition to attorney fees and expenses, payments made to complainants include back pay, compensatory damages, and lump sum payments. As the table shows, agencies made payments totaling about $26 million in fiscal year 2000 for discrimination complaint settlements and judgments. At the same time, agencies were relieved of paying almost $43 million in cases because of the existence of the Judgment Fund. The availability of the Judgment Fund to pay settlement and judgment costs has brought about debate with regard to agency accountability. On one hand, it could be argued that the Judgment Fund provides a safety net to help ensure that agency operations are not disrupted in the event of a large financial settlement or judgment. It can also be argued, however, that the fund discourages accountability by being a disincentive to agencies to resolve matters promptly in the administrative processes; by not pursuing resolution, an agency could shift the cost of resolution from its budget to the Judgment Fund and escape the scrutiny that would accompany a request for a supplemental appropriation. Congress dealt with a somewhat similar situation when it enacted the Contract Disputes Act in 1978, which requires agencies either to reimburse the Judgment Fund for judgments awarded in contract claims from available appropriations or to obtain an additional appropriation for such purposes. This provision was intended to counter the incentive for an agency to avoid settling and prolong litigation in order to have the final judgment against the agency occur in court. In reconciling these viewpoints on financial accountability, Congress will need to balance accountability with the needs of the public to receive expected services. Certainly, just as it is important for agencies to be held accountable in cases where discrimination or reprisal for whistleblowing is found, so must individuals be held accountable for engaging in such misconduct. The NoFEAR Act would require agencies to report the number of employees disciplined for discrimination, retaliation, or harassment. Publishing statistical data can be an important way for agencies to send a message to their employees that individuals will be held accountable for their actions in cases involving discrimination, retaliation, or harassment. Although we have not done any formal work in this area, we know of two agencies—the Department of Agriculture and the Internal Revenue Service (IRS)—that systematically review outcomes of discrimination cases to determine if any individual should be disciplined. Since January 1998, Agriculture has been reviewing cases in which discrimination was found or in which there were settlement agreements to determine if an employee should be disciplined for discrimination or misconduct related to civil rights.
An Agriculture official said that a formal policy on accountability and discipline in civil rights-related cases was pending approval. Since July 1998, IRS has been reviewing cases in which discrimination was found or in which there were settlement agreements to determine if the discrimination was intentional. Where an employee has been found to have discriminated against another employee of IRS (or a taxpayer or a taxpayer's representative), the IRS Restructuring and Reform Act of 1998 provides that the individual be terminated for his or her actions. Only the IRS Commissioner has the authority to mitigate termination to a lesser penalty. I would also add that besides traditional forms of discipline—such as termination, suspension, or a letter of reprimand—employees can be held accountable for their behavior through an agency's performance management system. For example, an employee whose behavior does not rise to the level of discrimination but otherwise demonstrates insensitivity or poor communication skills can and should have that fact reflected in his or her performance appraisal. The NoFEAR Act provides that agencies notify employees in writing of the rights and protections available to them under the antidiscrimination and whistleblower statutes and post this information on their Internet sites. This provision reinforces existing requirements that employees be notified of rights and remedies concerning discrimination and whistleblower protection. There has been a concern that federal employees were not sufficiently aware of their protections, particularly protections from reprisal for whistleblowing, and without sufficient knowledge of these protections, may not come forward to report misconduct or inefficiencies for fear of reprisal. We first pointed this out in a report issued in 1992. Now, almost a decade later, OSC has identified "widespread ignorance" in the federal workforce concerning OSC and the laws it enforces, even though agencies are to inform their employees of these protections. According to OSC's fiscal year 2000 Performance Report, responses to an OSC survey indicated that few federal agencies have comprehensive education programs for their employees and managers. To help ensure economical, efficient, and effective delivery of services for the benefit of the American people, allegations of discrimination and reprisal for whistleblowing in the federal workplace must be dealt with in a fair, equitable, and timely manner. Doing so requires, first, reliable and complete reporting of data as a starting point to understand the nature and scope of issues in the workplace involving discrimination, reprisal, and other conflicts and problems, and to help develop strategies for dealing with these issues. Second, agencies and individuals must be accountable for their actions. Third, the workforce must be aware of laws prohibiting discrimination and whistleblower reprisal, both to deter this kind of conduct and so that employees know what course of action they can take when misconduct has occurred.

Federal employees who report waste, fraud, and abuse should not have to fear discrimination and retaliation. Despite laws designed to protect whistleblowers, some employees have experienced, or believe that they have experienced, reprisals. Proposed legislation—the Notification and Federal Employee Antidiscrimination and Retaliation Act of 2001—would provide additional protections for federal employees and would provide important data to decisionmakers.
First, the act would require agencies to report the number of discrimination and whistleblower reprisal cases. Because of a lack of data, the federal government currently does not have a clear picture of the volume of discrimination and whistleblowing reprisal cases involving federal employees. Such data could be a starting point for agency managers to understand the nature and scope of issues in the workplace involving reprisals and discrimination. Second, the act would make agencies and their leaders accountable for providing fair and equitable workplaces. In addition, individuals would be held accountable for their actions in cases in which discrimination has occurred. Finally, the act would require agencies to notify employees in writing of their rights and protections. This provision reinforces existing requirements that employees be notified of the rights and remedies concerning discrimination and whistleblower protection.
Federal agencies owned more than 446,000 non-tactical vehicles in fiscal year 2015, according to the Federal Fleet Report. Five departments owned approximately 89 percent of these vehicles, as shown in Table 1. Agencies acquire vehicles through purchase or lease and are responsible for making decisions about the number and type of vehicles they need. Agencies obtain almost all of their vehicles through GSA, specifically by purchasing through GSA's Vehicle Purchasing program or leasing a vehicle through GSA Fleet. GSA is a mandatory source for purchase of new vehicles for federal executive agencies and other eligible users. According to federal guidelines, when deciding what vehicle to buy, agencies should purchase vehicles that meet their mission needs and represent the best value by considering price, delivery time, fuel economy, lifecycle cost, past performance, and other considerations. GSA Vehicle Purchasing offers an array of non-tactical vehicles and options at a savings from the manufacturer's invoice price, including traditional vehicles (such as pickup trucks and sedans) and specialized vehicles (such as firetrucks and utility trucks). GSA develops annual vehicle standards that establish the types and sizes of vehicles and general equipment it will offer through the GSA Vehicle Purchasing program. GSA also maintains an on-line procurement tool—known as AutoChoice—that allows purchasing agency officials to view the standard vehicle models, choose equipment options, view side-by-side comparisons of vehicle models from different manufacturers, place their orders, and track delivery. The order is then recorded in a GSA procurement database called ROADS. If GSA cannot meet an agency's needs for a specific vehicle, the agency can apply to GSA for a waiver. If approved, the agency may then purchase a vehicle directly from a non-GSA source, such as a dealership. In some cases, agencies are required or directed to acquire vehicles with a lower environmental footprint. For example, the Energy Policy Act of 1992 requires that 75 percent of agencies' light-duty vehicle acquisitions be alternative-fuel vehicles (AFVs). In addition, the Energy Independence and Security Act of 2007 prohibits agencies from acquiring any light-duty motor vehicle or medium-duty passenger vehicle that is not a low greenhouse-gas emitting vehicle. Executive Order 13693, issued in March 2015, directed that agencies plan for zero emission vehicles or plug-in hybrid vehicles to make up 20 percent of all new agency passenger vehicle acquisitions by December 31, 2020, and 50 percent of all new agency passenger vehicle acquisitions by December 31, 2025. Agencies are responsible for managing their vehicles' utilization in a manner that allows them to fulfill their missions and meet various federal guidelines and directives, such as by completing a vehicle allocation methodology (VAM). The VAM process is designed to help agencies identify the optimal size and composition of their respective fleets. Under GSA guidance, agencies are directed to complete a VAM survey, which measures the usage of each vehicle in the fleet, at least every 5 years. GSA guidance further advises agencies on how to complete the VAM process. For example, agencies are instructed to have standards for the minimum amount of use of a vehicle—called utilization criteria—that are appropriate for their missions.
Agencies define their own utilization criteria—which may include mileage, number of trips, or other metrics—and decide which vehicles, if any, to eliminate from their fleets. Federal agencies determine when to replace or dispose of vehicles based on federal vehicle replacement standards and their mission and program needs. GSA has established minimum standards in federal regulations that call for agencies to retain agency-owned vehicles for at least a requisite number of years or miles before replacing them. For example, an agency should keep a sedan or station wagon for at least 3 years or 60,000 miles, whichever occurs first. It may keep the vehicle beyond the minimum years and miles if the vehicle can be operated without excessive maintenance or substantial reduction in resale value. Conversely, the agency may replace a vehicle that has not yet met the threshold if the vehicle needs body or mechanical repairs that exceed the fair market value of the vehicle. Federal regulations allow agencies to dispose of vehicles that they no longer need and that have been declared excess, even if the vehicles have not met the minimum replacement standards for years and mileage. Agencies may dispose of vehicles by exchange, sale, transfer to another agency, or donation. Based on our analysis of ROADS data, federal agencies spent more than $1.6 billion from fiscal years 2011 through 2015 to purchase a wide variety of vehicles through GSA. Included in the $1.6 billion is approximately $2.5 million that agencies spent on vehicle options such as power seats and remote keyless start. In less than 1 percent of purchases during this time frame, agencies received approval to acquire a vehicle from a source other than GSA. In those cases, agencies purchased a variety of vehicles that included ambulances and modified passenger vehicles. From fiscal year 2011 through fiscal year 2015, federal agencies purchased 64,522 passenger vehicles and light trucks through GSA at a total cost of over $1.6 billion. Agencies used these vehicles to meet a wide variety of mission needs, including supporting operations on the U.S. border, transporting veterans, accessing remote locations, and hauling repair equipment, among other functions. The annual number of new passenger vehicles and light trucks purchased decreased from approximately 14,400 in fiscal year 2011 to approximately 13,300 in fiscal year 2015. See appendix II for information on selected agencies' procedures for purchasing vehicles. Five departments (DHS, the Department of Justice, USDA, DOD, and the Department of the Interior) purchased 90 percent of the vehicles purchased through GSA during this 5-year time period and spent a comparable percentage of the associated funds (see fig. 1). The average purchase prices for passenger vehicles and light trucks among these five departments were relatively comparable, ranging from $24,163 to $28,101. Similarly, the average price for such vehicles purchased by other federal agencies in our analysis was $26,107. The most expensive of these vehicles was a cargo van purchased by the Department of Justice for $158,191 in fiscal year 2012, for use by the FBI. The least expensive vehicle was purchased for $11,855 in fiscal year 2011 by the Department of Agriculture's Forest Service. Federal agencies purchased a variety of passenger vehicles and light trucks during this time period. Relatively large vehicles, such as pickup trucks, made up the majority of acquisitions and had higher average costs than sedans.
For example, of the 64,522 vehicles purchased from fiscal years 2011 through 2015, four-by-four (4x4) pickup trucks and 4x4 sport utility vehicles (SUVs) accounted for more than half of the purchases, while sedans accounted for about 15 percent. On average, these 4x4 SUVs cost approximately $7,600 more than a sedan, while 4x4 pickup trucks cost approximately $5,000 more (see table 2). According to available data, at least 52 percent of the passenger vehicles and light trucks purchased during fiscal years 2013 through 2015 were capable of running on alternative fuel. Fuel type information was available for approximately 83 percent of vehicle purchases during that time. According to GSA officials, the fuel type for a vehicle is not reported in the purchase data if manufacturers do not voluntarily specify the fuel type. Manufacturers reported a fuel type for fewer than 10 percent of vehicle purchases in fiscal years 2011 and 2012. As previously discussed, a variety of laws and directives instruct agencies to increase their acquisition of low-greenhouse-gas-emitting, hybrid, or zero-emissions vehicles. Agencies purchased vehicles during fiscal years 2011 through 2015 for locations throughout the continental United States and other areas. Some concentrations of spending were for vehicles delivered in and around the Washington, DC, capital region (see fig. 2). Another concentration of spending was for vehicles delivered in states such as Texas and California. Each year, GSA publishes standards for vehicles, including the features that come with particular base models. When making a purchase, agencies may change these standard vehicle features by selecting "options." According to federal regulations, when agencies opt for additional systems or equipment to be added to vehicle purchases, these systems and equipment should be selected for purposes related to overall safety, efficiency, economy, and suitability (i.e., mission) of the vehicle. Based on our analysis of ROADS data, agencies added approximately 350 different types of options to passenger vehicles and light trucks purchased from fiscal year 2011 through fiscal year 2015. In approximately 41 percent of the instances in which an agency added an option to a vehicle, the option increased the vehicle cost. In approximately 45 percent of instances, adding an option did not change the cost. In approximately 14 percent of instances, the selection resulted in a cost reduction. In analyzing these options, we were not able to determine if six of these types of options were related to safety, efficiency, economy, suitability, or administrative functions. These six option types included power seats, video entertainment systems, and heated or leather seats, among others (see table 3). Agencies added at least one of these six options to 7,344 vehicles (approximately 11.4 percent of the passenger vehicles and light trucks purchased through GSA during fiscal years 2011 to 2015) at a total cost of over $2.5 million. In some cases, agencies added multiple options to one vehicle. While these six options accounted for approximately 1.9 percent of all instances of options selected, they accounted for approximately 3.4 percent of the total cost of options. GSA does not determine the purpose for which an agency selects a particular option. GSA officials discussed some instances in which these six options may have been related to the agency's mission or could be considered a safety feature.
For example, according to GSA officials, remote keyless start could be a safety feature when a vehicle is operated in an extreme climate. From fiscal years 2011 through 2015, agencies submitted 102 waiver requests to GSA, requesting permission to purchase a total of approximately 550 vehicles through non-GSA sources (see table 4). According to GSA officials, vehicles purchased with a waiver account for less than 1 percent of annual purchases in any given year. DOD submitted almost 40 percent of the 102 waiver requests. Cumulatively, the five departments—DOD, Interior, DHS, Veterans Affairs (VA), and the Department of Justice (DOJ)—submitted approximately 80 percent of the requests. In reviewing these 102 waiver requests, GSA approved 56 (approximately 55 percent), denied 32 (approximately 31 percent), and did not ultimately process the remaining 14 (approximately 14 percent); in some of these cases, according to GSA officials, GSA directed agencies to other services or let them know that a waiver was not required. We selected 17 approved waivers from fiscal years 2013 through 2015 for additional review. For these 17 approved waivers, we found that agencies purchased vehicles ranging from ambulances to passenger vehicles to dump trucks, with an average cost of approximately $117,000. See appendix III for more details on the vehicles purchased with these 17 waivers. Some agency examples follow. The Army purchased a truck with a number of upgrades for $167,427, to be used for recruiting purposes, according to officials (see fig. 3). Army officials noted that the upgrades contributed to the truck's success as a recruiting tool and that the truck was purchased as a replacement for six Hummers. DHS purchased a pair of leather-appointed Chevrolet Suburbans for approximately $67,000 each to transport the U.S. Customs and Border Protection (CBP) Commissioner and dignitaries. VA purchased a minivan that was adapted to allow injured veterans to operate it, at a total cost of $50,515 (see fig. 4). According to federal standards for internal control, management should design control activities, such as policies and procedures, to achieve objectives and respond to risks. DHS, USDA, and Navy have policies aimed at achieving the objective of determining if vehicles are utilized and mitigating the risk of retaining vehicles that are not needed. However, as discussed below, we found that two agencies—CBP and the Natural Resources Conservation Service (NRCS)—did not follow their respective departments' policies for assessing vehicle utilization. Specifically, CBP did not assess the utilization of vehicles that fell below DHS's mileage minimums, and NRCS did not use USDA's utilization criteria or annually assess vehicles' utilization. As a result, these two agencies could not determine if 2,441 of the 12,175 vehicles we selected (20 percent) were fully utilized. Cumulatively, these two agencies incurred an estimated $13.5 million in depreciation and maintenance costs for these vehicles during fiscal year 2015 (see table 5). While these estimated costs might not be equal to the cost savings if all vehicles with undetermined utilization were determined to be underutilized and eliminated, the amount provides insight into the potential scope of the issue. Based on our review of policies and agency-provided data, we found that the Navy uses criteria and justification processes to determine if a vehicle is utilized and that it was able to determine that all of the 3,652 vehicles we selected for our review were utilized. DHS's fleet policy requires that agencies determine if their vehicles are justified.
DHS defines a fully justified vehicle as one that (1) travels a minimum number of miles per year (12,000 miles for sedans and 10,000 miles for light trucks), (2) meets alternative vehicle utilization criteria developed by the agency to reflect its mission needs, or (3) has an individual written justification if the vehicle did not meet DHS's minimum mileage criteria or any of the alternative criteria developed by the agency.

Of the 2,300 CBP vehicles that we examined, 1,862 (81 percent) either did not achieve the DHS mileage utilization criteria or did not have sufficiently accurate mileage data to determine if the vehicle met the DHS mileage minimums. CBP officials told us that DHS's mileage criteria are not always an appropriate utilization metric for the agency's diverse fleet, but CBP has not developed its own alternative criteria to determine if the vehicles that did not meet the DHS mileage criteria are utilized. While DHS policy instructs agencies to individually justify vehicles that do not meet DHS's or agency-developed utilization criteria, CBP officials could not provide justifications for these 1,862 vehicles, and one official stated that CBP does not develop such justifications. CBP incurred an estimated $12.7 million in maintenance and depreciation costs for these vehicles during fiscal year 2015. Because CBP did not determine if these vehicles were utilized, some of this cost may have been for vehicles that the agency did not need.

CBP officials explained that it would not be efficient or cost-effective to manually collect the data needed to establish criteria appropriate for CBP, such as "engine run time." Furthermore, even if such criteria were established, the lack of readily available data would make it difficult to measure vehicle performance against them. However, officials stated that CBP has begun to install "telematics" devices, which can measure and transmit data on a vehicle's use, in many of its vehicles. CBP officials plan to have the devices installed in approximately 60 percent of the agency's fleet by March 2017. Officials reported that once these devices are fully deployed, a variety of new data points may become available, including engine hours and idle time. Officials stated that they have begun collecting data from the telematics devices installed to date to support the eventual development of appropriate utilization criteria; however, they have not yet developed a specific plan that outlines how they will use the data to develop appropriate utilization criteria and evaluate vehicles against those criteria. Officials also reported that while they intend to eventually install telematics on the remaining 40 percent of CBP's fleet, there is currently no funding plan or timeline to do so, and CBP does not have a specific plan that details how it will assess the utilization of vehicles not equipped with telematics.

CBP officials stated that given the large number of vehicles that did not meet the DHS mileage criteria, it was too difficult to develop individual justifications. If CBP developed utilization criteria appropriate for a large percentage of its vehicles' missions, the number of vehicles requiring individual justifications could decrease substantially. In turn, this decrease could facilitate CBP's compliance with DHS's policy requiring justifications for vehicles that do not meet utilization criteria.

A 2012 USDA policy memo requires that all vehicles be utilized.
The memo specifies utilization criteria as a function of either mileage or days used, as shown in table 6 below, although agencies can request changes to these requirements. In addition to setting forth utilization criteria, the policy memo allows agencies to individually justify vehicles falling below the utilization minimums (for example, law enforcement vehicles or vehicles with a unique mission). Furthermore, the memo outlines USDA's expectation that NRCS and other agencies will annually identify vehicles that did not meet the utilization criteria or have an individual justification documenting why the vehicle should be retained.

NRCS did not follow the USDA policies on annual utilization and justification because key officials were not aware of these policies. Specifically, all of the USDA and NRCS fleet managers we spoke to were unaware of this policy memo, which officials from USDA said had not been recirculated since 2012. According to USDA officials, the utilization policy and the requirement for annual justifications were not widely discussed or shared. Thus, NRCS did not apply USDA's utilization criteria to its vehicles to determine if the vehicles were utilized. In addition, NRCS does not have a process for annually justifying its vehicles.

In fiscal year 2015 (the year covered in our review), NRCS conducted a VAM survey, which agencies must conduct at least every 5 years. As a part of that survey, NRCS developed individual justifications for its vehicles. As a result, NRCS determined that 91 percent of the 6,223 vehicles we selected for this review were utilized. For the remaining 9 percent of the selected vehicles (579), NRCS was unable to determine if the vehicles were utilized. While the VAM survey provided justifications for fiscal year 2015, NRCS officials reported that they did not plan to conduct the VAM survey annually because the surveys cover every vehicle in the fleet and are resource intensive. If NRCS followed USDA's policies on annual utilization and justification, the annual process would be less resource intensive than the VAM survey because it would not need to cover vehicles that met USDA's utilization criteria. NRCS retained all of the vehicles whose utilization it could not determine and incurred approximately $750,000 in maintenance and depreciation costs in fiscal year 2015 for the 579 vehicles that were retained.

DOD has established criteria for minimum annual mileage (either 7,500 or 10,000 annual miles for trucks, depending on vehicle characteristics, and 12,000 annual miles for sedans). According to Navy officials, Navy has additional criteria to assess the utilization of each vehicle based on mission needs and has a process to annually review the usage and justification of one-third of its fleet to determine if the vehicles are utilized and still needed. Navy reported that all 3,652 vehicles we selected for review met the DOD mileage criteria or had an individual justification in fiscal year 2015. Navy has a process to individually review its entire fleet within a 3-year cycle even if some vehicles meet the DOD mileage requirements. These triennial justifications—known as the TRIO process—are considered valid by the Navy until the vehicles are reassessed in 3 years. Through this process, each vehicle has its own requirement criteria and justification for retention. According to Navy officials, the TRIO process will be replaced by annual reviews within the next few years.
During panel discussions and individual interviews, fleet management officials from the three selected agencies—CBP, Navy, and NRCS—identified several key challenges to managing the costs of their fleets: alternative fuel vehicle requirements, fragmented data systems, and budget constraints.

Agency officials from CBP, Navy, and NRCS reported that the requirements for purchasing alternative fuel vehicles and using alternative fuel make it challenging to manage fleet costs. Some officials reported that complying with these requirements sometimes involves purchasing more expensive vehicle models. According to data from GSA's 2016 Model AFV Guide, in some cases acquisition costs for alternative fuel vehicles were substantially higher than those for the gasoline-powered model. For example, an electric-only subcompact car costs approximately 82 percent more than the standard gasoline-only model. Similarly, the plug-in hybrid electric version of a subcompact car costs approximately 99 percent more than the gasoline-only model. However, in other cases, acquisition costs for alternative fuel vehicles were the same as those of their gasoline-powered counterparts (see table 7).

Selected agency fleet management officials also reported that complying with AFV requirements involves installing and maintaining the infrastructure required for AFVs, a process that can be costly. For example, supporting AFVs can require electric charging stations, which agency officials reported to be costly and difficult to manage, given the outdated condition of some agencies' facilities and the need to ensure compliance with National Electrical Code standards. In addition, one official noted that if an agency's fleet involves a variety of alternative fuel sources—such as E-85, electric vehicles, and compressed natural gas—the agency would need to incur the cost of developing and maintaining infrastructure that supports each type of fuel source if it is not commercially available. We previously reported similar findings related to the costs of alternative fuel requirements. Specifically, in 2013, we reported that some agency officials found commercial vendors to be reluctant to install alternative fuel tanks when the return on investment was not promising. In addition, we noted that agencies found it difficult to meet certain energy requirements in a constrained budget environment due to the potential for related additional costs.

Officials from the three selected agencies also reported that administrative tasks associated with meeting AFV requirements can be difficult and resource intensive. For instance, these officials reported that it can be time-consuming for fleet managers to ensure that drivers of dual-fueled vehicles, which can run on gasoline or an alternative fuel, use the required alternative fuels when reasonably priced and available, which is defined as within a 5-mile or 15-minute drive. Similarly, agency officials reported that completing paperwork associated with the AFV requirements can take up costly staff time. For example, one official stated that an agency's mission may require the use of certain vehicle types—such as pickup trucks—that do not meet the Energy Independence and Security Act of 2007's requirement to purchase only low-greenhouse-gas-emitting vehicles, resulting in the need for the agency to certify that no low-greenhouse-gas-emitting vehicle is available to meet its functional needs.
The official explained that it was very time-consuming for his staff to complete the required paperwork for each vehicle in the fleet that was not a low-greenhouse-gas-emitting vehicle.

Officials from the three selected agencies reported that limitations of their fleet management information systems—including manual data entry and the recording of fleet data in multiple systems—can lead to increased costs in the form of staff time and missed opportunities to analyze the available information. Officials also reported that efforts to address these limitations by adopting more sophisticated technology typically involve obstacles such as the complications associated with cybersecurity. Officials from each selected agency said that their systems require the manual entry of some data, which is resource intensive. For example, one official said users needed to complete some forms by hand before the data were manually entered into the system. Officials also said users sometimes manually enter data in non-standard formats, which can increase the amount of time needed to analyze the data. In addition, according to agency officials, when fleet data are recorded in multiple systems that do not communicate with each other, more staff time is required to accomplish fleet management tasks. One official stated that in some cases, agencies within the same department may not use the same data systems, a situation that complicates internal processes such as transferring a vehicle from one agency to another.

Several officials said that they are interested in adopting new technologies that have the potential to streamline the collection and improve the accuracy of fleet data, such as using scanners to collect data from bar codes. However, officials cautioned that it can be costly and time-consuming to adopt new technologies and integrate them into existing systems and processes. For example, an official within Navy said that his office paid a local vendor to run diagnostics on their vehicles' engines—a costly process. In an attempt to minimize these costs, his office identified an available software program that could perform these diagnostics at a potential savings of $200–300 per test. The official said that as of October 2016 the Navy was 8 months into the process of evaluating the program for potential use. Similarly, officials from Navy reported challenges in adopting telematics. Specifically, according to Navy officials, telematics devices have been installed in approximately 700 owned vehicles in Navy's fleet, at an estimated cost of approximately $300,000; however, the Navy has not yet been able to activate 674 of those 700 installed systems, pending cybersecurity reviews to ensure that the vehicles cannot be manipulated by outside sources. Navy officials said that while telematics offer information that benefits some vehicles and missions, installing telematics in their vehicle fleets required a substantial initial investment.

Some officials said that when budgets are tight, they retain vehicles that need to be replaced beyond the standard replacement timeframe of three years or 60,000 miles for sedans and station wagons, which can lead to higher overall fleet costs. Officials explained that maintenance costs increase as vehicles age, so the overall lifecycle costs of owning older vehicles are higher. According to the fiscal year 2015 Federal Fleet Report, the average age of owned passenger vehicles was 5.8 years for DHS, 5.4 years for USDA, and 5.2 years for Navy.
According to officials, some agencies perform vehicle lifecycle cost analyses, which help determine when vehicles are no longer cost-effective to retain, among other decisions. However, one official said the agency was not always able to replace vehicles when its analyses suggested a vehicle should be replaced, due to budget constraints. Moreover, by using more of their fleet funds on maintenance, agencies have even fewer resources available to purchase new vehicles. Officials said that this cycle can make it challenging for fleet managers to contain costs. According to officials, to help address the costs of aging owned vehicles, Navy and NRCS are planning to increase their use of GSA-leased vehicles. An official from CBP reported that converting its fleet from owned to leased vehicles was generally not financially feasible because CBP installs mission-specific special systems and equipment and operates in conditions that would be likely to damage vehicles.

Given the billions of dollars spent annually to operate and maintain federally owned vehicles and the government-wide emphasis on efficient fleet management, it is critical for agencies to have sound fleet management practices. The three departments in our review have established policies to appropriately use vehicles and facilitate the removal of unnecessary vehicles. Nonetheless, our finding that millions of dollars are being spent on vehicles that are potentially underutilized indicates that action is needed. Because CBP does not consider DHS's criteria appropriate in all cases for its diverse fleet and does not individually justify vehicles, CBP management cannot determine which vehicles, if any, are being properly utilized by CBP staff, and the agency may be spending millions of dollars on vehicles that are not needed. While new telematics devices are capable of providing vehicle usage information, CBP risks losing the opportunity to use these new data to identify and remove underutilized vehicles because it has not developed a plan for how it will use the data to improve its utilization assessment processes, including processes for vehicles without telematics devices. Additionally, because key NRCS and USDA officials were unaware of USDA's policies on utilization and assessment, staff did not have the information necessary to guide retention decisions for vehicles that cost more than $750,000 annually. Awareness of this policy would provide important information to NRCS officials to determine which vehicles, if any, may be underutilized and could be removed from the fleet. It is important for departments to ensure that all of their fleet management staff, including those in each agency within a department, are aware of and comply with departmental policies.

To facilitate the removal of underutilized vehicles, we recommend that the Secretary of Homeland Security direct the Commissioner of Customs and Border Protection to develop a written plan for how CBP will use newly available usage data to improve its utilization assessment processes. Such a plan would define utilization criteria that reflect CBP's mission and describe how CBP will review and individually justify vehicles that do not meet the utilization criteria established by either DHS or CBP.
To enhance awareness of NRCS's utilization assessment process and facilitate the elimination of unnecessary vehicles, we recommend that the Secretary of Agriculture communicate USDA's policy on vehicle utilization to USDA's fleet management staff to ensure staff are aware of USDA policy. This communication could include redistributing the 2012 utilization policy memo.

We provided a draft of this report to the departments of Agriculture, Commerce, Defense, Energy, Homeland Security, Interior, Justice, and Veterans Affairs and to GSA for review and comment. The departments of Defense, Energy, Justice, and Veterans Affairs did not have comments. The departments of Agriculture, Commerce, Homeland Security, and the Interior, as well as GSA, provided technical comments, which we incorporated as appropriate. In its written comments, reproduced in appendix V, DHS stated that it concurred with our recommendation. A program analyst in the Office of the Chief Financial Officer provided emailed comments on behalf of USDA's Office of Procurement and Property Management. In these emailed comments, USDA did not agree or disagree with our findings but noted that the department will address the recommendation.

We are sending copies of this report to interested congressional committees; the Secretaries of the departments of Agriculture, Commerce, Defense, Energy, Homeland Security, Interior, Justice, and Veterans Affairs; and the Administrator of GSA. In addition, this report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

This report covers: (1) the types and locations of vehicles recently purchased across the federal government and the associated costs; (2) the extent to which selected agencies determine what vehicles are utilized; and (3) any challenges that selected agencies face in managing the costs of their owned vehicle fleets.

To determine the types, locations, and costs of vehicles recently purchased across the federal government, we analyzed purchase data from the General Services Administration's (GSA) ROADS database, the primary repository for information on vehicle purchase transactions. Specifically, we analyzed information on more than 64,500 passenger vehicles and light trucks purchased by federal agencies through GSA from fiscal year 2011 through fiscal year 2015 (the most recent, complete fiscal year at the time of the request), including information such as the quantity of vehicles purchased; the type of vehicle, such as a pickup truck or sedan; the agencies that acquired vehicles; the cost of the vehicles purchased; and the location where the purchased vehicles were delivered.

We also used the ROADS data to analyze the options that agencies purchased with these vehicles, as options provide additional insight into the characteristics of the vehicles purchased. Agencies selected 346 unique options for the passenger vehicles and light trucks purchased during this 5-year period. In order to describe the purpose of these options, we categorized the options into six groups. The first four groups were derived from the Federal Property Management Regulations (FPMR):

1. Safety: We defined this category as options that serve to prevent collisions, vehicle or other damage, injury, or theft of the vehicle. This includes protecting occupants in the event of a collision and supporting the recovery of a vehicle if it is stolen. Lane-departure warning systems and backup cameras are examples of options we placed in the safety category.

2. Economy: We defined this category as options that remove features or that reduce the size, power, or features to be included in a vehicle. Reducing the engine size or eliminating air conditioning, thus potentially reducing the vehicle's cost, are examples of options we placed in the economy category.

3. Efficiency: We defined this category as options that could increase the fuel efficiency of a vehicle (including supporting the use of alternative fuels) as well as options that facilitate vehicle maintenance and optimal operation (including reducing operating costs). An engine capable of running on compressed natural gas is an example of an option we placed in the efficiency category.

4. Suitability: We defined this category as options that could reasonably be determined to be necessary to accomplish an agency's mission. Pre-wired police equipment is an example of an option we placed in the suitability category.

We created two additional categories:

5. Administration: We defined this category as including options that were related to the purchase, delivery, or warranty of the vehicle (such as the option to send a vehicle overseas).

6. Undetermined: We used this category to group options that did not meet any of the five defined categories, including options with unclear descriptions.

We then conducted a content analysis to categorize all 346 options. Two analysts independently coded each of the options into the six categories and then met to discuss and resolve any coding discrepancies. After this initial categorization, 48 options were categorized as "undetermined." We subsequently discussed our categorizations with GSA officials and asked them for any examples of why agencies might select these undetermined options for any of the other five categories. After reviewing GSA's responses, we placed 38 of the previously undetermined options into one of the five other categories. The remaining 10 options were still categorized as "undetermined" because GSA's response did not provide assurance that the option belonged in one of the other five categories. For example, GSA replied that power seats could be selected for drivers with mobility impairments, but we found that agencies selected power seats over 5,000 times. Similarly, GSA suggested that remote keyless entry could be a safety feature for vehicles operated in extreme climates, but it is unclear what climates would constitute "extreme." When we randomly selected zip codes to determine where 10 of the vehicles with remote keyless entry were delivered, we found that they were delivered to Maryland, North Carolina, and Virginia, among other locations. We subsequently reported the cost and frequency of these remaining options, combining some similar options (e.g., leather seats and heated front leather seats), which resulted in the six options we analyzed.

We also analyzed information on the number of waivers that agencies submitted to GSA during this time frame in order to purchase vehicles from a non-GSA source.
We examined all 17 waivers that GSA approved from fiscal year 2013 through fiscal year 2015 that allowed an executive agency either to purchase one vehicle or to purchase executive vehicles. For each of these waivers, we requested purchase orders and other relevant information from the agencies that received them to determine what vehicles were purchased.

To determine how selected federal agencies identify what vehicles are utilized, we judgmentally selected three federal agencies for review: the U.S. Navy (Navy), the U.S. Department of Agriculture's Natural Resources Conservation Service (NRCS), and the U.S. Department of Homeland Security's Customs and Border Protection (CBP). We made our selection after considering the following criteria about their respective departments: among the largest owned fleets in fiscal year 2015, at least 10,000 vehicles not designated for law enforcement, and a majority of domestic vehicles in their overall fleet. We then selected the agencies from these three departments that reported among the largest numbers of domestic, non-law-enforcement passenger vehicles and light trucks in response to a GAO request for this information. We selected these fleets to broadly discuss the experiences and practices across a section of the federal fleet. These results are not generalizable to their overarching departments or other federal agencies. We reviewed the selected agencies' policies on utilization and interviewed officials.

To estimate the costs associated with potentially underutilized vehicles, we conducted a multi-step analytical process. First, we focused on a selected population of vehicles, which included light trucks and passenger vehicles, because these two categories comprise the majority of federally owned vehicle fleets (approximately 55 percent and 15 percent, respectively); vehicles that were still in the selected agencies' inventories as of November 2016; and vehicles that were acquired prior to fiscal year 2015, so that the agencies were fully accountable for the selected vehicles' utilization over the entire fiscal year 2015 period. We defined passenger vehicles and light trucks using vehicle descriptions in GSA's Federal Automotive Statistical Tool (FAST) database, as shown in table 8. We excluded tactical, law enforcement, and emergency-responder vehicles from the selected vehicle population, as well as vehicles located outside of the continental United States, due to the differences in reporting and management processes that can be associated with these characteristics. We also excluded vehicles that were procured through non-appropriated funds, such as user fees, because any savings associated with eliminating those vehicles would not accrue to the federal government.

We then requested data from the three selected agencies on all of the relevant vehicles in our defined population. After receiving the data, we conducted various diagnostic analyses to assess their reliability and performed logic procedures to address obvious data issues. For example, we examined VINs and acquisition cost values and removed any vehicles with errors from the population of analysis, removing approximately 12,900 vehicles in total across the three agencies. We received over 25,000 vehicle records from the three agencies that we reviewed for reliability and data issues, and 12,175 vehicle records met our selection criteria for analysis.
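The diagnostic and logic checks described above amount to simple record-level screens. The following Python sketch is illustrative only; the field names (vin, acquisition_cost) and the specific screening rules are assumptions chosen for demonstration, since this report does not publish the exact procedures we used.

```python
# Illustrative record-screening sketch; field names and rules are assumed,
# not taken from the actual diagnostic procedures.

def passes_diagnostics(record: dict) -> bool:
    """Return True if a vehicle record survives basic reliability checks."""
    vin = str(record.get("vin", ""))
    # Post-1981 VINs are 17 characters and never contain I, O, or Q.
    if len(vin) != 17 or any(c in "IOQ" for c in vin.upper()):
        return False
    # The acquisition cost must be present and positive to be usable.
    cost = record.get("acquisition_cost")
    if cost is None or cost <= 0:
        return False
    return True

records = [
    {"vin": "1FTFW1ET5DFC10312", "acquisition_cost": 25600},  # plausible record
    {"vin": "BADVIN", "acquisition_cost": 25600},             # malformed VIN
    {"vin": "1FTFW1ET5DFC10313", "acquisition_cost": -1},     # impossible cost
]
clean = [r for r in records if passes_diagnostics(r)]
print(f"{len(clean)} of {len(records)} records retained for analysis")
```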
In total, the selected vehicles from these agencies accounted for about 3 percent of the federally owned fleet. The findings from our analysis of these vehicle records are not generalizable to all vehicles at the agencies or to agencies beyond those we selected.

Next, we sent each selected agency a list of its selected vehicles and requested that it group the vehicles into one of the categories described below and depicted in figure 5, so that we could determine how many vehicles were utilized, underutilized, or of unknown utilization; a sketch of this grouping logic also follows the list below. Groups 3, 5, and 7 reflect shortcomings in agency efforts to identify and remove underutilized owned vehicles, and we focused on determining the costs associated with the vehicles in these groups.

Group 1: The vehicle is a law enforcement, emergency response, or tactical vehicle; is located outside of the United States; or is no longer in the agency's owned inventory. These vehicles are excluded from our population of analysis.

Group 2: The vehicle met at least one agency utilization criterion specifically defined in the agency's policy or guidance documents in fiscal year 2015. This group did not apply to agencies that do not have utilization criteria written in their policy.

Group 3: The agency has specific utilization criteria defined in agency policy or guidance documents but is unable to determine whether the vehicle met those criteria in fiscal year 2015. Example: the agency has mileage-based utilization criteria, but the mileage data are either missing or clearly incorrect (e.g., negative mileage).

Group 4: There is a written justification for the vehicle's retention that the agency considered valid in fiscal year 2015 in lieu of the vehicle's meeting criteria defined in agency policy or guidance. If the agency does not have utilization criteria, a vehicle needs a written retention justification to be placed in this category.

Group 5: The agency is unable to determine whether there is a written record verifying that, in fiscal year 2015, the vehicle's continued use was justified and approved.

Group 6: The vehicle did not meet any agency utilization criteria specifically defined in agency policy or guidance documents for fiscal year 2015 (or there are no criteria in agency policy); there was no written justification for the vehicle's retention that the agency considered valid in fiscal year 2015; and the vehicle was reassigned, repurposed, or given other tasks within the agency in fiscal year 2015 or 2016.

Group 7: The vehicle did not meet any agency utilization criteria specifically defined in agency policy or guidance documents for fiscal year 2015 (or there are no criteria in agency policy); there was no written justification for the vehicle's retention that the agency considered valid in fiscal year 2015; and the vehicle was not reassigned, repurposed, or given other tasks within the agency in fiscal year 2015 or 2016.

Agencies were responsible for categorizing each of the vehicles; we provided them with each vehicle's VIN, make, model, and other identifying information to assist in the process. We did not verify whether agencies categorized vehicles correctly, as some of the information necessary for these categorizations was contained within agency systems and records (for example, whether the vehicle met an agency's defined criteria or was repurposed).
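As a minimal sketch of this taxonomy, the decision logic can be written as a single function. The data model below (boolean flags on a small record type) is our own illustration for exposition; it is not how agencies actually recorded or reported this information.

```python
# Illustrative only: encodes the seven-group taxonomy described above.
from dataclasses import dataclass

@dataclass
class Vehicle:
    excluded: bool             # law enforcement/emergency/tactical, OCONUS, or no longer owned
    agency_has_criteria: bool  # agency policy defines utilization criteria
    met_criteria: bool         # vehicle met at least one criterion in FY2015
    criteria_data_usable: bool # data exist to test the vehicle against the criteria
    justified: bool            # valid written retention justification for FY2015
    justification_known: bool  # agency can determine whether a justification exists
    repurposed: bool           # reassigned/repurposed in FY2015 or FY2016

def assign_group(v: Vehicle) -> int:
    if v.excluded:
        return 1  # outside the population of analysis
    if v.agency_has_criteria and not v.criteria_data_usable:
        return 3  # cannot tell whether the vehicle met the criteria
    if v.agency_has_criteria and v.met_criteria:
        return 2  # met a defined utilization criterion
    if not v.justification_known:
        return 5  # cannot tell whether a written justification exists
    if v.justified:
        return 4  # retained on a valid written justification
    return 6 if v.repurposed else 7  # no criteria met, no justification

# Example: owned, criteria defined and testable but unmet, no justification,
# not repurposed -> group 7.
print(assign_group(Vehicle(False, True, False, True, False, True, False)))
```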
However, to evaluate the overall reliability of agencies' vehicle justification reporting, we selected a random sample of 20 vehicles from each agency that placed vehicles into group 4 and then requested the written justifications for those vehicles.

To determine the cost savings that could be achieved through the reduction of potentially underutilized vehicles (groups 3, 5, and 7), we first determined what factors drove costs for the selected agencies in managing their owned vehicles. After conducting agency interviews, we determined that the main drivers of cost for agencies were depreciation, maintenance, and fuel. While there are other drivers of cost, agency officials reported that they did not collect information on the indirect costs associated with owned vehicle fleets, such as fleet manager salaries or the costs to garage the vehicles. Although fuel is a main driver of cost, any reduction in fuel costs from removing underutilized owned vehicles would most likely be offset to a large extent by an increase in fuel costs for other vehicles in the agency's fleet in order to complete the agency's mission. Thus, we determined that the potential cost savings of underutilized vehicles would be estimated by determining the aggregate depreciation—which represents foregone cost avoidance—and maintenance costs for fiscal year 2015.

Agencies used different methods to calculate depreciation, so we used GSA's simplified straight-line depreciation method to calculate a consistent average annual depreciation cost per vehicle for each agency. We asked each agency to provide the average capitalized value, average salvage value, and average useful life (in years) for vehicles in its fleet. We then used these values to calculate the average annual depreciation per vehicle and multiplied that cost by the number of vehicles that were potentially underutilized. This calculation represented the total depreciation of all potentially underutilized vehicles in each agency's fleet for fiscal year 2015. However, because vehicles typically depreciate more during their first few years in operation, the straight-line method underestimates the actual loss of value for relatively new vehicles and overestimates the loss for vehicles nearing the end of their useful lives. The actual total cost savings—in the form of avoided loss of value—from removing vehicles is difficult to estimate because it depends on many factors specific to each individual vehicle, such as age and model, and on economic factors, such as the fluid market value for used vehicles.

To calculate the total maintenance cost for potentially underutilized vehicles, we asked the agencies to provide all maintenance transaction records by vehicle for fiscal year 2015. While NRCS and Navy could provide this information, CBP could provide only an unknown percentage of its total fleet maintenance transactions because approximately 60 percent of CBP's fleet has access to agency-owned maintenance garages, and CBP does not record asset-level maintenance transactions for those garages. To address this issue, we asked CBP to provide the total maintenance cost incurred by the agency in fiscal year 2015 as well as the total number of vehicles in its fleet for fiscal year 2015.
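Both cost estimates reduce to simple arithmetic, as the sketch below illustrates. Every dollar figure, fleet size, and vehicle count in it is a hypothetical placeholder rather than a value from our analysis.

```python
# Hypothetical figures throughout; this illustrates only the arithmetic
# described in the text, not the agencies' actual data.

def straight_line_annual_depreciation(capitalized: float, salvage: float,
                                      useful_life_years: float) -> float:
    """GSA's simplified straight-line method: equal loss of value each year."""
    return (capitalized - salvage) / useful_life_years

underutilized_count = 500  # hypothetical number of group 3/5/7 vehicles

# Depreciation: agency-wide averages applied to the underutilized count.
avg_depreciation = straight_line_annual_depreciation(
    capitalized=26_000, salvage=5_000, useful_life_years=7)  # $3,000/vehicle
total_depreciation = avg_depreciation * underutilized_count

# Maintenance, CBP-style: when per-vehicle records are unavailable, divide
# fleet-wide maintenance spending by fleet size for a per-vehicle average.
avg_maintenance = 25_000_000 / 25_000  # hypothetical: $1,000/vehicle
total_maintenance = avg_maintenance * underutilized_count

print(f"Estimated fiscal year 2015 cost of potentially underutilized "
      f"vehicles: ${total_depreciation + total_maintenance:,.0f}")
```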
After CBP provided this information, we calculated its average vehicle maintenance cost for fiscal year 2015 and multiplied this average by the number of potentially underutilized vehicles in its fleet. Although the average per-vehicle maintenance costs for NRCS vehicles (groups 5 and 7) are substantially lower than those for CBP, CBP's high maintenance costs are consistent with CBP officials' statements that their vehicles are driven in difficult terrain. Given the potential for harsh driving conditions, we determined that the calculated total costs are reliable for reporting purposes.

To gather information related to challenges agencies face in managing the costs of their owned vehicle fleets, we spoke with fleet management officials from our three selected agencies: Navy, CBP, and NRCS. We conducted two discussion groups with fleet management officials in October 2016. We recruited participants by requesting volunteers from each agency's fleet management pool and then judgmentally selected the eventual participants in order to achieve a balance of representation among all three of the selected agencies. A total of 17 fleet management officials participated in our discussions, with at least one representative from each agency in each group. We also conducted individual interviews with upper management officials from each of the three selected agencies to discuss the challenges agencies face in managing the costs of their fleets. Findings from our discussion groups and interviews, while illustrative, are not generalizable to the full population of fleet management officials at these three agencies or the federal government.

We conducted this performance audit from March 2016 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Customs and Border Protection (CBP), the Natural Resources Conservation Service (NRCS), and Navy have various processes to add a vehicle to their fleets or to replace an existing vehicle. Specifically:

Department of Homeland Security (DHS) fleet policy requires agency fleet managers to submit an annual Fleet Management Plan to the DHS Fleet Manager. According to DHS officials, upon approval by DHS, agencies may purchase vehicles based on available funding, up to the maximum number of vehicles approved. When ordering a replacement for an existing vehicle, CBP officials are to apply the agency's established vehicle replacement criteria (which include mileage and age minimums) and, according to officials, ensure that at least one of these standards is met before replacing a vehicle.

When ordering a vehicle, NRCS officials are to determine the appropriate type of vehicle to purchase within each state and request vehicle replacements or new acquisitions from a centralized office, called Personal Property Services (PPS). PPS is responsible for purchasing all vehicles once the projected acquisitions are approved by USDA headquarters.

According to Navy officials, Navy is subject to the Department of Defense's purchasing guidelines as well as its own policies. To purchase a new vehicle, users must submit a requirement form with a written justification for the vehicle.
The justification may include descriptions such as vehicle type, required mileage, and the anticipated number of times the vehicle will be used in a day. To order replacement vehicles, users must assess and justify the continued need for the vehicle being replaced.

As noted above, appendix III details the vehicles purchased with the 17 approved waivers we reviewed. These included:

Transport Truck (Mack)
Heavy Equipment Transport Truck (CT660S)
Truck (Ford F650 Crew Cab XLT 4x4)
Brush Fire Truck (Ford F550 Crew Cab 4x4 Truck, with bed conversion)
Heavy Haul Tractor (T-30)
SUV (Chevrolet Suburban)
Passenger Van (Chevrolet Express 2500)
Ambulance (PL Custom Classic 170 Dodge 4500 4WD)
(2) SUVs (Chevrolet 1500 Suburbans)
Dump Truck (Hook Lift with Flat Bed and 8 Yard Dump Box)
(2) Minivans (Dodge Grand Caravans)
Ambulance (Ford E450)
Truck (Toyota Tacoma, 4WD)
Dump Truck (Mack Model GU713)
Passenger Bus (Turtle Top Odyssey XLT with Freightliner M2)
SUV (Ford Explorer Limited)

In addition to the contact named above, John W. Shumann (Assistant Director), Alison Snyder (Analyst-in-Charge), Margaret Hettinger, Terence Lam, Jerome Sandau, Candace Silva-Martin, Michelle Weathers, Crystal Wesco, and Elizabeth Wood made key contributions to this report.

Federal agencies spent about $3.4 billion in fiscal year 2015 to keep and operate almost 450,000 federally owned vehicles. Each federal agency is responsible for determining utilization criteria and assessing vehicle utilization. GAO was asked to describe federally owned vehicles and examine federal processes for assessing their utilization. This report, among other objectives, (1) describes recently purchased vehicles and (2) assesses selected agencies' efforts to determine if vehicles are utilized. GAO analyzed government-wide data on approximately 64,500 light trucks and passenger vehicles purchased through GSA from fiscal years 2011 through 2015, the most recent available. To assess utilization efforts, GAO selected three agencies (using factors such as fleet size) and reviewed agency utilization information on over 12,000 owned vehicles from fiscal year 2015. GAO also interviewed federal officials. These findings are not generalizable to all agencies but provide insight into the practices of agencies that procure thousands of vehicles.

Federal agencies spent more than $1.6 billion to purchase approximately 64,500 passenger vehicles and light trucks through the General Services Administration (GSA) from fiscal years 2011 through 2015. Five departments—Defense (DOD), Homeland Security (DHS), Agriculture (USDA), Justice, and Interior—purchased 90 percent of these vehicles and spent a comparable percentage of the associated funds. The vehicles cost an average of approximately $25,600 each.

GAO determined that the three agencies reviewed—Navy within DOD, Customs and Border Protection (CBP) within DHS, and the Natural Resources Conservation Service (NRCS) within USDA—varied in their efforts to determine if vehicles were utilized in fiscal year 2015. Navy determined that all of the 3,652 vehicles GAO selected for review were utilized by applying DOD and Navy criteria, such as for mileage, and individually justifying vehicles. CBP did not determine if 1,862 (81 percent) of its 2,300 selected vehicles were utilized in fiscal year 2015 even though the vehicles did not meet DHS's minimum mileage criteria. CBP officials stated that, contrary to DHS policy, CBP did not have criteria to measure these vehicles' utilization because it was difficult to manually collect the data needed to establish appropriate criteria and assess if vehicles met those criteria.
CBP is currently installing devices in many of its vehicles that will allow it to more easily collect such data but lacks a specific plan for how to ensure these data will allow it to determine if vehicles are utilized. NRCS did not determine if 579 (9 percent) of its 6,223 selected vehicles were utilized in fiscal year 2015. USDA and NRCS fleet officials stated that the agency did not annually assess vehicle utilization, nor did it apply USDA criteria such as mileage or days used. USDA and NRCS officials said they were unaware of USDA's policy requiring these steps because the policy had not been widely discussed or shared within USDA since 2012.

CBP and NRCS cumulatively incurred an estimated $13.5 million in depreciation and maintenance costs in fiscal year 2015 for vehicles with unknown utilization (see table). While these costs may not equal the cost savings agencies would derive from eliminating underutilized vehicles, without corrective action, agencies are incurring expenses to retain vehicles without determining if they are utilized.

(a) Selected owned vehicles for each agency in GAO's review covered all passenger vehicles and light trucks, except those that were (1) emergency responder vehicles, (2) law enforcement vehicles, (3) tactical vehicles, or (4) located outside the continental United States, among other limited exclusions.

GAO recommends that CBP develop a plan for how it will use its new data collection devices to establish criteria and assess vehicle utilization and that USDA communicate its vehicle utilization policy to fleet officials. DHS and USDA plan to implement these recommendations.
DHS has made progress in implementing its acquisition, information technology, financial, and human capital management functions, but it continues to face obstacles and weaknesses in these functions that could hinder the department's transformation and implementation efforts. For example, DHS has faced challenges in implementing acquisition management controls, a consolidated financial management system, and a strategic human capital plan, among other things. As DHS continues to mature as an organization, it will be important that the department continue to work to strengthen its management functions, since the effectiveness of these functions affects its ability to fulfill its homeland security and other missions.

Acquisition management. While DHS has made recent progress in clarifying acquisition oversight processes, it continues to face obstacles in managing its acquisitions and ensuring proper implementation and departmentwide coordination. We previously reported that DHS faced challenges in acquisition management related to acquisition oversight, cost growth, and schedule delays. In June 2010, we reported that DHS continued to develop its acquisition oversight function and had begun to implement a revised acquisition management directive that includes more detailed guidance for programs to use when informing component and departmental decision making. We also reported that the senior-level Acquisition Review Board had begun to meet more frequently and provided programs decision memorandums with action items to improve performance. However, while the Acquisition Review Board reviewed 24 major acquisition programs in fiscal years 2008 and 2009, more than 40 major acquisition programs had not been reviewed, and programs had not consistently implemented action items identified as part of the reviews by established deadlines. DHS acquisition oversight officials raised concerns about the accuracy of cost estimates for some of its major programs, making it difficult to assess the significance of the cost growth we identified. In addition, over half of the programs we reviewed awarded contracts to initiate acquisition activities without component or department approval of documents essential to planning acquisitions, setting operational requirements, and establishing acquisition program baselines. Programs also experienced other acquisition planning challenges, such as staffing shortages and lack of sustainment. For example, we reported that the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program did not sufficiently define what capabilities and benefits would be delivered, by when, and at what cost, which contributed to development and deployment delays. In addition, we reported that three Coast Guard programs we reviewed—Maritime Patrol Aircraft, Response Boat-Medium, and Sentinel—reported placing orders for or receiving significant numbers of units prior to completing testing to demonstrate that what the programs were buying met Coast Guard needs. Our prior work has found that resolving problems discovered during testing can sometimes require costly redesign or rework.
We have made a number of recommendations to DHS to strengthen its acquisition management functions, such as (1) reinstating the Joint Requirements Council—the department's requirements review body—or establishing another departmental joint requirements oversight board to review and approve acquisition requirements and assess potential duplication of effort; (2) ensuring that budget decisions are informed by the results of investment reviews; (3) identifying and aligning sufficient management resources to implement oversight reviews throughout the investment life cycle; and (4) ensuring that major investments comply with established component and departmental review policy standards. DHS generally concurred with these recommendations and reported taking action to begin to address some of them, including developing the Next Generation Periodic Reporting System to capture and track key program information and monitoring cost and schedule performance, contract awards, and program risks.

Based on our work on DHS's acquisition management, we have identified specific actions and outcomes that we believe the department needs to achieve to address its acquisition management challenges. We believe that these actions and outcomes are critical to addressing the underlying root causes that have resulted in the high-risk designation. In particular, DHS should demonstrate and sustain effective execution of a knowledge-based acquisition process for new and legacy acquisition programs by, among other things, (1) validating required acquisition documents in a timely manner at each major milestone; (2) establishing and operating a Joint Requirements Council, or a similar body, to review and validate acquisition programs' requirements; (3) ensuring sufficient numbers of trained acquisition personnel at the department and component levels; and (4) establishing and demonstrating measurable progress in achieving goals that improve acquisition programs' compliance with departmental policies.

Information technology management. DHS has undertaken efforts to establish information technology management controls and capabilities, but in September 2009 we reported that DHS had made uneven progress in its efforts to institutionalize a framework of interrelated management controls and capabilities. For example, DHS had continued to issue annual updates to its enterprise architecture that added previously missing scope and depth, and further improvements were planned to incorporate the level of content, referred to as segment architectures, needed to effectively introduce new systems and modify existing ones. Also, we reported that DHS had redefined its acquisition and investment management policies, practices, and structures, including establishing a system life cycle management methodology, and that it had increased its acquisition workforce. Nevertheless, challenges remain relative to, for example, implementing the department's plan for strengthening its information technology human capital and fully defining key system investment and acquisition management policies and procedures for information technology. Moreover, DHS's implementation of these investment and acquisition management policies and practices on major information technology programs has been inconsistent. For example, our work showed that major information technology acquisition programs had not been subjected to executive-level acquisition and investment management reviews.
As a result, we reported that major information technology programs aimed at delivering important mission capabilities, such as the Rescue 21 search and rescue system and the Secure Border Initiative Network (SBInet) virtual border fence, had not lived up to their capability, benefit, cost, and schedule expectations because of, for example, deficiencies in development and testing and the lack of risk management processes and key practices for developing reliable cost and schedule estimates. We have made a range of recommendations to strengthen DHS information technology management, such as establishing procedures for implementing project-specific investment management policies as well as policies and procedures for portfolio-based investment management. We reported that while DHS and its components have made progress, more needs to be done before DHS can ensure that all system acquisitions are managed with the necessary rigor and discipline.

Based on our work, we have identified actions and outcomes that we believe would help the department address the challenges in information technology management that have contributed to our designation of DHS implementation and transformation as high risk. For example, DHS should, among other things, demonstrate measurable progress in implementing its information technology human capital plan and accomplishing defined outcomes, including ensuring that each system acquisition program office is sufficiently staffed. DHS should also establish and implement information technology investment management practices that have been independently assessed as satisfying the capabilities associated with stage three of our Information Technology Investment Management Framework. In addition, the department should enhance the security of its internal information technology systems and networks.

Financial management. DHS has made progress in addressing its financial management and internal control weaknesses but has not yet addressed all of them or developed a consolidated departmentwide financial management system. Since its establishment, DHS has been unable to obtain an unqualified audit opinion on its financial statements (i.e., to prepare a set of financial statements that are considered reliable). For fiscal year 2009, the independent auditor issued a disclaimer on DHS's financial statements and identified eight deficiencies in DHS's internal control over financial reporting, six of which were so significant that they qualified as material weaknesses. Until these weaknesses are resolved, DHS will not be in a position to provide reliable, timely, and useful financial data to support day-to-day decision making. DHS has taken steps to prepare and implement corrective action plans for its internal control weaknesses through the Internal Control Playbook, DHS's annual plan to design and implement departmentwide internal controls.

In addition, in June 2007 and December 2009 we reported on DHS's progress in developing a consolidated financial management system, called the Transformation and Systems Consolidation (TASC) program, and made a number of recommendations to help DHS address challenges affecting departmentwide financial management integration. In June 2007, we reported that DHS had made limited progress in integrating its existing financial management systems, and we made six recommendations focused on the need for DHS to define a departmentwide strategy and embrace the disciplined processes necessary to properly manage the specific projects.
We followed up on these recommendations in our December 2009 report and found that DHS had begun to take actions to implement four of our six 2007 recommendations but had not yet fully implemented any of them. Specifically, DHS had made progress in (1) defining its financial management strategy and plan, (2) developing a comprehensive concept of operations, (3) incorporating disciplined processes, and (4) implementing key human capital practices and plans for such a systems implementation effort. However, DHS had not yet taken the necessary actions to standardize and reengineer business processes across the department, including applicable internal controls, and to develop detailed consolidation and migration plans. While some of the details of the department's standardization of business processes and migration plans depend on the selected new financial management system, DHS would benefit from performing a gap analysis and identifying all of its affected current business processes so that it can analyze how closely the proposed system will meet the department's needs. In addition, we reported that DHS's reliance on contractors to define and implement the new financial management system, without the necessary oversight mechanisms to ensure that the processes were properly defined and effectively implemented, could result in system efforts plagued with serious performance and management problems. We reported that these issues placed DHS at risk of implementing a financial management system that does not meet cost, schedule, and performance goals.

We recommended that DHS establish contractor oversight mechanisms to monitor the TASC program; expedite the completion of the TASC financial management strategy and plan so that the department is well positioned to move forward with an integrated solution; and develop a human capital plan for the TASC program that identifies the skills needed for the acquisition and implementation of the new system. DHS agreed with our recommendations and described actions it had taken and planned to take to address them, noting, for example, the importance of being vigilant in its oversight of the program.

Based on our work on DHS's financial management, we have identified specific actions and outcomes that we believe the department needs to address to resolve its financial management challenges. Among other things, DHS should develop and implement a corrective action plan, with specific milestones and accountable officials, to address the weaknesses in systems, internal control, and business processes that impede the department's ability to integrate and transform its financial management. DHS should also sustain clean opinions on its departmentwide financial statements, adhere to financial system requirements in accordance with the Federal Financial Management Improvement Act of 1996, and have independent auditors report annually on compliance with the act. In addition, DHS should establish contractor oversight mechanisms to monitor the contractor selected to implement TASC and successfully deploy TASC to the majority of DHS's components, such as the Coast Guard, the Federal Emergency Management Agency, and the Transportation Security Administration.

Human capital management.
DHS has issued various strategies and plans for its human capital activities and functions, such as a human capital strategic plan for fiscal years 2009-2013 that identifies four strategic goals for the department related to talent acquisition and retention; diversity; employee learning and development; and policies, programs, and practices. DHS is planning to issue an updated strategic human capital plan in the coming months. While these initiatives are promising, DHS has faced challenges in implementing its human capital functions. For example, our prior work suggests that successful organizations empower and involve their employees to gain insights about operations from a frontline perspective, increase their understanding and acceptance of organizational goals and objectives, and improve motivation and morale. DHS’s scores on the 2008 Office of Personnel Management’s Federal Human Capital Survey—a tool that measures employees’ perceptions of whether and to what extent conditions characterizing successful organizations are present in their agency—and the Partnership for Public Service’s 2010 rankings of the Best Places to Work in the Federal Government improved from prior years. However, in the 2008 survey, DHS’s percentage of positive responses was 52 percent for the leadership and knowledge management index, 46 percent for the results-oriented performance culture index, 53 percent for the talent management index, and 63 percent for the job satisfaction index. In addition, in 2010, DHS was ranked 28 out of 32 agencies in the Best Places to Work ranking on overall scores for employee satisfaction and commitment. In addition, our prior work has identified several workforce barriers to achieving equal employment opportunities and the identification of foreign language needs and capabilities at DHS. In August 2009 we reported that DHS had developed a diversity council, among other initiatives, but that DHS had generally relied on workforce data and had not regularly included employee input from available sources to identify triggers to barriers to equal employment opportunities, such as promotion and separation rates. We also reported that, according to DHS, it had created planned activities to address these barriers, but modified target completion dates by up to 21 months and had not completed any planned activities due to staffing shortages. In June 2010 we reported on DHS’s foreign language capabilities, noting that DHS has taken limited actions to assess its foreign language needs and existing capabilities and to identify potential shortfalls. Assessing hiring needs is crucial in achieving a range of component and departmentwide missions. As just one example, employees with documented proficiency in a variety of languages can contribute to U.S. Immigration and Customs Enforcement’s intelligence and direct law enforcement operations, but staff with these capabilities are not systematically identified. We have made several recommendations to help DHS address weaknesses concerning equal employment opportunity and assessments of foreign language needs and capabilities within human capital management. For example, we recommended that DHS identify timelines and critical phases along with interim milestones as well as incorporate employee input in identifying potential barriers to equal employment opportunities. 
DHS concurred with our recommendations and reported taking action to address them, such as revising plans to identify steps and milestones for departmental activities to address barriers to equal employment opportunities, and developing a strategy for obtaining departmentwide employee input. We also recommended that DHS comprehensively assess its foreign language needs and capabilities and identify potential shortfalls. DHS concurred with our recommendations and reported taking actions to address them, such as developing a task force consisting of DHS components and offices that have language needs in order to identify requirements and assess the necessary skills. Based on our work on human capital management at the department, we have identified actions and outcomes that we believe DHS needs to achieve to address the human capital management challenges that have contributed to our designation of DHS implementation and transformation as high risk. The department should, among other things, develop and implement a results-oriented strategic human capital plan that identifies the department's goals, objectives, and performance measures for strategic human capital management and that is linked to the department's overall strategic plan. DHS also needs to link workforce planning efforts to strategic and program-specific planning efforts to identify current and future human capital needs, and improve its scores on the Federal Employee Viewpoint Survey. In addition, DHS should develop and implement mechanisms to assess and provide opportunities for employee education and training, and develop and implement a recruiting and hiring strategy that is targeted to fill specific needs. DHS has taken actions to integrate its management functions and to strengthen its performance measures to assess progress in implementing these functions, but the department has faced challenges in these efforts. We have reported that while it is important that DHS continue to work to implement and strengthen its management functions, it is equally important that DHS address management integration and performance measurement from a comprehensive, departmentwide perspective to help ensure that the department has the structure, processes, and accountability mechanisms in place to effectively monitor the progress made to address the threats and vulnerabilities that face the nation. Management integration and performance measurement are critical to the successful implementation and transformation of the department. Management integration. DHS has put in place common policies, procedures, and systems within individual management functions, such as human capital, that help to vertically integrate its component agencies. However, DHS has placed less emphasis on integrating horizontally, that is, bringing together its management functions across the department through consolidated management processes and systems. In November 2009, we reported that DHS had not yet developed a strategy for management integration, as required by the 9/11 Commission Act and with the characteristics we recommended in our 2005 report.
Specifically, we recommended that the strategy (1) look across the initiatives within each of the management functional units, (2) clearly identify the critical links that must occur among these initiatives, (3) identify trade-offs and set priorities, (4) set implementation goals and a time line to monitor the progress of these initiatives to ensure the necessary links occur when needed, and (5) identify potential efficiencies and ensure that they are achieved. DHS officials stated that, in the absence of a management integration strategy, documents such as management directives and strategic plans addressed aspects of a management integration strategy and could help the department manage its integration efforts. However, we reported that without a documented management integration strategy, it was difficult for DHS, Congress, and other key stakeholders to understand and monitor the critical linkages and prioritization among these various efforts. We also reported that while DHS increased the number of performance measures for its Management Directorate, it had not yet established measures for assessing management integration across the department. We reported that without these measures DHS could not assess its progress in implementing and achieving management integration. We recommended that once a management integration strategy was developed, DHS establish performance measures for assessing management integration. DHS stated that the department was taking actions to address our recommendation. Since our November 2009 report, DHS has taken action to develop a management integration strategy. Specifically, DHS developed and provided us with an initial management integration plan in February 2010. The initial plan identified seven priority initiatives for achieving management integration:

Enterprise governance. A governance model that would allow DHS to implement mechanisms for integrated management of DHS programs as parts of broader portfolios of related activities.

Balanced workforce strategy. Workforce planning efforts to identify the proper balance of federal employees and private labor resources to achieve the department's mission.

TASC. DHS initiative to consolidate financial, acquisition, and asset management systems, establish a single line of accounting, and standardize business processes.

DHS headquarters consolidation. The collocation of the department by combining existing department and component leases and building out the St. Elizabeths campus in Washington, D.C.

Human resources information technology. Initiative to consolidate, replace, and modernize existing departmental and component payroll and personnel systems.

Data center migration. Initiative to move DHS component agencies' data systems from the agencies' multiple existing data centers to two DHS consolidated centers.

Homeland Security Presidential Directive 12 personal identification verification cards deployment. Provision of cards to DHS employees and contractors for use to access secure facilities, communications, and data.

This initial management integration plan contained individual action plans for each of the seven initiatives. In March 2010, we met with DHS officials and provided oral and written feedback on the initial plan.
We noted that, for example:

the action plans lacked details on how the seven initiatives contribute to departmentwide management integration and links to the department's overall strategy for transformation;

the performance measures contained in the plans did not identify units of measure, baseline measurements, or target metrics that would be used to measure progress;

the impediments and barriers described in the plans did not align with identified risks and the strategies for addressing these impediments and barriers; and

the plans did not identify planned resources for carrying out these initiatives.

DHS officials told us the department is working to enhance its initial management integration plan to include a framework for strengthening the department's acquisition management. We plan to review the changes DHS is making to the initial management integration plan as part of our work for the 2011 high-risk update. Based on our work and recommendations on management integration, we have identified specific actions and outcomes for DHS that we believe will help the department address those management integration challenges that contributed to our designation of DHS implementation and transformation as high risk. Specifically, we believe that addressing these actions and outcomes within the individual management functional areas of acquisition, information technology, financial, and human capital management would help DHS to integrate those functions. For example, to successfully implement the TASC program, the Chief Financial Officer would need to work with the Chief Procurement Officer to establish effective mechanisms for overseeing the contractor selected to implement the TASC program; the Chief Information Officer to ensure that data conversions and system interfaces occur when required; and the Chief Human Capital Officer to ensure that relevant personnel at the department and component levels are trained on use of the TASC program once the system is implemented. In addition, DHS should revise its strategy for management integration to address the characteristics for such a strategy that we recommended in 2005. Performance measurement. DHS has not yet fully developed performance measures or put into place structures and processes to help ensure that the agency is managing for results. Performance measurement underpins DHS's efforts to assess progress in strengthening programs and operations and in implementing corrective actions to integrate and strengthen management functions. DHS has developed performance goals and measures for its programs and reports on these goals and measures in its Annual Performance Report. However, DHS's offices and components have not yet developed outcome-based performance measures to monitor, assess, and independently evaluate the effectiveness of their plans and performance. We have reported that the lack of outcome goals and measures hinders the department's ability to effectively assess the results of program efforts and whether the department is using its resources efficiently. Over the past 2 years, we have worked with DHS to provide feedback on the department's Government Performance and Results Act (GPRA) performance goals and measures through meetings with officials from the department and its offices and components.
Our feedback has ranged from pointing out components' limited use of outcome-oriented performance measures to assess the results or effectiveness of programs to raising questions about the steps taken by DHS or its components to ensure the reliability and verification of performance data. In response to this feedback and its own internal review efforts, DHS took action to develop and revise its GPRA performance goals and measures for some areas in an effort to strengthen its ability to assess its outcomes and progress in key management and mission areas. For example, from fiscal year 2008 to 2009, DHS reported adding 58 new measures, retiring 18 measures, and making description improvements to 67 existing performance measures. From fiscal year 2009 to 2010, DHS reported adding 32 new performance measures, retiring 24 measures, and making description improvements to 37 existing performance measures. DHS is continuing to work on developing and revising its performance measures to improve its focus on assessing results and outcomes and to align its measures to the goals and objectives established by the Quadrennial Homeland Security Review. In August and September 2010, we provided feedback on the department's proposals for outcome-oriented performance measures aligned with the Quadrennial Homeland Security Review's goals and objectives. We look forward to continuing to work with the department to provide feedback to help strengthen its ability to assess the outcomes of its efforts. Since we first designated the implementation and transformation of DHS as high risk in 2003, the department has made progress in its transformation efforts in relation to the five criteria we established in November 2000 for removing agencies from the high-risk list, but it has not yet fully addressed its transformation, management, and mission challenges, such as implementing effective management policies and deploying capabilities to secure the border and other sectors. In January 2009, we reported that DHS had developed its Integrated Strategy for High Risk Management outlining the department's overall approach for managing its high-risk areas and the department's processes for assessing risks and proposing initiatives and corrective actions to address its risks and challenges. We also reported that DHS had developed corrective action plans to address challenges in the areas of acquisition, financial, human capital, and information technology management. The corrective action plans addressed some, but not all, of the factors we consider in determining whether agencies can be removed from our high-risk list. Specifically, the strategy and corrective action plans identified senior officials with the responsibility for managing DHS's transformation high-risk area and for implementing the corrective action plans. The strategy and plans defined some root causes for problems within management areas, identified initiatives and corrective actions to address the causes, and established milestones for completing initiatives and actions, though we noted that these elements could have been better defined to, for example, more clearly address the management challenges we have identified.
The strategy also included a framework for DHS to monitor the implementation of its corrective action plans, primarily through various departmentwide committees. However, we reported that the strategy and corrective action plans did not contain measures to gauge the department's progress and performance in implementing corrective actions, or identify the resources needed by DHS for carrying out the corrective actions identified. The strategy and corrective action plans consistently cited limited resources as a challenge or constraint in implementing corrective actions. Further, we reported that required elements in the strategy and corrective action plans could be strengthened or clarified, including linking initiatives and corrective actions in the corrective action plans to root causes and milestones. In addition, we reported that while DHS had developed a framework for monitoring progress, the department had just begun to implement its corrective action plans. We recommended that, for DHS to successfully transform into a more effective organization, it needed to (1) revise its Integrated Strategy for High Risk Management and related corrective action plans to better define root causes, include resources required to implement corrective actions, and identify key performance measures to gauge progress; and (2) continue to identify, refine, and implement corrective actions to improve management functions and address challenges. We have identified and communicated to DHS specific actions and outcomes that we believe the department needs to address within each of its management areas and for management integration. We believe that these actions and outcomes will help DHS address our high-risk criteria by, among other things, identifying root causes for problems within each management area, developing and implementing corrective actions to address those root causes, and demonstrating measurable, sustainable progress in implementing the corrective actions. Since our 2009 high-risk update, DHS has taken actions to address the high-risk designation. For example, DHS and GAO have held regular, joint meetings, including periodic meetings that also involve Office of Management and Budget officials, to discuss the department's progress in addressing the high-risk designation and its overall transformation efforts. DHS and GAO have also discussed the department's planned revisions to its Integrated Strategy for High Risk Management and corrective action plans for its management areas. However, as of September 2010, DHS has not yet provided us with an updated strategy or corrective action plans to address the high-risk designation, as promised. DHS officials told us that the department is currently revising its strategy and will provide us with the updated strategy in the coming months. We will continue to assess DHS's implementation and transformation efforts, including any updated strategy and corrective action plans, as part of our work for the 2011 high-risk update, which we plan to issue in January 2011. This concludes my prepared testimony. I would be happy to respond to any questions that members of the Subcommittee may have. For questions regarding this testimony, please contact Cathleen A. Berrick, Managing Director, Homeland Security and Justice, at (202) 512-3404 or [email protected], or David C. Maurer, Director, Homeland Security and Justice, at (202) 512-9627 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this statement were Rebecca Gambler, Assistant Director; Minty Abraham; Labony Chakraborty; Tara Jayant; Thomas Lombardi; Emily Suarez-Harris; and Juan Tapia-Videla.

Department of Homeland Security: Assessments of Selected Complex Acquisitions, GAO-10-588SP (Washington, D.C.: June 30, 2010).

Department of Homeland Security: DHS Needs to Comprehensively Assess Its Foreign Language Needs and Capabilities and Identify Shortfalls, GAO-10-714 (Washington, D.C.: June 22, 2010).

Department of Homeland Security: A Comprehensive Strategy Is Still Needed to Achieve Management Integration Departmentwide, GAO-10-318T (Washington, D.C.: Dec. 15, 2009).

Financial Management Systems: DHS Faces Challenges to Successfully Consolidating Its Existing Disparate Systems, GAO-10-76 (Washington, D.C.: Dec. 4, 2009).

Department of Homeland Security: Actions Taken Toward Management Integration, but a Comprehensive Strategy Is Still Needed, GAO-10-131 (Washington, D.C.: Nov. 20, 2009).

Homeland Security: Despite Progress, DHS Continues to Be Challenged in Managing Its Multi-Billion Dollar Investment in Large-Scale Information Technology Systems, GAO-09-1002T (Washington, D.C.: Sept. 15, 2009).

Equal Opportunity Employment: DHS Has Opportunities to Better Identify and Address Barriers to EEO in Its Workforce, GAO-09-639 (Washington, D.C.: Aug. 31, 2009).

High-Risk Series: An Update, GAO-09-271 (Washington, D.C.: Jan. 2009).

Department of Homeland Security: Progress Made in Implementation of Management Functions, but More Work Remains, GAO-08-646T (Washington, D.C.: Apr. 9, 2008).

Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions, GAO-07-454 (Washington, D.C.: Aug. 17, 2007).

Homeland Security: Departmentwide Integrated Financial Management Systems Remain a Challenge, GAO-07-536 (Washington, D.C.: June 21, 2007).

Department of Homeland Security: A Comprehensive and Sustained Approach Needed to Achieve Management Integration, GAO-05-139 (Washington, D.C.: Mar. 16, 2005).

Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations, GAO-03-669 (Washington, D.C.: July 2, 2003).

Determining Performance and Accountability Challenges and High Risks, GAO-01-159SP (Washington, D.C.: Nov. 2000).

Since 2003, GAO has designated implementing and transforming the Department of Homeland Security (DHS) as high risk because DHS had to transform 22 agencies—several with significant management challenges—into one department, and failure to effectively address its mission and management risks could have serious consequences for national and economic security. This high-risk area includes challenges in management functional areas, including acquisition, information technology, financial, and human capital management; the impact of those challenges on mission implementation; and management integration.
GAO has reported that DHS's transformation is a significant effort that will take years to achieve. This testimony discusses DHS's progress and actions remaining in (1) implementing its management functions; (2) integrating those functions and strengthening performance measurement; and (3) addressing GAO's high-risk designation. This testimony is based on GAO's prior reports on DHS transformation and management issues and updated information on these issues obtained from December 2009 through September 2010. DHS has made progress in implementing its management functions, but additional actions are needed to strengthen DHS's efforts in these areas. (1) DHS has revised its acquisition management oversight policies, and its senior-level Acquisition Review Board reviewed 24 major acquisition programs in fiscal years 2008 and 2009. However, more than 40 major programs had not been reviewed, and DHS does not yet have accurate cost estimates for most of its major programs. (2) DHS has undertaken efforts to establish information technology management controls and capabilities, but its progress has been uneven, and major information technology programs, such as the SBInet virtual fence, have not met capability, benefit, cost, and schedule expectations. (3) DHS has developed corrective action plans to address its financial management weaknesses. However, DHS has been unable to obtain an unqualified audit opinion on its financial statements, and for fiscal year 2009, the independent auditor identified six material weaknesses in DHS's internal controls. Further, DHS has not yet implemented a consolidated departmentwide financial management system. (4) DHS has issued plans for strategic human capital management and employee development. Further, its scores on the Partnership for Public Service's 2010 rankings of the Best Places to Work in the Federal Government improved from prior years, yet DHS was ranked 28 out of 32 agencies on scores for employee satisfaction and commitment. DHS has also taken action to integrate its management functions by, for example, establishing common policies within management functions. The Implementing Recommendations of the 9/11 Commission Act of 2007 required DHS to develop a strategy for management integration. In a 2005 report, GAO recommended that a management integration strategy contain priorities and goals. DHS developed an initial plan in February 2010 that identified seven initiatives for achieving management integration. While a step in the right direction, the plan lacked details on, among other things, how the initiatives contributed to departmentwide management integration. DHS is working to enhance its management integration plan, which GAO will review as part of the 2011 high-risk update. DHS also has not yet developed performance measures to fully assess its progress in integrating management functions. Since GAO first designated DHS's transformation as high risk, DHS has made progress in transforming into a fully functioning department. However, it has not yet fully addressed its transformation, management, and mission challenges, such as implementing effective management policies and deploying capabilities to secure the border and other sectors. In 2009 GAO reported that DHS had developed a strategy for managing its high-risk areas and corrective action plans to address its management challenges.
While these documents identified some root causes and corrective actions, GAO reported that they could be improved by DHS identifying resources needed for implementing corrective actions and measures for assessing progress. This testimony contains no new recommendations. GAO has made over 100 recommendations to DHS since 2003 to strengthen its management and integration efforts. DHS has implemented many of these recommendations and is in the process of implementing others.
Given the consequences of a severe influenza pandemic, in 2006 we developed a strategy for our work that would help support Congress's decision making and oversight related to pandemic planning. Our strategy was built on a large body of work spanning two decades, including reviews of government responses to prior disasters such as Hurricanes Andrew and Katrina, the devastation caused by the 9/11 terror attacks, efforts to address the Year 2000 (Y2K) computer challenges, and assessments of public health capacities in the face of bioterrorism and emerging infectious diseases such as Severe Acute Respiratory Syndrome (SARS). The strategy was built around six key themes, as shown in figure 1. While all of these themes are interrelated, our earlier work underscored the importance of leadership, authority, and coordination, a theme that touches on all aspects of preparing for, responding to, and recovering from an influenza pandemic. An influenza pandemic—caused by a novel strain of influenza virus to which there is little resistance and which is therefore highly transmissible among humans—continues to be a real and significant threat facing the United States and the world. Unlike incidents that are discretely bounded in space or time (e.g., most natural or man-made disasters), an influenza pandemic is not a singular event, but is likely to come in waves, each lasting weeks or months, and pass through communities of all sizes across the nation and the world simultaneously. The current H1N1 pandemic seems to be relatively mild, although widespread, but the history of influenza pandemics suggests that the virus could return in a second wave this fall or winter in a more virulent form. While a pandemic will not directly damage physical infrastructure such as power lines or computer systems, it threatens the operation of critical systems by potentially removing from the workplace, for weeks or months, the essential personnel needed to operate them. In a severe pandemic, absences attributable to illnesses, the need to care for ill family members, and fear of infection may, according to the Centers for Disease Control and Prevention (CDC), reach a projected 40 percent during the peak weeks of a community outbreak, with lower rates of absence during the weeks before and after the peak. In addition, an influenza pandemic could result in 200,000 to 2 million deaths in the United States, depending on its severity. The President's Homeland Security Council (HSC) took an active approach to this potential disaster by, among other things, issuing the National Strategy for Pandemic Influenza (National Pandemic Strategy) in November 2005 and the National Pandemic Implementation Plan in May 2006. The National Pandemic Strategy is intended to provide a high-level overview of the approach that the federal government will take to prepare for and respond to an influenza pandemic. It also provides expectations for nonfederal entities—including state, local, and tribal governments; the private sector; international partners; and individuals—to prepare themselves and their communities. The National Pandemic Implementation Plan is intended to lay out broad implementation requirements and responsibilities among the appropriate federal agencies and clearly define expectations for nonfederal entities. The Plan contains 324 action items related to these requirements, responsibilities, and expectations, most of which were to be completed before or by May 2009.
HSC publicly reported on the status of the action items that were to be completed within 6 months, 1 year, and 2 years in December 2006, July 2007, and October 2008, respectively. HSC indicated in its October 2008 progress report that 75 percent of the action items had been completed. We have ongoing work for this committee assessing the status of implementing this plan, which we expect to report on in the fall of 2009. Federal government leadership roles and responsibilities for pandemic preparedness and response are evolving and will require further testing before the relationships among the many federal leadership positions are well understood. Such clarity in leadership is even more crucial now, given the change in administration and the associated transition of senior federal officials. Most of these federal leadership roles involve shared responsibilities between the Department of Health and Human Services (HHS) and the Department of Homeland Security (DHS), and it is not clear how these would work in practice. According to the National Pandemic Strategy and Plan, the Secretary of Health and Human Services is to lead the federal medical response to a pandemic, and the Secretary of Homeland Security is to lead overall domestic incident management and federal coordination. In addition, under the Post-Katrina Emergency Management Reform Act of 2006, the Administrator of the Federal Emergency Management Agency (FEMA) was designated as the principal domestic emergency management advisor to the President, the HSC, and the Secretary of Homeland Security, adding further complexity to the leadership structure in the case of a pandemic. To assist in planning and coordinating efforts to respond to a pandemic, in December 2006 the Secretary of Homeland Security predesignated a national Principal Federal Official (PFO) for influenza pandemic and established five pandemic regions, each with a regional PFO and Federal Coordinating Officers (FCOs) for influenza pandemic. PFOs are responsible for facilitating federal domestic incident planning and coordination, and FCOs are responsible for coordinating federal resource support in a presidentially declared major disaster or emergency. However, the relationship of these roles to each other, as well as to other leadership roles in a pandemic, is unclear. Moreover, as we testified in July 2007, state and local first responders were still uncertain about the need for both FCOs and PFOs and how they would work together in disaster response. Accordingly, we recommended in our August 2007 report on federal leadership roles and the National Pandemic Strategy that DHS and HHS develop rigorous testing, training, and exercises for an influenza pandemic to ensure that federal leadership roles and responsibilities for a pandemic are clearly defined and understood and that leaders are able to effectively execute shared responsibilities to address emerging challenges. In response to our recommendation, HHS and DHS officials stated in January 2009 that several influenza pandemic exercises had been conducted since November 2007 that involved both agencies and other federal officials, but it is unclear whether these exercises rigorously tested federal leadership roles in a pandemic.
In addition to concerns about clarifying federal roles and responsibilities for a pandemic and how shared leadership roles would work in practice, private sector officials told us that they are unclear about the respective roles and responsibilities of the federal and state governments during a pandemic emergency. The National Pandemic Implementation Plan states that in the event of an influenza pandemic, the distributed nature and sheer burden of the disease across the nation would mean that the federal government's support to any particular community is likely to be limited, with the primary response to a pandemic coming from states and local communities. Further, federal and private sector representatives we interviewed at the time of our October 2007 report identified several key challenges they face in coordinating federal and private sector efforts to protect the nation's critical infrastructure in the event of an influenza pandemic. One of these was a lack of clarity regarding the roles and responsibilities of federal and state governments on issues such as state border closures and influenza pandemic vaccine distribution. Coordination mechanisms. Mechanisms and networks for collaboration and coordination on pandemic preparedness between federal and state governments and the private sector exist, but they could be better utilized. In some instances, the federal and private sectors are working together through a set of coordinating councils, including sector-specific and cross-sector councils. To help protect the nation's critical infrastructure, DHS created these coordinating councils as the primary means of coordinating government and private sector efforts for industry sectors such as energy, food and agriculture, telecommunications, transportation, and water. Our October 2007 report found that DHS had used these critical infrastructure coordinating councils primarily to share pandemic information across sectors and government levels rather than to address many of the challenges identified by sector representatives, such as clarifying the roles and responsibilities of federal and state governments. We recommended in the October 2007 report that DHS encourage the councils to consider and address the range of coordination challenges in a potential influenza pandemic between the public and private sectors for critical infrastructure. DHS concurred with our recommendation, and DHS officials informed us at the time of our February 2009 report that the department was working on initiatives to address it, such as developing pandemic contingency plan guidance tailored to each of the critical infrastructure sectors, and holding a series of "webinars" with a number of the sectors. Federal executive boards (FEBs) bring together federal agency and community leaders in major metropolitan areas outside of Washington, D.C., to discuss issues of common interest, including an influenza pandemic. The Office of Personnel Management (OPM), which provides direction to the FEBs, and the FEBs have designated emergency preparedness, security, and safety as an FEB core function. The FEBs' emergency support role, with its regional focus, may make the boards a valuable asset in pandemic preparedness and response.
As a natural outgrowth of their general civic activities and through activities such as hosting emergency preparedness training, some of the boards have established relationships with, for example, federal, state, and local governments; emergency management officials; first responders; and health officials in their communities. In a May 2007 report on the FEBs' ability to contribute to emergency operations, we found that many of the selected FEBs included in our review were building capacity for influenza pandemic response within their member agencies and community organizations by hosting influenza pandemic training and exercises. We recommended that, since FEBs are well positioned within local communities to bring together federal agency and community leaders, the Director of OPM work with FEMA to formally define the FEBs' role in emergency planning and response. As a result of our recommendation, FEBs were included in the National Response Framework (NRF) in January 2008 as one of the regional support structures that have the potential to contribute to the development of situational awareness during an emergency. OPM and FEMA also signed a memorandum of understanding in August 2008 in which FEBs and FEMA agreed to work collaboratively in carrying out their respective roles in the promotion of the national emergency response system. International disease surveillance and detection efforts serve as an early warning system that could prevent the spread of an influenza pandemic outbreak. The United States and its international partners are involved in efforts to improve pandemic surveillance, including diagnostic capabilities, so that outbreaks can be quickly detected. Yet, as we reported in 2007, international capacity for surveillance has many weaknesses, particularly in developing countries. As a result, assessments of the risks of the emergence of an influenza pandemic by U.S. agencies and international organizations, which were used to target assistance to countries at risk, were based on insufficiently detailed or incomplete information, limiting their value for comprehensive comparisons of risk levels by country. The National Pandemic Strategy and National Pandemic Implementation Plan are important first steps in guiding national preparedness. However, important gaps exist that could hinder the ability of key stakeholders to effectively execute their responsibilities. In our August 2007 report on the National Pandemic Strategy and Implementation Plan, we found that while these documents are an important first step in guiding national preparedness, they do not fully address all six characteristics of an effective national strategy, as identified in our work. The documents fully address only one of the six characteristics, by reflecting a clear description and understanding of the problems to be addressed. Further, the National Pandemic Strategy and Implementation Plan do not address one characteristic at all, containing no discussion of what implementation will cost, where resources will be targeted to achieve the maximum benefits, and how the strategy will balance benefits, risks, and costs. Moreover, the documents do not provide a picture of priorities or how adjustments might be made in view of resource constraints. Although the remaining four characteristics are partially addressed, important gaps exist that could hinder the ability of key stakeholders to effectively execute their responsibilities.
For example, state and local jurisdictions that will play crucial roles in preparing for and responding to a pandemic were not directly involved in developing the National Pandemic Implementation Plan, even though it relies on these stakeholders' efforts. Stakeholder involvement during the planning process is important to ensure that the federal government's and nonfederal entities' responsibilities are clearly understood and agreed upon. Further, relationships and priorities among actions were not clearly described, performance measures were not always linked to results, and insufficient information was provided about how the documents are integrated with other response-related plans, such as the NRF. We recommended that the HSC establish a process for updating the National Pandemic Implementation Plan and that the updated plan address these and other gaps. HSC did not comment on our recommendation and has not indicated whether it plans to implement it. The National Pandemic Implementation Plan required federal agencies to develop operational plans for protecting their employees and maintaining essential operations and services in the event of a pandemic. In our June 2009 report, we found that federal agencies' progress in pandemic planning was uneven. We surveyed the pandemic coordinators from the 24 agencies covered by the Chief Financial Officers Act of 1990, which we supplemented with a case study approach at 3 agencies. We used the survey to get an overview of governmentwide pandemic influenza preparedness efforts. The survey questions asked about pandemic plans; essential functions other than first response that employees cannot perform remotely; protective measures, such as procuring pharmaceutical interventions; social distancing strategies; information technology testing; and communication of human capital pandemic policies. Although all of the surveyed agencies reported being engaged in planning for pandemic influenza to some degree, several agencies reported that they were still in the early stages of developing their pandemic plans and their measures to protect their workforce. For example, several agencies responded that they had yet to identify essential functions during a pandemic that cannot be performed remotely. In addition, although many of the agencies' pandemic plans rely on telework to carry out their functions, 5 agencies reported testing their information technology capability to little or no extent. The three case study agencies also showed differences in the degree to which their individual facilities had operational pandemic plans. The Bureau of Prisons had only recently required its correctional facilities to develop pandemic plans. The Department of the Treasury's Financial Management Service, which has production staff involved in disbursing federal payments such as Social Security checks, had pandemic plans for its four regional centers and had stockpiled personal protective equipment. By contrast, the Federal Aviation Administration's air traffic control management facilities, where air traffic controllers work, had not yet developed facility pandemic plans or incorporated pandemic plans into their all-hazards contingency plans. We reported in June 2008 that, according to CDC, all 50 states and the 3 localities that received federal pandemic funds had developed influenza pandemic plans and conducted pandemic exercises in accordance with federal funding guidance.
A portion of the $5.62 billion that Congress appropriated in supplemental funding to HHS for pandemic preparedness in 2006—$600 million—was specifically provided for state and local planning and exercising. All 10 localities that we reviewed in depth had also developed plans and conducted exercises, and had incorporated lessons learned from pandemic exercises into their planning. However, an HHS-led interagency assessment of states' plans found that, on average, states had "many major gaps" in their influenza pandemic plans in 16 of 22 priority areas, such as school closure policies and community containment, which are community-level interventions designed to reduce the transmission of a pandemic virus. The remaining 6 priority areas were rated as having "a few major gaps." Subsequently, HHS led another interagency assessment of state influenza pandemic plans and reported in January 2009 that although the states had made important progress, most still had major gaps in their pandemic plans. As we had reported in June 2008, HHS, in coordination with DHS and other federal agencies, had convened a series of regional workshops for states in five influenza pandemic regions across the country. Because these workshops could be a useful model for sharing information and building relationships, we recommended that HHS and DHS, in coordination with other federal agencies, convene additional meetings with states to address the gaps in the states' pandemic plans. As we reported in February 2009, HHS and DHS generally concurred with our recommendation but have not yet held these additional meetings. HHS and DHS indicated at the time of our February 2009 report that while no additional meetings had been planned, states would have to continuously update their pandemic plans and submit them for review. We have also reported on the need for more guidance from the federal government to help states and localities in their planning. In June 2008, we reported that although the federal government has provided a variety of guidance, officials of the states and localities we reviewed told us that they would welcome additional guidance from the federal government in a number of areas, such as community containment, to help them better plan and exercise for an influenza pandemic. Other state and local officials have identified similar concerns. According to the National Governors Association's (NGA) September 2008 issue brief on states' pandemic preparedness, states are concerned about a wide range of school-related issues, including when to close schools or dismiss students, how to maintain curriculum continuity during closures, and how to identify the appropriate time at which classes could resume. NGA also reported that states generally have very little awareness of the status of disease outbreaks, either in real time or in near real time, to allow them to know precisely when to recommend a school closure or reopening in a particular area. NGA reported that states wanted more guidance in the following areas: (1) workforce policies for the health care, public safety, and private sectors; (2) schools; (3) situational awareness, such as information on the arrival or departure of a disease in a particular state, county, or community; (4) public involvement; and (5) public-private sector engagement. The private sector has also been planning for an influenza pandemic, but many challenges remain.
To better protect critical infrastructure, federal agencies and the private sector have worked together across a number of sectors to plan for a pandemic, including developing general pandemic preparedness guidance, such as checklists for continuity of business operations during a pandemic. However, federal and private sector representatives have acknowledged that sustaining preparedness and readiness efforts for an influenza pandemic is a major challenge, primarily because of the uncertainty associated with a pandemic, limited financial and human resources, and the need to balance pandemic preparedness with other, more immediate priorities, such as responding to outbreaks of foodborne illnesses in the food sector and, now, the effects of the financial crisis. In our March 2007 report on preparedness for an influenza pandemic in one of these critical infrastructure sectors—the financial markets—we found that despite significant progress in preparing markets to withstand potential disease pandemics, securities and banking regulators could take additional steps to improve the readiness of the securities markets. The seven organizations that we reviewed—which included exchanges, clearing organizations, and payment-system processors—were working on planning and preparation efforts to reduce the likelihood that a worldwide influenza pandemic would disrupt their critical operations. However, only one of the seven had completed a formal plan. To increase the likelihood that the securities markets will be able to function during a pandemic, we recommended that the Chairman, Federal Reserve; the Comptroller of the Currency; and the Chairman, Securities and Exchange Commission (SEC), consider taking additional actions to ensure that market participants adequately prepare for a pandemic outbreak. In response to our recommendation, the Federal Reserve and the Office of the Comptroller of the Currency, in conjunction with the Federal Financial Institutions Examination Council and the SEC, directed all banking organizations under their supervision to ensure that the pandemic plans the financial institutions have in place are adequate to maintain critical operations during a severe outbreak. SEC issued similar requirements to the major securities industry market organizations. Improving the nation's capability to respond to catastrophic disasters, such as an influenza pandemic, is essential. Following a mass casualty event, health care systems would need the ability to adequately care for a large number of patients or patients with unusual or highly specialized medical needs. The ability of local or regional health care systems to deliver services could be compromised, at least in the short term, because the volume of patients would far exceed the available hospital beds, medical personnel, pharmaceuticals, equipment, and supplies. Further, in natural and man-made disasters, assistance from other states may be used to increase capacity, but in a pandemic, states would likely be reluctant to provide assistance to each other because of scarce resources and fears of infection. Over the last few years, Congress has provided over $13 billion in supplemental funding for pandemic preparedness.
The $5.62 billion that Congress provided in supplemental funding to HHS in 2006 was for, among other things, (1) monitoring disease spread to support rapid response, (2) developing vaccines and vaccine production capacity, (3) stockpiling antivirals and other countermeasures, (4) upgrading state and local capacity, and (5) upgrading laboratories and research at CDC. The majority of this supplemental funding—about 77 percent—was allocated for developing antivirals and vaccines for a pandemic and purchasing medical supplies. Also, a portion of the funding that went to states and localities for preparedness activities—$170 million—was allocated for state antiviral purchases for their state stockpiles. In June 2009, Congress approved and the President signed a supplemental appropriations act that included $7.7 billion for pandemic flu preparedness, including the development and purchase of vaccines, antivirals, necessary medical supplies, diagnostics, and other surveillance tools, as well as assistance for international efforts and responses to international needs relating to the 2009 H1N1 influenza outbreak. This amount included $1.85 billion to be available immediately and $5.8 billion to be available subsequently in the amounts designated by the President as emergency funding requirements. On July 10, 2009, HHS announced its plans to use the $350 million designated for upgrading state and local capacity for additional grants to states and territories to prepare for the H1N1 pandemic and seasonal influenza. State public health departments will receive $260 million, and hospitals will receive $90 million of these grant funds. An outbreak will require additional capacity in many areas, including the procurement of additional patient treatment space and the acquisition and distribution of medical and other critical supplies, such as antivirals and vaccines for an influenza pandemic. In a severe pandemic, demand would exceed the available hospital bed capacity, which would be further challenged by the existing shortages of health care providers and their potential high rates of absenteeism. In addition, the availability of antivirals and vaccines could be inadequate to meet demand because of limited production, distribution, and administration capacity. The federal government has provided some guidance, in addition to funding, to help states plan for additional capacity. For example, the federal government provided guidance for states to use when preparing for medical surge and on prioritizing target groups for an influenza pandemic vaccine. Some state officials reported, however, that they had not begun work on altered standards of care guidelines, that is, guidelines for providing care while allocating scarce equipment, supplies, and personnel in a way that saves the largest number of lives in a mass casualty event, or had not completed drafting the guidelines, because of the difficulty of addressing the medical, ethical, and legal issues involved. We recommended that HHS serve as a clearinghouse for sharing among the states altered standards of care guidelines developed by individual states or medical experts. HHS did not comment on the recommendation, and it has not indicated whether it plans to implement it.
Further, in our June 2008 report on state and local planning and exercising efforts for an influenza pandemic, we found that state and local officials wanted federal influenza pandemic guidance on facilitating medical surge, which was also one of the areas that the HHS-led assessment rated as having "many major gaps" nationally among states' influenza pandemic plans. The National Pandemic Implementation Plan emphasizes that government and public health officials must communicate clearly and continuously with the public throughout a pandemic. Accordingly, HHS, DHS, and other federal agencies have shared pandemic-related information in a number of ways, such as through Web sites, guidance, and state summits and meetings, and are using established networks, including coordinating councils for critical infrastructure protection, to share information about pandemic preparedness, response, and recovery. Federal agencies have established an influenza pandemic Web site (www.pandemicflu.gov) and disseminated pandemic preparedness checklists for workplaces, individuals and families, schools, health care, community organizations, and state and local governments. However, state and local officials from all of the states and localities we interviewed for our June 2008 report on state and local pandemic planning and exercising wanted additional influenza pandemic guidance from the federal government on specific topics, such as how to implement community interventions (including closing schools), fatality management, and facilitating medical surge. Although the federal government had issued some guidance at the time of our review, it may not have reached state and local officials or may not have addressed the particular concerns or circumstances of the state and local officials we interviewed. More recently, CDC has issued additional guidance on a number of topics related to responding to the H1N1 outbreak. CDC issued interim guidance on school closures, which originally recommended that schools with confirmed H1N1 influenza close. Once it became clearer that the disease severity of H1N1 was similar to that of seasonal influenza and that the virus had already spread within communities, CDC determined that school closure would be less effective as a measure of control and issued updated guidance recommending that schools not close for suspected or confirmed cases of influenza. However, the change in guidance caused confusion, underscoring the importance of clear and continuous communication with the public throughout a pandemic. In addition, private sector officials have told us that they would like clarification about the respective roles and responsibilities of the federal and state governments during an influenza pandemic emergency, such as in state border closures and influenza pandemic vaccine distribution. While the National Pandemic Strategy and Implementation Plan identify overarching goals and objectives for pandemic planning, the documents are not altogether clear on the roles, responsibilities, and requirements to carry out the plan. Some of the action items in the National Pandemic Implementation Plan, particularly those that are to be completed by state, local, and tribal governments or the private sector, do not identify an entity responsible for carrying out the action. Most of the implementation plan's performance measures consist of actions to be completed, such as disseminating guidance, but the measures are not always clearly linked with intended results.
For example, one action item asked that all HHS-, Department of Defense-, and Veterans Administration-funded hospitals and health facilities develop, test, and be prepared to implement infection control campaigns for pandemic influenza within 3 months. However, the associated performance measure is not clearly linked to the intended result. This performance measure states that infection control guidance should be developed and disseminated through www.pandemicflu.gov and other channels. This action would not directly result in developing, testing, and preparing to implement infection control campaigns. This lack of clear linkage makes it difficult to ascertain whether progress has in fact been made toward achieving the national goals and objectives described in the National Pandemic Strategy and Implementation Plan. Without a clear linkage to anticipated results, these measures of activities do not give an indication of whether the purpose of the activity is achieved. In addition, as discussed earlier, the National Pandemic Implementation Plan does not establish priorities among its 324 action items, which becomes especially important as agencies and other parties strive to effectively manage scarce resources and ensure that the most important steps are accomplished. Moreover, the National Pandemic Strategy and its Implementation Plan do not provide information on the financial resources needed to implement them; identifying resources is one of the six characteristics of an effective national strategy that we have identified. As a result, the documents do not provide a picture of priorities or how adjustments might be made in view of resource constraints. As discussed earlier, the National Pandemic Implementation Plan also required federal agencies to develop operational pandemic plans to describe, among other requirements, how each agency will protect its workforce and maintain essential operations and services in the event of a pandemic. We recently reported, however, that there is no mechanism in place to monitor and report on agencies' progress in developing these plans. Under the Implementation Plan, DHS was charged with this responsibility, but instead the HSC simply requested that agencies certify to the council that they were addressing in their plans the applicable elements of a pandemic checklist. The certification process did not provide for monitoring and reporting on agencies' abilities to continue operations in the event of a pandemic while protecting their employees. Moreover, even as envisioned under the Implementation Plan, the report was to be directed to the Executive Office of the President, with no provision for the report to be made available to Congress. As noted earlier, given agencies' uneven progress in developing their pandemic plans, monitoring and reporting would enhance agencies' accountability for protecting their employees during a pandemic. We therefore recommended that the HSC request that the Secretary of Homeland Security monitor and report to the Executive Office of the President on the readiness of agencies to continue their operations while protecting their employees in the event of a pandemic. We also suggested that, to help support its oversight responsibilities, Congress may want to consider requiring DHS to report to it on agencies' progress in developing and implementing their plans, including any key challenges and gaps in the plans.
The HSC noted that it will give serious consideration to the report’s findings and recommendations, and DHS said the findings and recommendations will contribute to its efforts to ensure that government entities are well prepared for what may come next.

The current H1N1 influenza pandemic should serve as a powerful reminder that the threat of a more virulent pandemic, which seemed to fade from public awareness in recent years, never really disappeared. While federal agencies have taken action on many of our recommendations, about half of the recommendations we have made over the past 3 years are still not fully implemented. Given the change in administration and the associated transition of senior federal officials, it is essential that the shared leadership roles established between HHS and DHS, along with other responsible federal officials, be rigorously tested and exercised. Likewise, DHS should continue to work with other federal agencies and private sector members of the critical infrastructure coordinating councils to help address coordination challenges and clarify the roles and responsibilities of the federal and state governments. DHS and HHS should also, in coordination with other federal agencies, continue to work with state and local governments to help them address identified gaps in their pandemic planning. Moreover, the 3-year period covered by the National Pandemic Implementation Plan is now over, and it will be important for the HSC to establish a process for updating the plan so that the update can address the gaps we have identified, as well as lessons learned from the current H1N1 outbreak. Finally, greater monitoring and reporting of agencies’ progress in planning to protect their workers during a pandemic are needed to ensure that agencies are ready to continue operations while protecting their employees in the event of a pandemic.

Influenza pandemics, as I noted earlier, differ from other types of disasters in that they are not necessarily discrete events. While the current H1N1 pandemic seems to be relatively mild, the virus could become more virulent this fall or winter. Given this risk, the administration and federal agencies should use this opportunity to turn their attention to filling in some of the planning and preparedness gaps our work has pointed out, while time is still on our side.

Chairman Thompson and Members of the Committee, this concludes my prepared statement. I would be happy to respond to any questions you may have.

For further information regarding this statement, please contact Bernice Steinhardt, Director, Strategic Issues, at (202) 512-6543 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Sarah Veale (Assistant Director), Maya Chakko, David Fox, Bill Doherty, Ellen Grady, Karin Fangman, and members of GAO’s Pandemic Working Group.

Recommendation: The Homeland Security Council should request that the Secretary of Homeland Security monitor and report to the Executive Office of the President on the readiness of agencies to continue their operations while protecting their employees in the event of an influenza pandemic.

Status: The Homeland Security Council commented that the council will give serious consideration to the report’s findings and recommendations.
DHS commented that the report’s findings and recommendations will contribute to its efforts to ensure that government entities are well prepared for what may come next.

Recommendation: The Secretary of Health and Human Services should expeditiously finalize guidance to assist state and local jurisdictions in determining how to effectively use limited supplies of antivirals and pre-pandemic vaccine in a pandemic, including prioritizing target groups for pre-pandemic vaccine.

Status: In December 2008, HHS released final guidance on antiviral drug use during an influenza pandemic. HHS officials informed us that they are drafting the guidance on pre-pandemic influenza vaccination.

Recommendation: The Secretaries of Health and Human Services and Homeland Security should, in coordination with other federal agencies, convene additional meetings of the states in the five federal influenza pandemic regions to help them address identified gaps in their planning.

Status: HHS and DHS officials indicated that while no additional meetings are planned at this time, states will have to continuously update their pandemic plans and submit them for review.

Recommendation: The Secretary of Homeland Security should work with sector-specific agencies and lead efforts to encourage the government and private sector members of the councils to consider and help address the challenges that will require coordination between the federal and private sectors involved with critical infrastructure, and within the various sectors, in advance of, as well as during, a pandemic.

Status: DHS officials informed us that the department is working on initiatives such as developing pandemic contingency plan guidance tailored to each of the critical infrastructure sectors and holding a series of webinars with a number of the sectors.

Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy, GAO-07-781, August 14, 2007

Recommendation (1): The Secretaries of Homeland Security and Health and Human Services should work together to develop and conduct rigorous testing, training, and exercises for an influenza pandemic to ensure that federal leadership roles are clearly defined and understood and that leaders are able to effectively execute shared responsibilities to address emerging challenges. Once the leadership roles have been clarified through testing, training, and exercising, the Secretaries should ensure that these roles are clearly understood by state, local, and tribal governments; the private and nonprofit sectors; and the international community.

Status (1): HHS and DHS officials stated that several influenza pandemic exercises had been conducted since November 2007 that involved both agencies and other federal officials, but it is unclear whether these exercises rigorously tested federal leadership roles in a pandemic.

Influenza Pandemic: Opportunities Exist to Clarify Federal Leadership Roles and Improve Pandemic Planning, GAO-07-1257T, September 26, 2007

Recommendation (2): The Homeland Security Council should establish a specific process and time frame for updating the National Pandemic Implementation Plan. The process should involve key nonfederal stakeholders and incorporate lessons learned from exercises and other sources.
The National Pandemic Implementation Plan should also be improved by including the following information in the next update: (a) resources and investments needed to complete the action items and where they should be targeted, (b) a process and schedule for monitoring and publicly reporting on progress made on completing the action items, (c) clearer linkages with other strategies and plans, and (d) clearer descriptions of relationships or priorities among action items and greater use of outcome-focused performance measures.

Status (2): HSC did not comment on the recommendation and has not indicated whether it plans to implement it.

Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response, GAO-07-652, June 11, 2007

Recommendation (1): The Secretaries of Agriculture and Homeland Security should develop a memorandum of understanding that describes how USDA and DHS will work together in the event of a declared presidential emergency or major disaster, or an Incident of National Significance, and test the effectiveness of this coordination during exercises.

Status (1): Both USDA and DHS officials told us that they have taken preliminary steps to develop additional clarity and better define their coordination roles. For example, the two agencies meet on a regular basis to discuss such coordination.

Recommendation (2): The Secretary of Agriculture should, in consultation with other federal agencies, states, and the poultry industry, identify the capabilities necessary to respond to a probable scenario or scenarios for an outbreak of highly pathogenic avian influenza. The Secretary should also use this information to develop a response plan that identifies the critical tasks for responding to the selected outbreak scenario and, for each task, identifies the responsible entities, the location of resources needed, time frames, and completion status. Finally, the Secretary should test these capabilities in ongoing exercises to identify gaps and ways to overcome them.

Status (2): USDA officials told us that the agency has created a draft preparedness and response plan that identifies federal, state, and local actions, timelines, and responsibilities for responding to highly pathogenic avian influenza, but the plan has not yet been issued.

Recommendation (3): The Secretary of Agriculture should develop standard criteria for the components of state response plans for highly pathogenic avian influenza, enabling states to develop more complete plans and enabling USDA officials to review them more effectively.

Status (3): USDA told us that it has drafted large volumes of guidance documents that are available on a secure Web site. However, the guidance is still under review, and it is not clear what standard criteria from these documents USDA officials and states should apply when developing and reviewing plans.

Recommendation (4): The Secretary of Agriculture should focus additional work with states on how to overcome potential problems associated with unresolved issues, such as the difficulty of locating backyard birds and disposing of carcasses and materials.

Status (4): USDA officials told us that the agency has developed online tools to help states make effective decisions about carcass disposal. In addition, USDA has created a secure Internet site that contains draft guidance for disease response, including highly pathogenic avian influenza, with a discussion of many of the unresolved issues.
Recommendation (5): The Secretary of Agriculture should determine the amount of antiviral medication USDA would need to protect animal health responders under various highly pathogenic avian influenza scenarios. The Secretary should also determine how to obtain and provide supplies within 24 hours of an outbreak.

Status (5): USDA officials told us that the National Veterinary Stockpile contains enough antiviral medication to protect 3,000 animal health responders for 40 days. However, USDA has yet to determine the number of individuals who would need medication based on a calculation of those exposed to the virus under a specific scenario. Further, USDA officials told us that a contract for additional medication for the stockpile, which would better ensure that medications are available in the event of an outbreak of highly pathogenic avian influenza, has not yet been secured.

Related GAO Products

Influenza Pandemic: Greater Agency Accountability Needed to Protect Federal Workers in the Event of a Pandemic. GAO-09-783T. Washington, D.C.: June 16, 2009.

Influenza Pandemic: Increased Agency Accountability Could Help Protect Federal Employees Serving the Public in the Event of a Pandemic. GAO-09-404. Washington, D.C.: June 12, 2009.

Influenza Pandemic: Continued Focus on the Nation’s Planning and Preparedness Efforts Remains Essential. GAO-09-760T. Washington, D.C.: June 3, 2009.

Influenza Pandemic: Sustaining Focus on the Nation’s Planning and Preparedness Efforts. GAO-09-334. Washington, D.C.: February 26, 2009.

Influenza Pandemic: HHS Needs to Continue Its Actions and Finalize Guidance for Pharmaceutical Interventions. GAO-08-671. Washington, D.C.: September 30, 2008.

Influenza Pandemic: Federal Agencies Should Continue to Assist States to Address Gaps in Pandemic Planning. GAO-08-539. Washington, D.C.: June 19, 2008.

Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources. GAO-08-668. Washington, D.C.: June 13, 2008.

Influenza Pandemic: Efforts Under Way to Address Constraints on Using Antivirals and Vaccines to Forestall a Pandemic. GAO-08-92. Washington, D.C.: December 21, 2007.

Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007.

Influenza Pandemic: Federal Executive Boards’ Ability to Contribute to Pandemic Preparedness. GAO-07-1259T. Washington, D.C.: September 28, 2007.

Influenza Pandemic: Opportunities Exist to Clarify Federal Leadership Roles and Improve Pandemic Planning. GAO-07-1257T. Washington, D.C.: September 26, 2007.

Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007.

Emergency Management Assistance Compact: Enhancing EMAC’s Collaborative and Administrative Capacity Should Improve National Disaster Response. GAO-07-854. Washington, D.C.: June 29, 2007.

Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007.

Influenza Pandemic: Efforts to Forestall Onset Are Under Way; Identifying Countries at Greatest Risk Entails Challenges. GAO-07-604. Washington, D.C.: June 20, 2007.

Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652.
Washington, D.C.: June 11, 2007.

The Federal Workforce: Additional Steps Needed to Take Advantage of Federal Executive Boards’ Ability to Contribute to Emergency Operations. GAO-07-515. Washington, D.C.: May 4, 2007.

Financial Market Preparedness: Significant Progress Has Been Made, but Pandemic Planning and Other Challenges Remain. GAO-07-399. Washington, D.C.: March 29, 2007.

Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to Be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006.

Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006.

As the current H1N1 outbreak underscores, an influenza pandemic remains a real threat to our nation. Over the past 3 years, GAO conducted a body of work, consisting of 12 reports and 4 testimonies, to help the nation better prepare for a possible pandemic. In February 2009, GAO synthesized the results of most of this work and, in June 2009, GAO issued an additional report on agency accountability for protecting the federal workforce in the event of a pandemic. GAO’s work points out that while a number of actions have been taken to plan for a pandemic, including developing a national strategy and implementation plan, many gaps in pandemic planning and preparedness still remain. This statement covers six thematic areas: (1) leadership, authority, and coordination; (2) detecting threats and managing risks; (3) planning, training, and exercising; (4) capacity to respond and recover; (5) information sharing and communication; and (6) performance and accountability.

(1) Leadership roles and responsibilities for an influenza pandemic need to be clarified, tested, and exercised, and existing coordination mechanisms, such as critical infrastructure coordinating councils, could be better utilized to address challenges in coordination among the federal, state, and local governments and the private sector in preparing for a pandemic.

(2) Efforts are under way to improve the surveillance and detection of pandemic-related threats, but targeting assistance to countries at the greatest risk has been based on incomplete information, particularly from developing countries.

(3) Pandemic planning and exercising has occurred at the federal, state, and local government levels, but important planning gaps remain at all levels of government. At the federal level, agency planning to maintain essential operations and services while protecting employees in the event of a pandemic is uneven.

(4) Further actions are needed to address the capacity to respond to and recover from an influenza pandemic, which will require additional capacity in patient treatment space and the acquisition and distribution of medical and other critical supplies, such as antivirals and vaccines.
(5) Federal agencies have provided considerable guidance and pandemic-related information to state and local governments, but they could augment their efforts with additional information on school closures, state border closures, and other topics.

(6) Performance monitoring and accountability for pandemic preparedness need strengthening. For example, the May 2006 National Strategy for Pandemic Influenza Implementation Plan does not establish priorities among its 324 action items and does not provide information on the financial resources needed to implement them. Also, greater agency accountability is needed to protect federal workers in the event of a pandemic because there is no mechanism in place to monitor and report on agencies’ progress in developing workforce pandemic plans.

The current H1N1 pandemic should serve as a powerful reminder that the threat of pandemic influenza, which seemed to fade from public awareness in recent years, never really disappeared. While federal agencies have taken action on 13 of GAO’s 24 recommendations, 11 of the recommendations that GAO has made over the past 3 years have not been fully implemented. With the possibility that the H1N1 virus could become more virulent this fall or winter, the administration and federal agencies should use this time to turn their attention to filling in the planning and preparedness gaps GAO’s work has pointed out.
General and flag officers’ quarters are government-provided quarters for military officers with the rank of brigadier general or rear admiral (lower half) (O-7) and above. The services have a total of 685 general and flag officer quarters, of which 372, or about 54 percent, are considered historic. The general policy in the military services is that general and flag officer housing is to be maintained in an excellent state of repair, commensurate with the rank of the occupant and the age and historic significance of the building. Accordingly, general and flag officer housing is expensive to maintain, and the age, size, and historic significance of some of these quarters tend to escalate their operations and maintenance costs, as the following examples show:

Army: The Commandant’s home at Carlisle Barracks was built in 1932. The house is a two-story stone structure with 8,156 square feet of living space and is currently undergoing a major renovation. The residence has an average annual maintenance and repair cost of about $14,000.

Navy: Tingey House, the home of the Chief of Naval Operations, is located in the historic Navy Yard, Washington, D.C. Constructed in 1803, the quarters was one of the earliest buildings erected at the Washington Navy Yard. The home is a 2 1/2-story brick structure containing 12,304 square feet of space and has an average annual maintenance and repair cost of about $27,500.

Marine Corps: The Home of the Commandants—located within the Marine Corps Barracks at Eighth and I Streets S.E., Washington, D.C.—has been the home of the Marine Corps Commandants since its completion in 1806. The Marine Corps considers the quarters as much a museum as a residence. The home is a three-story structure containing approximately 15,605 square feet of space and has an average annual maintenance and repair cost of about $41,811.

Air Force: Carlton House is the home of the Superintendent of the U.S. Air Force Academy and was constructed in the 1930s. The home is a two-story structure with a total of 10,925 square feet of space and has an average annual maintenance and repair cost of about $21,000.

All of these homes are used extensively for official entertainment purposes, and all but the Commandant’s home at Carlisle Barracks are listed on the National Register of Historic Places. The Commandant’s quarters at Carlisle Barracks is nonetheless considered historic and is eligible for listing on the National Register.

The services are to follow DOD Financial Management Regulations and service-specific guidance to prepare budget estimates for major repair projects to general and flag officer quarters. For example, the Navy and Marine Corps justify projects on the basis of mission, life-cycle economics, health and safety, environmental compliance, or quality of life. The services generally hire architectural and engineering firms to inspect and assess the property for projects expected to cost more than $50,000 to determine needed repairs and establish a project cost estimate. Using information developed during the project justification and cost-estimating process, the services prepare budget estimates that are submitted to Congress for approval during the annual appropriations cycle. Congress has acted to control spending associated with maintaining these homes by establishing expense thresholds and reporting and notification requirements.
For example, the services must include in their annual family housing budget submitted to Congress detailed budget justification material explaining the specific maintenance and repair requirements for those homes expected to exceed an annual $35,000 threshold for maintenance and repair expenses.

Section 2601 of Title 10, United States Code, authorizes the service Secretaries to accept, hold, administer, and spend any gift of real or personal property made on the condition that it is used for the benefit—or in connection with the establishment, operation, or maintenance—of an organization under the jurisdiction of their departments. Monetary gifts are accepted and deposited in the Treasury in service-designated accounts. In some instances, these funds have been used to supplement appropriations for renovations to general and flag officer quarters. The Military Construction Appropriation Act for fiscal year 2000 directed that funds appropriated under the act were to be the exclusive source of funds for repair and maintenance of all military family housing, which excluded the use of gift funds to repair or maintain general and flag officer quarters. A year later, however, Congress expressly authorized the use of gift funds pursuant to Section 2601 of Title 10, United States Code, to help fund the construction, improvement, repair, and maintenance of the historic residences at the Marine Corps Barracks at Eighth and I Streets S.E., Washington, D.C. DOD guidance provides the services with a framework for property accountability policies, procedures, and practices.

The 1996 Military Housing Privatization Initiative allows private sector financing, ownership, operation, and maintenance of military family housing, including, in some cases, housing occupied by general and flag officers. The goal of the initiative is to help the services remove inadequate housing from their family housing inventories and improve servicemember morale. Under the program, DOD uses various means to encourage private developers to renovate existing housing or construct new housing on or off military installations. Servicemembers, in turn, may use their housing allowance to pay rent and utilities to live in the privatized housing, and the privatization firms use the housing allowances to pay for the maintenance and repair of the quarters. As of March 2003, the military services had privatized about 28,000 family housing units, only a small number of which were general or flag officer quarters. The services plan to privatize about 183,000 units, or 72 percent of their total family housing inventory, by fiscal year 2007, and these units will increasingly include general and flag officer housing.

With a few exceptions, the services’ reported actual costs for renovation projects for general and flag officer quarters were generally consistent with or less than the budget estimates provided to Congress. For fiscal years 1999 to 2003, of the 197 projects estimated to cost more than $100,000, 184 (about 93 percent) met or came in under their budget estimates, and 13 (about 7 percent) exceeded their budgets. While we did not identify any Air Force renovation projects that exceeded their budgets, we did learn of other concerns about costs associated with Air Force plans to replace and repair general officer quarters. See appendix II for further information on this issue.
Of the 13 over-budget projects, 5 of the 7 Marine Corps projects—4 located at the Marine Corps Barracks at Eighth and I Streets, Washington, D.C., and the other located at Kaneohe, Hawaii—exceeded their budgets by more than 10 percent. The other 2 Marine Corps projects exceeded their budgets by about 9 percent, and the 6 Navy projects exceeded their budgets by less than 2 percent.

The majority of renovation projects stayed within their budgets. Some projects, however, cost less than budgeted because the scope of planned work was reduced or canceled. For example, the Navy identified instances where the scope of work was reduced or canceled because a change in occupancy did not occur as scheduled and planned repair work could not be accomplished. Army housing officials cited examples where the scope of renovation projects was reduced because the contractor’s final bid for lead-based paint and asbestos removal exceeded the government’s estimate; the projects’ scope had to be reduced to avoid exceeding the budgets.

Customer requests for changes and unforeseen repairs were the primary reasons for cost increases to renovation projects. To help minimize costs, housing handbooks provided to general and flag officers occupying government quarters discourage customer-requested changes based on personal preferences and entrust final approval of such changes to the discretion of the installation housing officer or the commanding officer. Although these handbooks seek to limit customer-requested changes, we found numerous approvals of customer-requested changes granted for renovations at the Marine Corps’ Home of the Commandants that contributed to project costs exceeding the budget estimate. Customer-driven requests, such as upgraded kitchen and bathroom renovations, or work that was not included in the original scope of work were responsible for about 45 percent of the total cost increase for the 5 Marine Corps projects that exceeded their budgets by more than 10 percent.

Six Navy projects exceeded their budgets by less than 2 percent. According to the Navy, the overruns were mostly due to planned work costing more than was originally budgeted—a fairly regular occurrence since budgets are submitted 18 to 24 months before the work is accomplished. However, some of the increases occurred due to customer requests, such as additional interior painting, and unforeseen repairs, such as the need to replace an old, broken boiler heating system with a new forced-air system.

Customer-requested changes for the 5 projects that exceeded their budgets by more than 10 percent occurred because the customer, usually the quarters’ occupants, wanted various changes and the housing manager, the commanding officer, and at times the service headquarters acquiesced and approved the changes. For example, at the Marine Corps Barracks Home of the Commandants, where one project exceeded its budget by about 52 percent, customer-requested changes resulted in identifiable cost increases totaling about $338,000. The single largest identifiable increase was due to a customer request for a major kitchen renovation that was not included in the original scope of work and cost more than $197,256. Major cost drivers for the kitchen renovation included cabinets, granite countertops, a butler’s pantry, and flooring that the occupant requested.
Other customer-requested changes included the renovation of attached guest quarters, which involved the construction of public, handicap-accessible restrooms and replacement of a newly installed marble tile floor. Cost increases due to customer requests for Quarters 1, 2, and 4 included requests for upgraded kitchen cabinets and countertops, upgraded bathroom fixtures, and wall-to-wall carpeting.

As noted above, the services provide handbooks to general and flag officers occupying government quarters that address the propriety of, and seek to discourage, customer-requested changes based on personal preferences, with final approval resting with the installation housing officer or the commanding officer. Nevertheless, we found numerous approvals of customer-requested changes for renovations at the Marine Corps’ Home of the Commandants and other quarters at the Marine Corps Barracks that contributed to project costs exceeding the budget estimates. Navy and Army housing officials told us that controlling costs due to customer requests is directly related to a housing officer’s ability to say no to requests that could be perceived as excessive and draw undue public scrutiny upon the service.

For the 5 projects that exceeded their budgets by more than 10 percent, cost increases also stemmed from unforeseen repairs, such as termite damage, and undetected structural deficiencies, such as sagging floor supports, that were not identified during initial inspections. For example, at the Home of the Commandants, identifiable changes due to unforeseen repairs resulted in cost increases totaling about $559,416. The single largest such increase was for the roof. The initial budget estimate was about $192,189, but the architectural and engineering firm that did the initial inspection on which the budget estimate was based did not actually inspect the roof for damage and did not perform destructive testing to look for structural deficiencies. The current roof estimate is about $582,730, an increase of about $390,541, which accounts for roughly 70 percent of the total unforeseen-repair cost increase at the Home of the Commandants. Another unforeseen repair involved replacing a portion of the wood flooring on the first floor because of severe termite damage that was not detected until the old flooring was removed. Again, the deficiency went undetected because destructive testing was not performed. According to service officials, destructive testing is often not accomplished because the quarters’ occupants do not want the testing to interfere with their entertainment responsibilities or the inconvenience of having their homes in disrepair.

Additionally, for the Marine Corps project in Kaneohe, Hawaii, unforeseen historical restoration requirements caused actual renovation costs to exceed the budget estimate by about $47,600, or nearly 25 percent. Marine Corps officials stated that the state historic preservation office wanted the interior walls restored with the same materials used when the house was originally built in 1941; the Marine Corps budget estimate did not include this requirement.

The Army, Navy, and Marine Corps each received private donations of cash, property, or services to furnish and renovate general and flag officer quarters. While the Army and Navy accepted gift funds to furnish quarters, the Marine Corps accepted and used gift funds to both furnish and help renovate the Home of the Commandants.
Although guidance exists to ensure that such gifts are properly accepted, held, and used in accordance with the donor’s wishes, neither the Navy nor the Marine Corps followed these procedures for all gifts associated with furnishing the quarters of the Superintendent of the Naval Academy and the renovation of the Home of the Commandants. Section 2601 of Title 10, United States Code, provides gift acceptance authority to each service Secretary to accept, hold, administer, and spend any gift of real or personal property made on the condition that it is used for the benefit—or in connection with the establishment, operation, or maintenance—of an organization under the jurisdiction of their departments. In addition to this legislative authority, the Secretary of the Navy has issued an instruction to help implement and centralize gift acceptance authority. The Marine Corps implements the Secretary’s policy and re-delegates authority to subordinate commands under its jurisdiction.

Although aware of these procedures, Navy and Marine Corps officials acknowledge that in two projects they did not list nonmonetary gifts on the property accounts and cannot fully account for the gifts made to furnish and renovate two general and flag officer quarters. According to Marine Corps officials, they did not follow the prescribed procedures for accepting and accounting for the estimated $765,500 in nonmonetary gifts (materials such as kitchen cabinets, furniture, wall coverings, draperies, and furniture upholstery) from the Friends of the Home of the Commandants. We contacted the Friends of the Home of the Commandants, which provided us with a listing of donations, valued at a total of $765,500, provided to the Marine Corps to help renovate the Home of the Commandants. After some delay, the Marine Corps provided us with a list of nonmonetary gifts totaling $492,413 from the Friends of the Home of the Commandants but had no documentation showing that the gifts were formally accepted and recorded in property records. According to Marine Corps officials, the Friends of the Home of the Commandants provided the remaining $273,087 in nonmonetary gifts directly to the project contractor; the Marine Corps also did not document that these gifts were formally accepted and accounted for in property records.

Furthermore, Navy and Marine Corps financial records document receipt of about $88,300 donated to the Navy General Gift Fund by the Friends of the Home of the Commandants during fiscal years 1999 through 2003. The Marine Corps, after some delay, produced receipts to account for expenditures of these gift funds to help renovate and furnish the Home of the Commandants, but the Marine Corps property records do not include the items purchased with the funds. These gifts were used to supplement $2,269,000 in appropriations for renovations to the Home of the Commandants.

The Navy and Army also accepted nonmonetary or monetary gifts for furnishings for flag and general officer quarters; these gifts were not used for renovations. The Navy acknowledges receiving about $59,780 in nonmonetary gifts provided by various donors as furnishings to help decorate the home of the Superintendent of the Naval Academy. However, similar to the Marine Corps, the Navy did not properly accept and account for about $3,970 of the gifts in the property records.
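The nonmonetary gift figures cited above reconcile exactly. The short Python sketch below is illustrative only (it is not from the report) and simply works through the arithmetic:

```python
# Illustrative arithmetic only: reconciling the nonmonetary gift figures
# reported for the Home of the Commandants renovation.

friends_reported_total = 765_500  # total donations reported by the Friends group
usmc_listed_gifts = 492_413       # nonmonetary gifts the Marine Corps could list
given_to_contractor = 273_087     # gifts provided directly to the project contractor

unlisted = friends_reported_total - usmc_listed_gifts
assert unlisted == given_to_contractor  # the $273,087 gap matches exactly

share = unlisted / friends_reported_total
print(f"Gifts absent from Marine Corps lists: ${unlisted:,} ({share:.0%} of the total)")
# -> Gifts absent from Marine Corps lists: $273,087 (36% of the total)
```

In other words, the entire gap between the Friends group’s reported total and the Marine Corps list corresponds to the gifts said to have gone directly to the contractor, none of which were documented in property records.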
The Army properly accepted $50,000 in furnishings from the Army War College Foundation for the home of the Commandant of the Army War College at Carlisle Barracks.

DOD and the military services could lose visibility over spending to maintain and repair an increasing number of privatized general and flag officer housing units because there is no consistent DOD-wide policy requiring review of maintenance and repair projects over certain dollar thresholds. By the end of fiscal year 2003, the services had privatized 65 of their 784 general and flag officer quarters and planned to privatize 426, or 54 percent, by the end of fiscal year 2008. DOD has no policy requiring the services to review renovation costs for these homes, as is done for maintenance and repair projects of more than $35,000 for government-owned quarters. However, the Air Force has developed draft guidance, expected to be issued in May 2004, that will provide more visibility and accountability over spending to operate and maintain privatized general officer housing. The Navy and the Marine Corps have also developed draft guidance that requires headquarters approval for all renovation projects over a certain dollar threshold. No such policy is under development in the Army.

Currently, all service headquarters are required to review any renovation project exceeding $35,000 for a government-owned general or flag officer quarters, but there is no such requirement for renovation projects involving privatized general and flag officer quarters. Recognizing the need for direction, the Navy, Marine Corps, and Air Force are developing draft guidance and procedures that will provide more visibility over spending to operate and maintain privatized general and flag officer housing. For example, the Air Force draft guidance applies the same project approvals to renovations of privatized general officer homes as currently exist for government-owned homes, that is, for all renovation projects over $35,000. Likewise, the Navy and the Marine Corps have developed draft guidance for internally reviewing annual operating budgets for privatized housing that would require approval by Navy or Marine Corps headquarters officials for costs that exceed $50,000 in one year for any house.

The Army has no plans to issue additional guidance regarding costs to maintain and repair privatized housing. According to Army officials, annual operating budgets for privatized housing are reviewed by headquarters officials, which they believe will provide adequate visibility over renovations to privatized housing. We agree that reviewing annual budgets provides visibility over renovation costs, but we question its ability to provide oversight where renovation costs for selected residences are higher than the norm.

The services’ procedures for developing cost estimates for renovations to general and flag officer quarters generally produce budget estimates that are consistent with the projects’ actual costs. However, Marine Corps officials approved costly customer-requested changes based on personal preferences, notwithstanding guidance in handbooks discouraging the approval of such requests. The Marine Corps also failed to follow established guidance and procedures to properly accept and account for gifts, especially nonmonetary gifts, used to help renovate and furnish the Home of the Commandants. Thus, it has no assurance that the nonmonetary gifts remain in its possession.
Finally, DOD and the military services could lose visibility over renovations to general and flag officer quarters that are privatized. While some services are taking steps to ensure that renovation projects over certain dollar thresholds are reviewed internally, there is no consistent DOD-wide guidance.

We recommend that the Secretary of Defense take the following three actions: direct the Secretary of the Navy to (1) reemphasize the importance of limiting customer-driven changes to renovation projects for general and flag officer housing and (2) properly account for all gifts accepted and used to help renovate the Home of the Commandants of the Marine Corps. Furthermore, we recommend that the Secretary of Defense direct the Under Secretary for Acquisition, Technology, and Logistics to ensure the standardization and periodic review of expenditure levels for individual privatized units on a programmatic basis, to include general and flag officer quarters, with periodic reports to the Office of the Secretary of Defense.

In commenting on a draft of this report, DOD concurred with our first and second recommendations and did not concur with the third. With regard to the first two recommendations, DOD indicated that the Navy has agreed to reemphasize the importance of limiting customer-driven changes to renovation projects for general and flag officer housing, to properly account for all gifts accepted and used to help renovate the Home of the Commandants of the Marine Corps, and to incorporate accountability measures into revisions of Secretary of the Navy guidance governing the general and flag officer quarters program. However, DOD did not provide a time frame for accomplishing these actions.

Our draft report also contained a third recommendation asking the Secretary of Defense to direct the Under Secretary for Acquisition, Technology, and Logistics to develop departmentwide guidance that provides similar project review and approval for renovation projects for privatized general and flag officer housing as is required for government-owned quarters over certain dollar thresholds. DOD did not agree with that recommendation, expressing the view that extending the same government oversight to privatized housing is contrary to the fundamental tenets of privatization. DOD added that projects are currently monitored to protect government interests, including expenditure levels on individual units, but that the monitoring is not linked to a specific type of housing such as general and flag officer quarters. DOD indicated that although it intends to continue to rely on private sector cost-control mechanisms, it would review standardization of individual unit expenditure levels on a programmatic basis. Such action, to the extent it incorporates general and flag officer housing, meets the intent of our recommendation. Accordingly, we refined our recommendation to better reflect this intent and stay within the parameters of the privatization program.

Additionally, the Principal Assistant Deputy Under Secretary commented that our report did not capture the Air Force’s response to issues the DOD Inspector General raised concerning the Air Force’s plans to renovate or replace general officer housing. As we note in appendix II, the Air Force disagreed with the Inspector General’s findings.
This disagreement appears to be based largely on differences between the Air Force and the Inspector General over individual renovation projects versus the Air Force’s broader strategic plans for addressing general officer quarters collectively and upgrading the housing to today’s standards, rather than undertaking only immediate repair needs. Other technical comments are incorporated in the report where appropriate. The Principal Assistant Deputy Under Secretary’s comments are included in appendix III of this report.

We are sending copies of this report to interested congressional committees; the Secretaries of Defense, Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-8412 or Michael Kennedy, Assistant Director, at (202) 512-8333 if you or your staff have any questions. Major contributors to this report were Claudia Dickey, Jane Hunt, Richard Meeks, and Michael Zola.

We performed our work at the headquarters offices responsible for general and flag officer housing at the Army, the Navy, the Marine Corps, and the Air Force. At each location, we reviewed applicable policies, procedures, and related documents and interviewed officials responsible for the family housing and general and flag officer quarters programs. We also visited and met with officials from several military installations, including Fort McPherson, Georgia, as well as Fort McNair, the Washington Navy Yard, the Marine Corps Barracks at Eighth and I Streets, and Bolling Air Force Base, all located in Washington, D.C. At each of these locations we toured general and flag officer quarters that had recently been renovated, were undergoing renovation, or had not been renovated. We also discussed our review with officials of Housing and Competitive Sourcing, Office of the Secretary of Defense, and with DOD’s Office of the Inspector General.

To determine how the actual costs of renovation projects for general and flag officer quarters compared with the service budget estimates provided to Congress, we reviewed all renovation projects of more than $100,000 for fiscal years 1999 through 2003. We obtained the budget estimate for each of these projects from the service budget submissions provided to Congress for the fiscal year, and we obtained the reported actual obligations for each renovation project from the military services. We compared this information to determine which projects were completed for less than or more than the budget. We did not validate service budget and reported obligation data, but we did discuss data reliability with responsible service officials and obtained information from them on the steps they have taken to ensure the data’s reliability. On this basis, we believe that the data we used were sufficiently reliable for the purposes of this report.

To identify the primary reasons for any cost increases and the services’ procedures to control them, we held discussions with responsible service family housing, engineering, comptroller, general counsel, and command officials about the renovation projects that exceeded their budgets. We also reviewed and analyzed documentation supporting the reasons for cost increases.
To determine the services’ accountability over gifts provided to help renovate general and flag officer quarters, we reviewed applicable laws and interviewed cognizant officials to identify those quarters with major renovations of more than $100,000 during fiscal years 1999 through 2003 that received gifts to help with the renovations. We identified one Army, one Navy, and one Marine Corps quarters that received gifts, either monetary or nonmonetary, during the period of our review. The Army gifts ($50,000 in nonmonetary items, such as furniture and drapery material) and the Navy gifts ($51,952 in nonmonetary items, such as rugs, and $7,830 in cash) were for furnishings for the quarters. The Marine Corps used gifts, monetary and nonmonetary, to help with the renovation of a general officer quarters.

To determine how the services accounted for monetary and nonmonetary gifts, we reviewed DOD and service gift fund and gift acceptance regulations and guidance; interviewed cognizant service officials; and reviewed administrative, contract, funding, and accounting documents. Since only the Marine Corps accepted gifts intended to help with the renovation of a general officer quarters, we focused additional attention on determining how the Marine Corps accounted for monetary and nonmonetary gifts provided to help renovate the Home of the Commandants. We compared the Marine Corps listing of gifts received with a listing of gifts provided by the Friends of the Home of the Commandants, the primary contributor of gifts for the Home of the Commandants during fiscal years 1999 through 2003. We also asked to review property records showing the Marine Corps’ receipt of these gifts; the Marine Corps was unable to provide receipts or produce property records for the nonmonetary gifts. Further, we compared the Navy Comptroller’s reported balance for the Navy General Gift Fund for the Marine Corps Barracks at Eighth and I with the expenditures and supporting documentation reported by the Marine Corps Barracks. The Marine Corps could not provide documentation to support all Navy General Gift Fund expenditures as reported by the Navy Comptroller. As a result, we were unable to verify the total amount of gifts, monetary and nonmonetary, received by the Marine Corps to help renovate the Home of the Commandants. Where possible, however, we reconstructed the flow of gifts to the Marine Corps and reconciled the individual gifts we could identify.

To assess the extent to which DOD and the services have issued guidance to provide visibility and control over costs associated with renovation projects for privatized general and flag officer quarters, we interviewed Office of the Secretary of Defense and service officials responsible for family housing and privatization. Where available, we also obtained and reviewed the services’ draft guidance regarding review of renovations to privatized housing.

While we did not identify any Air Force renovation projects that exceeded their budgets, we did learn of other concerns about costs associated with Air Force plans to replace and repair general officer quarters. The DOD Inspector General recently identified issues concerning the Air Force’s plans to renovate or replace general officer quarters, questioning $21.3 million of the $73.7 million in Air Force General Officer Quarters Master Plan requirements because its analysis showed that the assessment methodology did not always reflect existing conditions.
The Air Force issued a master plan in August 2002 to identify whole-house investment requirements for its general officer quarters (GOQ). The master plan included a prioritized operations and maintenance plan for each GOQ to manage and minimize maintenance and repair expenditures that may become necessary before the execution of a whole-house improvement project. For each of its 267 general officer quarters, the Air Force developed an individual facility profile consisting of a description of the home, a detailed analysis of existing conditions and functional deficiencies, recommendations for maintenance and repair, house plan suitability recommendations to correct functional deficiencies and bring the unit up to Air Force standards, and the estimated cost to perform a whole-house improvement project. Incorporated within the profile are a condition assessment score for each of the home’s major systems and subsystems and house plan suitability scores, which are combined into an overall composite score for each general officer quarters. The assessments indicated that 219, or 82 percent, of the 267 homes required whole-house improvement projects to resolve deficiencies. Based on this information, the Air Force developed a plan to renovate 203 homes at a cost of about $52.3 million and replace 64 others at a cost of about $21.4 million, for an overall plan cost of about $73.7 million.

In a January 23, 2004, letter to the Air Force Deputy Chief of Staff for Installations and Logistics, the DOD Inspector General reported that the Air Force’s estimate for renovating and replacing general officer quarters may be overstated by $21.3 million because the profile recommendations were not consistent with Air Force assessments of existing conditions. According to the Inspector General, some of the homes’ systems, such as the roof, structural components, or the house plan, met standards or needed only minor maintenance and repair, but the Air Force recommended them for replacement, relocation, or reconfiguration. For example, one house at Bolling Air Force Base, Washington, D.C., had a system with a condition rating of “good,” “indicating a fully serviceable condition which met standards,” but the Air Force master plan included a recommendation to reconfigure rooms and interior walls in the home at a cost of $190,000. The Inspector General concluded that the design of the Air Force’s condition assessment matrix tool might have contributed to the inconsistencies between existing conditions and maintenance and repair recommendations. The Inspector General also reported that the inconsistency of Air Force recommendations with existing condition assessments demonstrated that the Air Force did not always consider where a home was in its life cycle.

Additionally, DOD Inspector General officials told us separately of their concerns about Air Force plans to replace two homes at Bolling Air Force Base that, under original Air Force plans, were to be renovated. The original fiscal year 2002 project recommended renovating the homes for an estimated $345,000 per home. However, because the Air Force designated these two homes as Special Command Position quarters, which carry additional space requirements for an enlisted aide and for added entertainment, the Air Force now estimates the renovation will cost $555,000 per home, an increase of $210,000; this renovation cost is more than 70 percent of the estimated cost to replace each house.
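The master plan figures above are internally consistent. As a purely illustrative check (this sketch is not from the report or the master plan), the per-home averages implied by the plan can be computed as follows:

```python
# Illustrative arithmetic only: checking the Air Force GOQ master plan totals
# and the per-home averages they imply. All dollar figures are from the text.

renovate_count, renovate_total = 203, 52_300_000
replace_count, replace_total = 64, 21_400_000

assert renovate_count + replace_count == 267        # all GOQs covered by the plan
plan_total = renovate_total + replace_total
print(f"Overall plan cost: ${plan_total / 1e6:.1f} million")                # $73.7 million
print(f"Average renovation cost: ${renovate_total / renovate_count:,.0f}")  # ~$257,635
print(f"Average replacement cost: ${replace_total / replace_count:,.0f}")   # ~$334,375

# The two Bolling Special Command Position homes discussed above:
original, revised = 345_000, 555_000
print(f"Per-home increase: ${revised - original:,}")                        # $210,000
```

Note that the $555,000 revised renovation estimate for the Bolling homes well exceeds the roughly $334,000 average replacement cost in the master plan; presumably the larger space requirements of these Special Command Position quarters make their replacement cost higher than the plan-wide average.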
Because the revised renovation estimate exceeded 70 percent of the estimated replacement cost, the Air Force recommended that each house be replaced rather than renovated as originally planned. Air Force officials stated that the $210,000 increase in estimated renovation costs was due to cost escalation from the 2002 project to the present, as well as about $110,000 for structural work to modernize and expand the kitchen, provide an enlisted aide’s office area, and correct functional deficiencies in the dining room to provide more usable space; about $30,000 for a two-car garage; and the remaining approximately $70,000 to address various unforeseen environmental remediation, force protection, and basement waterproofing requirements, as well as other electrical, plumbing, floor repair, cabinet, countertop, and appliance work not included in the original estimate.

Because the homes are eligible for listing on the National Register of Historic Places, the Air Force must seek approval of its plans from the District of Columbia Historic Preservation Office. According to the Air Force, it initiated contact with the office in October 2002 and is currently proceeding with the regulatory process to obtain approval of its plans. The District of Columbia Historic Preservation Office has not yet approved the Air Force’s plans. The Air Force disagreed with the Inspector General’s findings.

Recent cost increases in renovation projects for general and flag officer quarters raised questions about the services’ management of the programs. GAO was asked to determine (1) how actual costs of renovation projects for general and flag officer housing compare with service budget estimates provided to Congress and (2) the primary reasons for any increases and the services’ procedures to control cost increases. Additionally, GAO is presenting observations about the services’ accountability over gifts provided to help renovate some general and flag officer quarters and the extent to which Department of Defense (DOD) guidance provides visibility and control over costs associated with renovation projects for privatized general and flag officer quarters.

With few exceptions, the services’ reported costs for renovation projects for general and flag officer quarters were generally consistent with budget estimates provided to Congress. For fiscal years 1999 to 2003, GAO found that 184, or 93 percent, of the 197 renovation projects over $100,000 cost less than or the same as budget estimates. Of the remaining 13 projects (6 Navy and 7 Marine Corps) that exceeded cost estimates, 5 Marine Corps projects exceeded their budgets by more than 10 percent.

Customer-requested changes and unforeseen repairs were the main reasons for cost increases to renovation projects. For the 5 projects that exceeded their budgets by more than 10 percent, about 45 percent of the increased costs were for customer-driven changes, 53 percent for unforeseen repairs, and 2 percent could not be determined. Though the services have guidance to limit customer-requested changes, the Marine Corps approved many such changes that contributed to project costs exceeding budgets. Customer requests included upgraded kitchen and bathroom renovations or initially unplanned work. Unforeseen repairs, such as termite damage or unexpected historic preservation requirements, occurred because problems were not identified in the inspections on which the estimates were based.

Military services did not properly account for gifts used for general officer quarters in two instances, one involving renovation costs.
In that instance, the Marine Corps did not comply with existing regulations requiring it to properly accept and account for all gifts used to renovate the Home of the Commandants. The Friends of the Home of the Commandants told GAO that it provided about $765,500 in nonmonetary materials and services (e.g., furnishings and construction labor). However, the Marine Corps could list nonmonetary gifts totaling only $492,413 because it did not follow specified gift acceptance and accounting procedures. Navy General Gift Fund records show receipt of an additional $88,300 in monetary gifts from the Friends of the Home of the Commandants. The Marine Corps has receipts for monetary expenditures but not property records for the items purchased with the gift funds. The Navy and Army also accepted gifts to furnish general and flag officer homes. Of those, the Navy did not properly accept and account for about $3,970 in nonmonetary gifts.

DOD and the military services could lose visibility over housing renovation costs for privatized general and flag officer homes. DOD does not require review of renovation costs for these quarters, such as costs over $35,000, as is required for government-owned quarters. The Navy, Marine Corps, and Air Force are developing guidance to increase visibility and accountability over spending for these quarters, but the draft guidance is not consistent. Although the services have privatized only 65 of their 784 general and flag officer quarters, they plan to privatize 426, or 54 percent, by fiscal year 2008.
FDA’s overall mission is to protect the public from selected domestic or imported foods, drugs, cosmetics, biological products, and medical devices and from products that make fraudulent or misleading claims that might threaten public health and safety. On matters relating to its import operations, FDA’s Office of Regulatory Affairs provides guidance and systems support, and performs planning, budgeting, and reporting activities for 6 regional offices, 21 district offices, and about 130 resident inspection posts. Imported products can enter the United States at seaports, airports, courier hubs, and border crossings. The volume of import entries subject to FDA regulations has been increasing over the last 20 years from about 590,000 entries in 1975 to about 1.6 million entries currently, and is expected to reach 2 million entries by 2000. Products imported into the United States must be cleared first by Customs, whose responsibilities include assessing and collecting revenues from imports, enforcing customs and related laws, and assisting in the administration and enforcement of other provisions of laws and regulations on behalf of 60 federal agencies. Import brokers act as agents for importers and process the information required to bring products into the United States. Brokers can electronically transmit data on their products to Customs through an automated interface with Customs’ Automated Commercial System (ACS). If Customs determines that a product requires FDA approval before being released into the domestic market, such as for regulated food and drugs, the broker is to forward entry information to FDA for review. Under FDA’s manual entry and review process, brokers must submit entry documents (an FDA-701, invoice, and associated certifications) to FDA for each shipment. Using these documents, FDA inspectors at the port of entry decide whether to release the shipment for entry, examine the shipment by inspecting it there, perform paper or laboratory examination of it for possible refusal due to violations of laws and regulations, or detain it until the broker furnishes additional information. Entry documents can range from a few pages to as many as 40 pages depending on the type and volume of goods in a shipment. The time interval between when the broker submits the documents to FDA and when the broker receives a release or examination decision from FDA averages 2 days. As the volume of imports continued to grow, FDA recognized a need to automate and expedite its entry and review process. Also, FDA envisioned that an automated system would provide a method to capture and share historical data to bring uniformity to its enforcement decisions for detecting and preventing “port shopping” by importers. FDA found that because of its heavy workload or less interest in particular products at some ports, some importers tended to use the port of entry that provided them with the best opportunity for receiving FDA approval. In 1987, the FDA Commissioner formed a task force to develop a new automated system as recommended in a contractor-prepared feasibility study. This system, now known as OASIS, was intended to increase the efficiency and effectiveness of FDA’s program for monitoring imported products. 
In general, OASIS was expected to (1) increase the productivity of investigations personnel through automated interfaces with laboratories, brokers, and/or Customs, (2) improve screening of imports by providing suggestions for actions likely to result in discovery of violations, (3) provide faster turnaround for processing of importers' entries and faster and more consistent responses, (4) provide national and district uniformity in processing of entries, and (5) maintain a base of information for generation of reports. OASIS was initially planned to be fully implemented in September 1989. We interviewed FDA and Customs officials in the Washington, D.C., area, and Seattle, Washington, to determine the operational objectives and timeframe for implementing OASIS. We also reviewed systems documentation provided by FDA, such as the system design, functional requirements, capacity analysis, risk assessment, regional contingency plan, implementation schedules, software support contracts, task orders, interagency agreement between FDA and Customs, and security and information resources management policies and procedures. We assessed FDA's efforts to design, develop, and implement OASIS against GAO's executive guide on the best practices of leading private and public organizations for strategic information management, and federal guidelines, such as the Federal Information Processing Standards Publications. In addition, we reviewed the 1994 joint self-assessment report on OASIS, the contractor's cost-benefit analysis, and the System Design Review Committee's report. To monitor the implementation of OASIS, we visited and interviewed officials in FDA district offices and ports of entry in Seattle (system pilot location); Miami, Florida; Buffalo and New York, New York; and Detroit, Michigan. We also conducted telephone interviews with FDA officials in several other district offices. Interviews with FDA import managers and inspectors at these sites provided us with observations and examples of entries processed both manually and electronically at major FDA air, sea, and border ports. We also interviewed Customs officials and import brokers at the sites visited to obtain their perspectives on how the system has improved the import process. Further, we interviewed officials at HHS who were involved with the self-assessment and system design reviews. We also interviewed one of the prior software development contractors and the current contractor regarding their roles and responsibilities for the OASIS project. We performed our work from April 1994 through June 1995, in accordance with generally accepted government auditing standards. We requested official comments on a draft of this report from the Secretary of Health and Human Services on August 10, 1995. As of September 18, 1995, we had not received any comments to include in the final version of this report. The development of OASIS is taking considerably longer than FDA officials expected. As shown in figure 1, after 8 years and three software development contractors, FDA still does not have a fully functional automated import system. The original design, which was called the Import Support and Information System (ISIS), was for a large, nationwide, on-line, real-time, distributed FDA system. This system was modified following its 1991 pilot test and FDA's agreement with Customs to include an automated interface with Customs' ACS.
As shown in figure 1, the modified system, known as OASIS, was pilot tested in Seattle, Washington, in 1992 and expanded to Portland, Oregon, and Blaine, Washington, in 1993. OASIS, as implemented in the Seattle District locations, provides FDA inspectors the ability to (1) receive import entry data electronically from import brokers through an interface with ACS, (2) receive results of preliminary processing against FDA's selectivity criteria screening file, which is installed on ACS, indicating that the shipment "May Proceed," must be "Detained" for sampling, or must be held for "FDA Review," (3) be alerted to potential problem areas with each line item of an entry, make follow-up screening decisions, and transmit these electronically to the broker through the interface with ACS, (4) track actions taken and maintain historical data on all electronic import entries, and (5) eliminate many of the paper transactions among FDA, Customs, and import brokers. In addition, import brokers who interface with ACS receive preliminary and subsequent screening decisions relating to their electronic entries simultaneously with FDA. However, software design problems experienced at the pilot locations made OASIS difficult to use. Such problems included slow response times when receiving and printing electronic data, moving from computer screen to computer screen, or going in and out of other systems while processing entries. These OASIS development problems prompted FDA to assemble a team of information resources management (IRM) representatives from HHS, PHS, and within FDA to pilot a self-assessment tool to analyze risks associated with the development of OASIS. In June 1994, as a result of the self-assessment team's exit briefing on its results and pending actions under way by FDA to replace the expiring OASIS contract, FDA's Director of the Office of Information Resources Management called for the termination of both the development and deployment of all but the front-end portion of OASIS, known as the electronic entry processing system, or EEPS. FDA decided to maintain and refine OASIS in the Seattle District. In its July 1994 report, the self-assessment team concluded that the OASIS project was at high risk for system failure due to the lack of senior-level management involvement, project planning, and basic development processes, as well as system design flaws, an insufficient budget, and a skeleton staff lacking adequate system design and implementation expertise. In contrast to the OASIS functions described previously, EEPS allows FDA inspectors to receive the broker's entry data from ACS, but only allows inspectors and brokers to receive the preliminary admissibility messages of either "May Proceed" or "FDA Review" for the entire entry. It is not capable of processing or transmitting any follow-up line-item decisions from FDA to the brokers. EEPS was deployed to 114 ports between March 1994 and June 1995, with 103 additional ports expected to be automated by the end of 1995. According to import brokers and FDA inspectors we interviewed, even EEPS' limited capability provides them quicker notifications as to the admissibility of imported shipments and reduces the amount of paperwork required from brokers. For "May Proceed" decisions, paper entry documentation is generally eliminated.
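To make the screening flow concrete, the following minimal sketch models the preliminary decisions described above. It is an illustration only, not FDA's actual selectivity logic: the product codes, criteria sets, and function names are invented, and FDA's real criteria file resided on Customs' ACS.

```python
# Toy sketch of OASIS-style preliminary screening; criteria are invented.
MAY_PROCEED, DETAINED, FDA_REVIEW = "May Proceed", "Detained", "FDA Review"

DETAIN_CODES = {"DRUG-UNAPPROVED", "FOOD-PESTICIDE-ALERT"}   # hypothetical
REVIEW_CODES = {"DEVICE-CLASS-III", "FOOD-LOWACID-CANNED"}   # hypothetical

def screen_line_item(product_code: str) -> str:
    """Return a preliminary admissibility message for one line item."""
    if product_code in DETAIN_CODES:
        return DETAINED          # hold for sampling
    if product_code in REVIEW_CODES:
        return FDA_REVIEW        # inspector makes a follow-up decision
    return MAY_PROCEED           # release; no paper documentation needed

def screen_entry_oasis(line_items):
    """Full OASIS behavior: a decision for every line item of an entry."""
    return {code: screen_line_item(code) for code in line_items}

def screen_entry_eeps(line_items):
    """EEPS behavior: one message for the entire entry."""
    decisions = screen_entry_oasis(line_items).values()
    if all(d == MAY_PROCEED for d in decisions):
        return MAY_PROCEED
    return FDA_REVIEW

entry = ["FOOD-SNACK", "FOOD-LOWACID-CANNED"]
print(screen_entry_oasis(entry))  # line-item decisions (full OASIS)
print(screen_entry_eeps(entry))   # "FDA Review" for the whole entry (EEPS)
```

The two entry-level functions mirror the report's distinction: the full OASIS design transmitted follow-up decisions line item by line item, while EEPS could return only a single "May Proceed" or "FDA Review" message for the entire entry.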
For example, during the month of June 1995, FDA reported that of 2,520 brokers who interfaced with Customs' ACS in the Seattle District and EEPS ports, 1,585, or 63 percent, used electronic filing for 178,412 FDA entries, and that 78 percent of the 1,585 electronic filers were not required to submit entry documentation for "May Proceed" entries. Table 1 below compares the traditional manual entry process to EEPS. As recommended in 1994 by the self-assessment team, HHS, PHS, and FDA formed a systems design review committee to determine if (1) the OASIS design adequately meets the user requirements, (2) FDA computer hardware, or platform, is adequate for the system, (3) real-time access is necessary, and (4) telecommunications are adequate. The committee's June 1995 report addressed the first three items. FDA's telecommunications management branch is conducting an agencywide study on the telecommunications and network capacity and capabilities needed. The results of this study are expected in February 1996. The committee's June 1995 report stated that (1) the OASIS system design contains significant deficiencies, (2) the adequacy of the agency platform cannot be determined because certain stress and system load tests have not been performed or documented, and (3) real-time access is not necessary. Although further development and deployment of the OASIS system is on hold, completion of a successful import system remains a major information resources management goal for FDA. We previously reported on FDA's need to address systems development problems and implementation delays, and our current review identified many of the same problems reported by the self-assessment team. In addition, we found that the OASIS project lacked necessary cost and performance information and did not consider some proven best practices of leading organizations that help ensure successful systems development. These problems must be resolved if FDA is to complete its automation of import operations. Beginning in late September 1994 with the award of a new agencywide strategic information systems support contract, FDA began to address some of the systems development process problems identified, but it continued to lack effective senior-level management and direction, as well as a systems project management team with information technology expertise. In addition, FDA has made little progress in implementing basic systems development procedures, including conducting user acceptance testing and a risk assessment. Recent developments include the completion of a system design review, which concluded that OASIS was not ready for national implementation and recommended an immediate reengineering effort. FDA top management did not adequately oversee the OASIS project and did not provide the clear direction and appropriate resources needed to support the project. We found that this situation was largely due to an IRM structure that did not clearly define control and lines of accountability for the OASIS project. In addition, we found that the OASIS project was directed by managers who lacked the systems development training and expertise to successfully design, develop, deploy, and maintain an information system. The Deputy Commissioner for the Office of Management and Systems is both the chief financial officer and the senior IRM official for FDA.
IRM activities on the OASIS project are the responsibility of the Office of Information Resources Management (OIRM) under the Deputy Commissioner for the Office of Management and Systems and the Office of Regulatory Affairs (ORA) under the Deputy Commissioner for the Office of Operations. As shown in figure 2, many FDA offices and divisions have some involvement with the OASIS project. The responsibilities of OIRM include (1) ensuring that the agency’s 5-year strategic plan for acquisition and development of information resources is prepared and implemented, (2) ensuring that the most cost-effective approach is applied when acquiring information technology, and (3) approving acquisitions and ensuring that IRM goals and strategies are achieved. Since the OASIS project began, its planning, design, development, implementation, and contractor acquisition and interaction resided primarily within the divisions of import operations and information systems in ORA. However, ORA’s requests for procurement authority for OASIS were and continue to be reviewed and approved by OIRM. The Deputy Commissioner for Management and Systems told us that since the award of the current strategic information systems contract in September 1994, the Associate Commissioner for OIRM has been charged with providing ORA with continuous technical consultation and scrutiny of all contractor task orders and deliverables prepared under ORA’s direction. The OASIS project manager is the director of the strategic initiatives staff, which is part of ORA, and does not report directly to OIRM officials. In addition, some oversight has been provided at the department level. In accordance with the Paperwork Reduction Act, as amended, the HHS Secretary designated a senior official who is responsible for ensuring agency compliance with and prompt, efficient, and effective implementation of the information policies and IRM responsibilities under the Act. The designated senior official at HHS is the Assistant Secretary for Management and Budget, who has delegated certain authorities—such as for procurement—to agencies within the Public Health Service, including FDA. The HHS Deputy Assistant Secretary for IRM, who reports directly to the designated senior official, is responsible for management and operation of the department’s IRM program. It is at this level that HHS has provided FDA with assistance on both the self-assessment team and system design review committee. The joint FDA/PHS/HHS self-assessment report indicated that FDA senior-level management needed to be closely involved with OASIS due to the visibility of the system and the troubled system development history. The report stated that the Commissioner or other top FDA officials did not receive regularly scheduled progress reports on the project. We found several memoranda dating back to 1989 in which OIRM raised concerns to ORA about the cost, complexity, and lack of well-defined requirements, alternatives, and planning regarding OASIS. Nonetheless, the project continued under the direction of ORA until June 1994, when the OIRM director called for the termination of further development based upon the results of the self-assessment team. It is critical that senior-level oversight of this automation effort be established to ensure that information technology is acquired, used, and managed to improve the performance of FDA’s public health and safety mission, and that responsibility and accountability are improved. 
As discussed in GAO’s May 1994 publication on the best practices of leading private and public organizations for strategic information management, these organizations have found that without senior executives recognizing the value of improving information management, meaningful change is slow and sometimes nearly impossible. As discussed above, ORA administered the day-to-day management of the OASIS project. We found, however, that ORA did not have the systems development expertise in-house to perform these functions. Our review of the experience and qualification statements of OASIS project management showed that the ORA Deputy Associate Commissioner—the senior project official, the project manager, and the project officer did not have any systems development training or experience. The OASIS project manager concurred with our finding in a February 1995 memorandum, which stated that ORA did not have employees with adequate knowledge and experience in life-cycle methodology and related skills, all of which were important to a system of OASIS’ complexity. The memorandum stated that ORA planned to use its current software development and support contractor to address this deficiency in systems development and hardware acquisition expertise. During our review, the self-assessment team recommended in its July 1994 report that ORA request and accept assistance from another FDA component, PHS, or HHS to address deficiencies in staff knowledge. As stated previously, ORA receives oversight from OIRM for task order review and approval, but not day-to-day assistance from this or other sources as recommended. ORA still does not have someone with the system development expertise to oversee the OASIS project and monitor the contractor’s work. A best practice that can lead to improved mission performance is to ensure that skills and knowledge of line and information management professionals are upgraded. Also useful is establishing customer/supplier relationships internally and defining roles between line managers and information management support professionals to maximize management processes. Lastly, the chance of a breakdown between the agency and contractors is great when the agency does not have information management professionals with the needed expertise to assist line management in evaluating and supervising contractor performance. We found that FDA has not been effective in controlling costs or monitoring the progress of OASIS. FDA officials informed us that they did not have a cost accounting system that would enable them to clearly identify the costs of the OASIS project. They said that some of this cost was commingled with other information systems projects. For example, despite our repeated attempts to obtain the systems life-cycle cost for OASIS from its inception through the current fiscal year, FDA did not provide us with cost data until July 1995. This information was prepared by FDA’s contractor and submitted in June 1995, as part of a cost-benefit analysis requested by FDA. According to information contained in the contractor’s report, the OASIS systems development costs were estimated to be $13.8 million from fiscal year 1987 through April 1995. We did not independently verify these estimates. In addition, the agency did not properly account for or match OASIS costs with outcomes to determine if OASIS would meet FDA’s needs within its budget allocation. 
Accurate accounting of all project costs will be crucial since FDA supports legislation that would allow the agency to collect user fees for imports processed through the automated system to offset the costs of developing, deploying, and supporting the system. Also, the importance of an import screening system to FDA's operations and the import community warrants the maintenance of reliable cost and performance information to keep congressional appropriations and oversight committees informed of the status of any systems development effort. ORA officials we interviewed told us that they did not establish any baseline measures to assess current and expected OASIS operational and technical performance. As discussed in GAO's best practices publication, standard performance measurement practices focus on benefits, costs, and risks and, in most cases, include program outcomes, resource consumption, and elapsed time (cycle time) of specific work processes, activities, or transactions. Performance measures act as a common focus, allowing management to target problem areas, highlight successes, and generally increase the rate of performance improvement through enhanced learning. Such measures would allow top management to assess and manage the risk associated with its import automation effort, and to control the trade-offs between continued funding of existing operations and developing new performance capabilities. We found that FDA did not follow sound systems development procedures, such as those outlined in federal guidelines, when developing OASIS because its project management team lacked expertise and training in systems development. Specifically, FDA did not (1) validate its criteria for electronically screening import entries, (2) conduct user acceptance testing, (3) conduct a risk assessment or prepare a security plan to address contingencies or backup procedures to be used in the event of disasters or threats to FDA's computer facilities, equipment, and data, and (4) conduct a cost-benefit analysis. Many of these problems were brought to FDA's attention as early as 1988. The following systems development problems must be resolved if FDA is to avoid continued criticism of its attempts to complete an automated import system. No FDA validation of screening criteria. FDA had not validated the import admissibility screening criteria that reside in Customs' ACS. Validation is essential to ensure that import entries are processed accurately and that potentially unsafe products are properly identified for "FDA Review." OASIS project officials in ORA said that they did not have access to the criteria in ACS and could only validate information contained in the ACS-generated error reports. Moreover, these officials stated that they did not know if Customs corrected all the errors they identified. The joint self-assessment report also concluded that FDA did not have an adequate verification and validation process for its software and documentation. Did not conduct user acceptance testing. The self-assessment report stated that FDA did not have written acceptance criteria or test plans. For example, FDA neither conducted nor participated with Customs in user acceptance testing before or after implementing the ACS interface. ORA's Deputy Associate Commissioner told us that ORA relied on and trusted Customs to ensure that the screening criteria database was functioning as intended. Security plan not developed.
Until recently, FDA had not conducted a risk assessment or developed a disaster recovery plan for EEPS, as required by federal guidelines. In 1992, FDA declared OASIS a "record system" subject to the requirements of the Privacy Act of 1974. Thereafter, FDA considered OASIS a critical-sensitive system. Also, the Computer Security Act of 1987 requires agencies to establish security plans and perform vulnerability assessments for all computer systems that contain sensitive information. In February 1995, FDA issued a risk assessment of EEPS at FDA headquarters and a contingency plan to address backup procedures for the Pacific Region, which runs the regional computer facility in the Seattle District office. However, we found that the risk assessment was incomplete and did not address major portions of EEPS. In addition, the contingency plan was not viable because FDA moved the OASIS processing function from Seattle to the larger processing facility at its headquarters in Rockville, Maryland. FDA does not have a contingency plan for ORA's headquarters computer center. However, it plans to obtain a risk assessment of ORA's information systems and contents through an interagency agreement with the Department of Transportation. As of May 1995, FDA could not tell us when a risk assessment and contingency plan would be performed at FDA headquarters to address security concerns for this mission-critical system. No cost-benefit analysis conducted. At the beginning of our review, we learned that no one had performed a cost-benefit analysis for the OASIS project. This deficiency was also later reported by the self-assessment team. A cost-benefit analysis describes the development and operational costs of each alternative, as well as the nonrecurring (improved system operations and resource utilization) and recurring (operations and maintenance, including personnel) benefits that could be attained through the development of each proposed alternative. Such an analysis is useful to managers, users, and designers for analyzing alternative systems and will be essential to any decisions for further development of an automated import system. ORA officials told us that they did not ask for such an analysis in the past. In February 1995, the current contractor was tasked with conducting a cost-benefit analysis, which was completed in June 1995. However, FDA did not request that the contractor perform an alternatives analysis. The current effort was limited to an analysis of OASIS' historical costs from fiscal year 1987 through April 1995, which were estimated to be $13.8 million, as well as projected costs from May 1995 through fiscal year 2001, which were estimated to be $26.2 million. The contractor also analyzed the costs and benefits of automation as compared to the current manual process. In June 1995, the System Design Review Committee issued its report on OASIS, which concluded that the system is not ready for national implementation because of significant system deficiencies, including inconsistent user interface design and the lack of automated configuration management and version control. Consequently, the committee recommended that a reengineering effort begin immediately to design a system that would incorporate all customers' needs, take advantage of modern technology and the strategic direction in which FDA is heading, and position FDA for the future.
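To illustrate what the omitted alternatives analysis would add, the sketch below compares the net present value of competing alternatives. Only the rough cost magnitudes ($13.8 million spent through April 1995 and $26.2 million projected through fiscal year 2001) come from the report; the yearly cash flows, benefit values, and 7 percent discount rate are invented for the example.

```python
# Minimal net-present-value comparison for an alternatives analysis.
# All yearly cash flows below are hypothetical; only the rough cost
# magnitudes ($13.8M spent to date, $26.2M projected) come from the report.

def npv(cash_flows, rate=0.07):
    """Discount yearly net cash flows (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Net flows in $ millions (benefits minus costs), fiscal years 1996-2001.
alternatives = {
    "continue manual process":    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    "complete OASIS as designed": [-8.0, -7.0, -5.0, 3.0, 7.0, 9.0],
    "reengineer, then automate":  [-4.0, -9.0, -6.0, 5.0, 9.0, 12.0],
}

for name, flows in alternatives.items():
    print(f"{name:28s} NPV = ${npv(flows):6.1f}M")
```

An analysis of this shape is what the report faults FDA for omitting: without it, decision makers cannot weigh further OASIS development against reengineering or against staying with the manual process.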
In a July 10, 1995, meeting with FDA's Deputy Commissioner for the Office of Management and Systems, we were told that FDA will not implement OASIS nationwide and will begin a reengineering effort. In addition, FDA agreed to the recommendations of the committee, as stated in a July 12, 1995, correspondence from the Deputy Commissioner to ORA officials. However, the details of the reengineering effort have not yet been documented, so it is not clear who will lead this effort, what it will involve, and how long it will take. Reengineering is a formidable undertaking that requires an organization's managers and employees to change the way they think and work. For example, after senior management recognizes the need for change and commits to reengineering, it then must direct the effort. Existing business processes should be described and analyzed, and measurable improvement goals should be set. In addition, senior management must also support the reengineering effort by identifying training needs and determining whether outside expertise is necessary. New business processes should then be designed, and the organizational culture, structure, roles, and responsibilities should be changed to support these new processes. Finally, new business processes should be implemented by acquiring and installing new technology or redesigning existing technology to support the new processes. FDA, though, has not yet clearly defined its reengineering effort and how it plans to link this effort to its information technology initiatives. This is critical if FDA is to achieve dramatic changes in overall performance and customer satisfaction. A thorough understanding of the factors that led to FDA's failure over the past 8 years to develop and implement an import system to meet its mission-critical needs is crucial to help ensure that similar problems and obstacles are avoided in the future. As FDA plans its reengineering effort, it is presented with an opportunity to identify and correct its long-standing systems development problems. Because these problems can be attributed to a lack of top management oversight, systems expertise, and reliable cost and performance information, continued attention by FDA and HHS is vital to the success of this automation effort. It is crucial that FDA follow sound systems development procedures, in conjunction with a well-defined reengineering strategy, if it is to successfully implement an import system and achieve its public health and safety mission. We recommend that the Secretary of Health and Human Services direct the Assistant Secretary for Management and Budget and the Commissioner of the Food and Drug Administration to ensure that (1) continuous top management oversight and systems expertise are provided to FDA as it proceeds with its import automation effort; (2) FDA develops and maintains reliable cost and performance information; and (3) FDA follows sound systems development practices, including validating systems software, conducting user acceptance testing, developing a security plan, and conducting a cost-benefit analysis that includes an assessment of alternative systems. We also recommend that the Secretary direct the Assistant Secretary and the Commissioner to clearly define how FDA plans to reengineer its import operations.
At a minimum, FDA should (1) identify and analyze existing business processes and work flows, (2) obtain the necessary technical assistance and training to support its reengineering efforts, and (3) determine new information needs, application system requirements, and technology requirements necessary to support the new business processes. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 15 days from the date of this letter. We will then send copies of this report to the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, the Director of the Office of Management and Budget, and other interested parties. Copies also will be made available to others upon request. This report was prepared under the direction of Patricia T. Taylor, Associate Director. You or your staff can reach me at (202) 512-6252, or Ms. Taylor at (202) 512-5539, if there are any questions on the report. Other major contributors are listed in appendix I. Susan T. Chin, Evaluator-in-Charge | Pursuant to a congressional request, GAO reviewed the Food and Drug Administration's (FDA) progress in implementing its Operational and Administrative System for Import Support (OASIS), focusing on systems development areas that need improvement. GAO found that: (1) although some improvements have been made to import operations, FDA has not completed OASIS after 8 years and about $14 million in system development costs, mainly due to inadequate management oversight; (2) in 1994, FDA determined that OASIS was at a high risk for failure and it should suspend development until it completed a comprehensive system review; (3) FDA has taken an inadequate approach in developing OASIS, resulting in the potential for unsafe products entering the country; (4) FDA completed the comprehensive review of OASIS in June 1995 and determined that OASIS was not ready for national implementation and recommended an immediate reengineering effort; and (5) FDA success in improving OASIS depends on better planning and top management involvement in system design, development, and deployment. |
While the U.S. government supports a wide variety of programs and activities for global food security, it lacks comprehensive data on funding. We found that it is difficult to readily determine the full extent of such programs and activities and to estimate precisely the total amount of funding that the U.S. government as a whole allocates to global food security. In response to our data collection instrument to the 10 agencies, 7 agencies reported providing monetary assistance for global food security programs and activities in fiscal year 2008, based on the working definition we developed for this purpose with agency input. Figure 1 summarizes the agencies' responses on the types of global food security programs and activities and table 1 summarizes the funding levels. (The agencies are listed in order from highest to lowest amount of funding provided.) USAID and USDA reported providing the broadest array of global food security programs and activities. USAID, MCC, Treasury (through its participation in multilateral development institutions), USDA, and State provide the highest levels of funding to address food insecurity in developing countries. In addition, USTDA and DOD provide some food security-related assistance. These 7 agencies reported directing at least $5 billion in fiscal year 2008 to global food security, with food aid accounting for about half of this funding. However, the actual total level of funding is likely greater. The agencies did not provide us with comprehensive funding data due to two key factors. First, a commonly accepted governmentwide operational definition of what constitutes global food security programs and activities has not been developed. An operational definition accepted by all U.S. agencies would enable them to apply it at the program level for planning and budgeting purposes. The agencies also lack reporting requirements to routinely capture data on all relevant funds. Second, some agencies' management systems are inadequate for tracking and reporting food security funding data comprehensively and consistently. Most notably, USAID and State—which both use the Foreign Assistance Coordination and Tracking System (FACTS) database for tracking foreign assistance—failed to include a very large amount of food aid funding in that database. In its initial response to our instrument, USAID, using FACTS, reported that in fiscal year 2008 the agency's planned appropriations for global food security included about $860 million for Food for Peace Title II emergency food aid. However, we noticed a very large discrepancy between the FACTS-generated $860 million and two other sources of information on emergency food aid funding: (1) the $1.7 billion that USAID allocated to emergency food aid from the congressional appropriations for Title II food aid for fiscal year 2008, and (2) about $2 billion in emergency food aid funding reported by USAID in its International Food Assistance Report for fiscal year 2008. USAID officials reported that USAID has checks in place to ensure the accuracy of the data entered by its overseas missions and most headquarters bureaus. However, the magnitude of the discrepancy for emergency food aid, which is USAID's global food security program with the highest funding level, raises questions about the data management and verification procedures in FACTS, particularly with regard to the Food for Peace program.
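A routine cross-source consistency check would surface a discrepancy of this size automatically. The sketch below is illustrative only, not an actual USAID or FACTS procedure; the three funding figures are those cited above, and the 10 percent tolerance is an arbitrary choice.

```python
# Flag programs whose reported funding differs across sources by more
# than a tolerance. Illustrative only; not an actual USAID procedure.

def check_consistency(program, amounts_by_source, tolerance=0.10):
    """Print a warning if the sources disagree by more than `tolerance`."""
    low = min(amounts_by_source.values())
    high = max(amounts_by_source.values())
    spread = (high - low) / high
    if spread > tolerance:
        print(f"{program}: {spread:.0%} spread across sources")
        for source, amount in sorted(amounts_by_source.items()):
            print(f"  {source}: ${amount / 1e9:.2f} billion")

# Fiscal year 2008 Title II emergency food aid, per the three sources cited.
check_consistency("Food for Peace Title II emergency food aid", {
    "FACTS database": 0.86e9,
    "Title II appropriations allocation": 1.7e9,
    "International Food Assistance Report": 2.0e9,
})
```

Run against the figures above, the check reports a spread of more than 50 percent, well beyond any plausible reporting tolerance.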
While the administration is making progress toward finalizing a governmentwide global food security strategy through improved interagency coordination at the headquarters level, its efforts are vulnerable to weaknesses in data and risks associated with the host country-led approach called for in the strategy under development. Two interagency processes established in April 2009—the NSC Interagency Policy Committee on Agriculture and Food Security and the GHFSI working team—are improving headquarters coordination among numerous agencies, as shown in figure 2. The strategy under development is embodied in the Consultation Document issued in September 2009, which is being expanded and as of February 2010 was expected to be released shortly, along with an implementation document and a results framework that will include a plan for monitoring and evaluation. In the fiscal year 2011 Congressional Budget Justification for GHFSI, the administration has identified a group of 20 countries for GHFSI assistance, including 12 countries in sub-Saharan Africa, 4 in Asia, and 4 in the Western Hemisphere. However, the administration's efforts are vulnerable to weaknesses in funding data, and the host country-led approach, although promising, poses some risks. Currently, no single information database compiles comprehensive data on the entire range of global food security programs and activities across the U.S. government. The lack of comprehensive data on current programs and funding levels may impair the success of the new strategy because it deprives decision makers of information on all available resources, actual costs, and a firm baseline against which to plan. Furthermore, the host country-led approach has three key vulnerabilities, as follows: First, the weak capacity of host governments raises questions regarding their ability to absorb significant increases in donor funding for agriculture and food security and to sustain donor-funded projects on their own over time. For example, multilateral development banks have reported relatively low sustainability ratings for agriculture-related projects in the past. In a 2007 review of World Bank assistance to the agricultural sector in Africa, the World Bank Independent Evaluation Group reported that only 40 percent of the bank's agriculture-related projects in sub-Saharan Africa had been sustainable. Similarly, an annual report issued by the International Fund for Agricultural Development's independent Office of Evaluation on the results and impact of the fund's operations between 2002 and 2006 rated only 45 percent of its agricultural development projects satisfactory for sustainability. Second, the shortage of expertise in agriculture and food security at relevant U.S. agencies can constrain efforts to help strengthen host government capacity, as well as review host government efforts and guide in-country activities. For example, the Chicago Council on Global Affairs noted that whereas USAID previously had a significant in-house staff capacity in agriculture, it has lost that capacity over the years and is only now beginning to restore it. The loss has been attributed to the overall declining trend in U.S. assistance for agriculture since the 1990s.
In 2008, three former USAID administrators reported that "the agency now has only six engineers and 16 agriculture experts." According to USAID, a recent analysis of direct hire staff shows that the agency has since increased the number of its staff with technical expertise in agriculture and food security to 79. A USAID official told us that the agency's current workforce plan calls for adding 95 to 114 new Foreign Service officers with technical expertise in agriculture by the end of fiscal year 2012. Third, policy differences between the United States and host governments with regard to agricultural development and food security may complicate efforts to align U.S. assistance with host government strategies. For example, Malawi's strategy of providing subsidized agricultural inputs to farmers runs counter to the U.S. approach of encouraging the development of agricultural markets and linking farmers to those markets. Since 2005-2006, the government of Malawi has implemented a large-scale national program that distributes vouchers to about 50 percent of the country's farmers so that they can purchase agricultural inputs—such as fertilizer, seeds, and pesticides—at highly discounted prices. USAID has supported operations that use targeted vouchers to accelerate short-term relief operations following conflicts or disasters. However, according to USAID, the provision of cheaper fertilizer and seeds does not address the fundamental problem—that poor farmers cannot afford fertilizer on their own and, furthermore, without improvements in irrigation, investments in fertilizer would not pay off in drought years in a country like Malawi, where agriculture is mainly rain-fed. In the face of growing malnutrition worldwide, the international community has established ambitious goals toward halving global hunger, including significant financial commitments to increase aid for agriculture and food security. Given the size of the problem and how difficult it has historically been to address it, this effort will require a long-term, sustained commitment on the part of the international donor community, including the United States. As part of this initiative, and consistent with a prior GAO recommendation, the United States has committed to harnessing the efforts of all relevant U.S. agencies in a coordinated and integrated governmentwide approach. The administration has made important progress toward realizing this commitment, including providing high-level support across multiple government agencies. However, the administration's efforts to develop an integrated U.S. governmentwide strategy for global food security have two key vulnerabilities: (1) the lack of readily available comprehensive data across agencies and (2) the risks associated with the host country-led approach. Given the complexity and long-standing nature of these concerns, there should be no expectation of quick and easy solutions. Only long-term, sustained efforts by countries, institutions, and all relevant entities to mitigate these concerns will greatly enhance the prospects of fulfilling the international commitment to halve global hunger. In the report issued today, we recommended that the Secretary of State (1) work with the existing NSC Interagency Policy Committee to develop an operational definition of food security that is accepted by all U.S.
agencies; establish a methodology for consistently reporting comprehensive data across agencies; and periodically inventory the food security-related programs and associated funding for each of these agencies; and (2) work in collaboration with relevant agency heads to delineate measures to mitigate the risks that the host country-led approach poses to the successful implementation of the forthcoming governmentwide global food security strategy. Four agencies—State, Treasury, USAID, and USDA—provided written comments on our report and generally concurred with our recommendations. With regard to our first recommendation, State and USAID agreed that developing an operational definition of food security that is accepted by all U.S. agencies would be useful. With regard to our second recommendation, the four agencies noted that the administration recognizes the risks associated with a host country-led approach and that they are taking actions to mitigate these risks. Madam Chairwoman, this concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. Should you have any questions about this testimony, please contact Thomas Melito at (202) 512-9601, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Phillip J. Thomas (Assistant Director), Joy Labez, Sada Aksartova, Carol Bray, Ming Chen, Debbie Chung, Martin De Alteriis, Brian Egger, Etana Finkler, Amanda Hinkle, and Ulyana Panchishin. | Global hunger continues to worsen despite world leaders' 1996 pledge--reaffirmed in 2000 and 2009--to halve hunger by 2015. To reverse this trend, in 2009 major donor countries pledged about $22.7 billion in a 3-year commitment to agriculture and food security in developing countries, of which $3.5 billion is the U.S. share. This testimony addresses (1) the types and funding of food security programs and activities of relevant U.S. government agencies and (2) progress in developing an integrated U.S. governmentwide strategy to address global food insecurity and the strategy's potential vulnerabilities. This is based on a new GAO report being released at today's hearing (GAO-10-352). The U.S. government supports a wide variety of programs and activities for global food security, but lacks readily available comprehensive data on funding. In response to GAO's data collection instrument to 10 agencies, 7 agencies reported such funding for global food security in fiscal year 2008 based on the working definition GAO developed for this exercise with agency input. USAID and USDA reported the broadest array of programs and activities, while USAID, the Millennium Challenge Corporation, Treasury, USDA, and State reported providing the highest levels of funding for global food security. The 7 agencies together directed at least $5 billion in fiscal year 2008 to global food security, with food aid accounting for about half of that funding. However, the actual total is likely greater.
GAO's estimate does not account for all U.S. government funds targeting global food insecurity because the agencies lack (1) a commonly accepted governmentwide operational definition of global food security programs and activities as well as reporting requirements to routinely capture data on all relevant funds, and (2) data management systems to track and report food security funding comprehensively and consistently. The administration is making progress toward finalizing a governmentwide global food security strategy--expected to be released shortly--but its efforts are vulnerable to data weaknesses and risks associated with the strategy's host country-led approach. The administration has established interagency coordination mechanisms at headquarters and is finalizing an implementation document and a results framework. However, the lack of comprehensive data on programs and funding levels may deprive decision makers of information on available resources and a firm baseline against which to plan. Furthermore, the host country-led approach, although promising, is vulnerable to (1) the weak capacity of host governments, which can limit their ability to sustain donor-funded efforts; (2) a shortage of expertise in agriculture and food security at U.S. agencies, which could constrain efforts to help strengthen host government capacity; and (3) policy differences between host governments and donors, including the United States, which may complicate efforts to align donor interventions with host government strategies. |
USCIS has reduced TNCs from about 8 percent for the period June 2004 through March 2007 to about 2.6 percent in fiscal year 2009. As shown in figure 1, in fiscal year 2009, about 2.6 percent, or over 211,000, of newly hired employees received either a SSA or USCIS TNC, including about 0.3 percent who were determined to be work eligible after they contested a TNC and resolved errors or inaccuracies in their records, and about 2.3 percent, or about 189,000, who received a final nonconfirmation because their employment eligibility status remained unresolved. For the approximately 2.3 percent who received a final nonconfirmation, USCIS was unable to determine how many of these employees (1) were authorized employees who did not take action to resolve a TNC because they were not informed by their employers of their right to contest the TNC, (2) independently decided not to contest the TNC, or (3) were not eligible to work. USCIS has reduced TNCs and increased E-Verify accuracy by, among other things, expanding the number of databases that E-Verify can query and instituting quality control procedures to screen for data entry errors. However, erroneous TNCs continue to occur, in part because of inaccuracies and inconsistencies in how personal information is recorded on employee documents, in government databases, or both. Some actions have been taken to address name-related TNCs, but more could be done. Specifically, USCIS could better position employees to avoid an erroneous TNC by disseminating information to employees on the importance of providing consistent name information and how to record their names consistently. In our December 2010 report, we recommended that USCIS disseminate information to employees on the potential for name mismatches to result in erroneous TNCs and how to record their names consistently. USCIS concurred with our recommendation and outlined actions to address it. For example, USCIS commented that in November 2010 it began to distribute the U.S. Citizenship Welcome Packet at all naturalization ceremonies to advise new citizens to update their records with SSA. USCIS also commented that it has commissioned a study, to be completed in the third quarter of fiscal year 2011, to determine how to enhance its name-matching algorithms. USCIS's actions for reducing the likelihood of name-related erroneous TNCs are useful steps, but they do not fully address the intent of the recommendation because they do not provide specific information to employees on how to prevent a name-related TNC. See our December 2010 report for more details. In addition, identity fraud remains a challenge because employers may not be able to determine whether employees are presenting genuine identity and employment eligibility documents that are borrowed or stolen. E-Verify also cannot detect cases in which an unscrupulous employer assists unauthorized employees. USCIS has taken actions to address fraud, most notably with the fiscal year 2007 implementation of the photo matching tool for permanent residency cards and employment authorization documents and the September 2010 addition to the matching tool of passport photographs. Although the photo tool has some limitations, it can help reduce some fraud associated with the use of genuine documents in which the original photograph is substituted for another.
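The report does not describe USCIS's name-matching algorithms, so the sketch below is only a toy illustration of the underlying problem: exact comparison fails on hyphenation, name order, or a missing middle name, while even modest normalization tolerates those differences. All names and rules here are invented.

```python
# Toy illustration of why inconsistent name records cause mismatches.
# These normalization rules are invented; they are not USCIS's algorithm.

def normalize(name: str) -> frozenset:
    """Reduce a name to a comparable set of lowercase name parts."""
    parts = name.lower().replace("-", " ").replace(",", " ").split()
    return frozenset(parts)

def names_match(submitted: str, on_record: str) -> bool:
    """Match if the shorter name's parts are a subset of the longer one's."""
    a, b = normalize(submitted), normalize(on_record)
    return a <= b or b <= a

# Exact comparison fails on all three pairs; normalized matching passes.
pairs = [
    ("Maria Garcia-Lopez", "Maria Garcia Lopez"),    # hyphenation
    ("Garcia Lopez, Maria", "Maria Garcia Lopez"),   # name order
    ("Maria Garcia Lopez", "Maria E. Garcia Lopez"), # missing middle initial
]
for submitted, on_record in pairs:
    print(f"{submitted!r} vs {on_record!r}: "
          f"exact={submitted == on_record}, "
          f"normalized={names_match(submitted, on_record)}")
```

The design trade-off is that looser matching reduces erroneous TNCs but also tolerates more variation in fraudulent records, which is one reason matching changes are typically studied before deployment, as with the algorithm study USCIS commissioned.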
To help combat identity fraud, USCIS is also seeking to obtain driver's license data from states and planning to develop a program that would allow victims of identity theft to "lock" their Social Security numbers within E-Verify until they need them to obtain employment authorization. Combating identity fraud through the use of biometrics, such as fingerprint or facial recognition, has been included in proposed legislation before Congress as an element of comprehensive immigration reform, but implementing a biometric system has its own set of challenges, including those associated with cost and civil liberties. Resolving these issues will be important if this technology is to be effectively implemented in combating identity fraud in the employment verification process. An effective employment authorization system requires a credible worksite enforcement program to ensure employer compliance with applicable immigration laws; however, USCIS is challenged in ensuring employer compliance with E-Verify requirements for several reasons. For example, USCIS cannot monitor the extent to which employers follow program rules because USCIS does not have a presence in employers' workplaces. USCIS is further limited by its existing technology infrastructure, which provides limited ability to analyze patterns and trends in the data that could be indicative of employer misuse of E-Verify. USCIS has minimal avenues for recourse if employers do not respond to or remedy noncompliant behavior after a contact from USCIS compliance staff because it has limited authority to investigate employer misuse and no authority to impose penalties against such employers, other than terminating those who knowingly use the voluntary system for an unauthorized purpose. For enforcement action against violations of immigration laws, USCIS relies on U.S. Immigration and Customs Enforcement (ICE) to investigate, sanction, and prosecute employers. However, ICE has reported that it has limited resources to investigate and sanction employers that knowingly hire unauthorized workers or that knowingly violate E-Verify program rules. Instead, according to senior ICE officials, ICE agents seek to maximize limited resources by applying risk assessment principles to worksite enforcement cases and focusing on detecting and removing unauthorized workers from critical infrastructure sites. USCIS has taken actions to institute safeguards for the privacy of personal information for employees who are processed through E-Verify, but has not established mechanisms for employees to identify and access personal information maintained by DHS that may lead to an erroneous TNC, or for E-Verify staff to correct such information. To safeguard the privacy of personal information for employees who are processed through E-Verify, USCIS has addressed the Fair Information Practice Principles, which are the basis for DHS's privacy policy. For example, USCIS published privacy notices in 2009 and 2010 that defined parameters, including setting limits on DHS's collection and use of personal information for the E-Verify program. Notwithstanding the efforts made by USCIS to address privacy concerns, employees are limited in their ability to identify and access personal information maintained by DHS that may lead to an erroneous TNC. In our December 2010 report, we recommended that USCIS develop procedures to enable employees to access personal information and correct inaccuracies or inconsistencies in such information within DHS databases.
USCIS concurred and identified steps that it is taking to address this issue, such as developing a pilot program to assist employees receiving TNCs to request a records update and referring individuals who receive a TNC to local USCIS or U.S. Customs and Border Protection offices and ports of entry to correct records when inconsistent or inaccurate information is identified. In part to address this recommendation, in March 2011, USCIS began implementing a Self-Check program to allow individuals to check their own work authorization status against SSA and DHS databases prior to applying for a job. We recognize that these efforts may be a step in the right direction, but they do not fully respond to our recommendation. This is because, among other things, USCIS does not have operating procedures in place for USCIS staff to explain to employees what personal information produced the TNC or what specific steps they should take to correct the information. We encourage USCIS to continue its efforts to develop procedures enabling employees to access and correct inaccurate and inconsistent personal information in DHS databases. USCIS and SSA have taken actions to prepare for possible mandatory implementation of E-Verify for all employers nationwide by addressing key practices for effectively managing E-Verify system capacity and availability and coordinating with each other in operating E-Verify. However, USCIS and SSA face challenges in accurately estimating E-Verify costs. Our analysis showed that USCIS's E-Verify estimates partially met three of four characteristics of a reliable cost estimate and minimally met one characteristic. As a result, we found that USCIS is at increased risk of not making informed investment decisions, understanding system affordability, and developing justifiable budget requests for future E-Verify use and potential mandatory implementation of it. To ensure that USCIS has a sound basis for making decisions about resource investments for E-Verify and securing sufficient resources, in our December 2010 report, we recommended that the Director of USCIS ensure that a life-cycle cost estimate for E-Verify is developed in a manner that reflects the four characteristics of a reliable estimate, consistent with best practices. USCIS concurred, and senior program officials told us that USCIS, among other things, has contracted with a federally funded research and development center to develop an independent cost estimate of the life-cycle costs of E-Verify to better comply with our cost-estimating guidance. Our analysis showed that SSA's E-Verify estimates substantially met three of four characteristics of a reliable cost estimate. However, we found that SSA's cost estimates are partially credible because SSA may not be able to provide assurance to USCIS that it can provide the required level of support for E-Verify operations if it experiences cost overruns within any one fiscal year. In our December 2010 report, we recommended that the Commissioner of SSA assess the risk around SSA's E-Verify workload estimate, in accordance with best practices, to ensure that SSA can accurately project costs associated with its E-Verify workload and provide the required level of support to USCIS and E-Verify operations. SSA did not concur, and stated that it assesses the risk around its workload cost estimates and, if E-Verify were to become mandatory, SSA would adapt its budget models and recalculate estimated costs based on the new projected E-Verify workload volume.
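The quantitative risk and uncertainty analysis that GAO's cost-estimating best practices call for, and that the next paragraph notes SSA does not perform, is typically a statistical simulation around the point estimate. The following minimal Monte Carlo sketch uses invented workload and cost distributions; it is not SSA's budget model.

```python
# Generic Monte Carlo sketch of a cost risk and uncertainty analysis.
# All distributions and parameters are invented; this is not SSA's model.
import random

def simulate_annual_cost():
    """Draw one plausible annual E-Verify support cost, in $ millions."""
    queries = random.triangular(6e6, 12e6, 8e6)           # uncertain workload
    cost_per_query = random.triangular(0.20, 0.60, 0.35)  # uncertain $/query
    fixed = random.gauss(10.0, 1.5)                       # fixed costs, $M
    return fixed + queries * cost_per_query / 1e6

random.seed(1)
draws = sorted(simulate_annual_cost() for _ in range(10_000))
median = draws[len(draws) // 2]              # median as the point estimate
p80 = draws[int(0.80 * len(draws))]          # 80th-percentile outcome

print(f"median annual cost estimate: ${median:.1f}M")
print(f"80th-percentile cost:        ${p80:.1f}M")
print(f"implied risk reserve:        ${p80 - median:.1f}M")
```

The gap between the median and a high-percentile outcome is what quantifies the risk of within-year cost overruns and sizes any needed reserve.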
As discussed in our December 2010 report, however, SSA does not conduct a risk and uncertainty analysis that uses statistical models to quantitatively determine the extent of variability around its cost estimate or identify the limitations associated with the assumptions used to create the estimate. Thus, we continue to believe that SSA should adopt this best practice for estimating risks to help it reduce the potential for experiencing cost overruns for E-Verify. Chairman Johnson, Ranking Member Becerra, and Members of the Subcommittee, this concludes my prepared statement. I will be pleased to respond to any questions you may have. For further information regarding this testimony, please contact Richard M. Stana at (202) 512-8777 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Evi Rezmovic, Assistant Director; Sara Margraf; and Michelle Woods. Additionally, key contributors to our December 2010 report include Blake Ainsworth, David Alexander, Tonia Brown, Frances Cook, Marisol Cruz, John de Ferrari, Julian King, Danielle Pakdaman, David Plocher, Karen Richey, Robert Robinson, Douglas Sloane, Stacey Steele, Desiree Cunningham, Vanessa Taylor, Teresa Tucker, and Ashley Vaughan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony discusses the E-Verify program, which provides employers a tool for verifying an employee's authorization to work in the United States. The opportunity for employment is one of the most powerful magnets attracting immigrants to the United States. According to the Pew Hispanic Center, as of March 2010, approximately 11.2 million unauthorized immigrants were living in the country, and an estimated 8 million of them, or about 70 percent, were in the labor force. Congress, the administration, and some states have taken various actions to better ensure that those who work here have appropriate work authorization and to safeguard jobs for authorized employees. Nonetheless, opportunities remain for unauthorized workers to fraudulently obtain employment by using borrowed or stolen documents and for unscrupulous employers to hire unauthorized workers. Immigration experts have noted that deterring illegal immigration requires, among other things, a more reliable employment eligibility verification process and a more robust worksite enforcement capacity. E-Verify is a free, largely voluntary, Internet-based system operated by the Verification Division of the Department of Homeland Security's U.S. Citizenship and Immigration Services (USCIS) and the Social Security Administration (SSA). The goals of E-Verify are to (1) reduce the employment of individuals unauthorized to work, (2) reduce discrimination, (3) protect employee civil liberties and privacy, and (4) prevent undue burden on employers. 
Pursuant to a 2007 Office of Management and Budget directive, all federal agencies are required to use E-Verify on their new hires and, as of September 2009, certain federal contractors and subcontractors are required to use E-Verify for newly hired employees working in the United States as well as existing employees working directly under the contract. A number of states have also mandated that some or all employers within the state use E-Verify on new hires. From October 1, 2010, through April 5, 2011, E-Verify processed approximately 7.8 million queries from nearly 258,000 employers. In an August 2005 report and June 2008 testimony on E-Verify, we noted that USCIS faced challenges in detecting identity fraud and ensuring employer compliance with the program's rules. We highlighted some of the challenges USCIS and SSA faced in reducing instances of erroneous tentative nonconfirmations (TNC), or situations in which work-authorized employees are not automatically confirmed by E-Verify. We also noted that mandatory implementation of E-Verify would place increased demands on USCIS's and SSA's resources. This testimony is based primarily on a report we issued in December 2010 and provides updates to the challenges we noted in our 2005 report and 2008 testimony. The statement, as requested, highlights findings from that report and discusses the extent to which (1) USCIS has reduced the incidence of TNCs and E-Verify's vulnerability to fraud, (2) USCIS has provided safeguards for employees' personal information, and (3) USCIS and SSA have taken steps to prepare for mandatory E-Verify implementation. Our December 2010 report also includes a discussion of the extent to which USCIS has improved its ability to monitor and ensure employer compliance with E-Verify program policies and procedures. USCIS has reduced TNCs from about 8 percent for the period June 2004 through March 2007 to about 2.6 percent in fiscal year 2009. In fiscal year 2009, about 2.6 percent, or over 211,000, of newly hired employees received either an SSA or USCIS TNC, including about 0.3 percent who were determined to be work eligible after they contested a TNC and resolved errors or inaccuracies in their records, and about 2.3 percent, or about 189,000, who received a final nonconfirmation because their employment eligibility status remained unresolved. For the approximately 2.3 percent who received a final nonconfirmation, USCIS was unable to determine how many of these employees (1) were authorized employees who did not take action to resolve a TNC because they were not informed by their employers of their right to contest the TNC, (2) independently decided not to contest the TNC, or (3) were not eligible to work. USCIS has taken actions to institute safeguards for the privacy of personal information for employees who are processed through E-Verify, but has not established mechanisms for employees to identify and access personal information maintained by DHS that may lead to an erroneous TNC, or for E-Verify staff to correct such information. To safeguard the privacy of personal information for employees who are processed through E-Verify, USCIS has addressed the Fair Information Practice Principles, which are the basis for DHS's privacy policy. For example, USCIS published privacy notices in 2009 and 2010 that defined parameters, including setting limits on DHS's collection and use of personal information for the E-Verify program. 
USCIS and SSA have taken actions to prepare for possible mandatory implementation of E-Verify for all employers nationwide by addressing key practices for effectively managing E-Verify system capacity and availability and coordinating with each other in operating E-Verify. However, USCIS and SSA face challenges in accurately estimating E-Verify costs. Our analysis showed that USCIS's E-Verify estimates partially met three of four characteristics of a reliable cost estimate and minimally met one characteristic. As a result, we found that USCIS is at increased risk of not making informed investment decisions, understanding system affordability, and developing justifiable budget requests for future E-Verify use and potential mandatory implementation of it. To ensure that USCIS has a sound basis for making decisions about resource investments for E-Verify and securing sufficient resources, in our December 2010 report, we recommended that the Director of USCIS ensure that a life-cycle cost estimate for E-Verify is developed in a manner that reflects the four characteristics of a reliable estimate consistent with best practices. USCIS concurred and senior program officials told us that USCIS, among other things, has contracted with a federally funded research and development center to develop an independent cost estimate of the life-cycle costs of E-Verify to better comply with our cost-estimating guidance. |
NIH funds extramural research primarily through grants. In 2015, NIH funded approximately 50,000 research projects, totaling over $22 billion to research organizations, through grants and cooperative agreements for extramural research. Of the six types of organizations conducting extramural research, NIH provided approximately $17 billion of funding to universities in fiscal year 2015. See figure 1 for a breakout of direct and indirect cost reimbursements received by these research organizations that were funded through grants and cooperative agreements. NIH generally reimburses for both direct and indirect costs incurred by the research organizations conducting extramural grant research. Direct costs are specifically attributed to research projects and include, for example, a researcher’s salary, equipment cost, and travel. Indirect costs represent an organization’s general support expenses that cannot be attributed to a specific research project or function. They include, for example, administrative staff salaries, building utilities, and library operations. Because indirect costs cannot be specifically attributed to a particular research project, they are allocated via an indirect cost rate that is applied to certain direct costs for each awarded grant. NIH then uses the negotiated indirect cost rate to reimburse indirect cost expenses to the organization. In fiscal year 2015, indirect costs represented approximately $6.3 billion (28 percent) of NIH’s grants and cooperative agreement award totals. Each research organization that applies for NIH funding develops a proposed indirect cost rate and subsequently negotiates with its designated cognizant agency to set an indirect cost rate used for reimbursement. Because many research organizations perform research for multiple federal agencies, OMB guidance specifies that the designated cognizant agency for a particular research organization is responsible for negotiating and approving indirect cost rate proposals on behalf of all other federal agencies. As noted above, OMB and HHS have designated three primary cognizant agencies to negotiate indirect cost rates for research organizations that receive NIH-funded grants: (1) CAS, (2) NIH-DFAS, and (3) ONR. For fiscal year 2014 NIH-funded grants, CAS negotiated rates for approximately 460 universities, 150 research institutes, and 90 hospitals; NIH-DFAS negotiated rates for approximately 190 for-profit organizations; and ONR negotiated rates for 25 universities and 5 nonprofit organizations. The cognizant agencies are responsible for ensuring that the negotiated indirect cost rates comply with OMB guidance and the FAR, as applicable. A research organization’s proposed indirect cost rate is essentially the ratio of its total indirect costs (after adjustments) to its total direct costs related to all of the research organization’s grants for a particular time period. To calculate its indirect cost rate, a research organization divides its total claimed indirect costs (e.g., equipment and maintenance) across the organization by its total direct costs (e.g., researcher’s salary and travel), referred to as the distribution base, across all of the research organization’s grants. The resulting percentage is the proposed indirect cost rate. This proposed rate is then negotiated with the cognizant agency until they agree upon a negotiated indirect cost rate. 
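To make the arithmetic concrete, the following minimal sketch works through the ratio just described and previews the allocation step discussed next. All dollar amounts, grant names, and the use of Python are invented for illustration and do not reflect any particular organization's proposal; the report's own worked example appears in figure 2.

```python
# Hypothetical proposed indirect cost rate: total claimed indirect costs
# divided by the distribution base (total direct costs across all grants).
total_indirect_costs = 4_500_000  # e.g., administrative salaries, utilities, library operations
distribution_base = 15_000_000    # e.g., researcher salaries, equipment, travel
proposed_rate = total_indirect_costs / distribution_base
print(f"Proposed indirect cost rate: {proposed_rate:.1%}")  # 30.0%

# Allocation step (described in the next paragraph): each grant's direct
# costs multiplied by the negotiated rate, assumed here to equal the
# proposed rate for simplicity.
for grant, direct in {"Grant A": 2_000_000, "Grant B": 500_000}.items():
    indirect = direct * proposed_rate
    print(f"{grant}: direct ${direct:,}, allocated indirect ${indirect:,.0f}")
```

A grant that contributes a larger share of the distribution base thus absorbs a proportionally larger share of the indirect costs.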
To determine how much of the research organization’s indirect costs should be allocated to each research grant, each grant’s direct cost amount is multiplied by the negotiated indirect cost rate. In this way, if a particular research grant contributes a higher percentage of the organization’s direct costs, it will share in a proportionally higher percentage of the indirect costs as well. See figure 2 for an example calculation of an indirect cost rate and allocation at a hypothetical research organization. This example presumes that the cognizant agency approves the proposed rate as the negotiated rate without further adjustment. The indirect cost rate-setting process begins when a research organization submits an indirect cost rate proposal with supporting documentation to its cognizant agency based on a combination of historical and estimated costs. Examples of supporting documentation include proposals, audited financial statements, Single Audit reports, and certification of indirect cost rates for universities and for-profit organizations. CAS and NIH-DFAS review proposals, whereas ONR generally sends proposals to the Defense Contract Audit Agency (DCAA) to be audited. When reviewing a proposal, the cognizant agency is to verify the mathematical accuracy of the rates proposed, confirm that unallowable costs have been excluded, reconcile the cost proposal to the audited financial statements, and determine the reasonableness of the proposed costs. After the proposal has been reviewed or audited, the cognizant agency and the research organization negotiate and come to an agreement on the rate. The negotiated rate is then documented in a formal indirect cost rate agreement to be applied to all grants going forward. Generally, the negotiated rate is set for a 1-to-4-year time frame. This process is illustrated in figure 3. The cognizant agencies are required to negotiate the indirect cost rates in accordance with applicable federal guidance and regulations. OMB Circular No. A-21 (Cost Principles for Educational Institutions); OMB Circular No. A-122 (Cost Principles for Non-Profit Organizations); OMB’s Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (Uniform Guidance); and FAR Part 31 (Contract Cost Principles and Procedures, applicable to for-profit organizations) establish the principles for defining, calculating, and negotiating indirect cost rates for applicable grants. Specifically, the guidance and regulations describe classification and types of allowable indirect costs; methods of allocating such costs; reasonableness of claimed costs; exclusions and descriptions of unallowable cost elements, such as alcohol and bad debts; and guidelines for establishing indirect cost rates. GAO’s A Framework for Managing Fraud Risks in Federal Programs identifies leading practices to aid program managers in managing fraud risks. For example, the framework states that management should design and implement data analytics as a control activity to prevent and detect fraud. Further, Standards for Internal Control in the Federal Government provides the overall framework for establishing and maintaining internal control across the federal government and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. 
Standards for Internal Control in the Federal Government states that internal controls comprise the plans, methods, and procedures used to meet missions, goals, and objectives. Management is responsible for developing detailed internal guidance and practices to fit the agency’s operations and ensuring that they are integral to operations. GAO and ONR’s internal review team have each issued one report in recent years with recommendations related to the indirect cost rate-setting processes for the cognizant agencies. GAO reviewed DOD’s processes and procedures related to indirect costs for research and made one recommendation. During its review, GAO found a wide variation in indirect cost rates at universities receiving DOD funding because of inconsistencies in the rate-setting and reimbursement processes. The report concluded that with inconsistencies in rate-setting and reimbursement processes and weaknesses in oversight methods, DOD lacks assurance that it is reimbursing indirect costs appropriately. GAO’s recommendation directed the Secretary of Defense to assess the current level of audit coverage for monitoring DOD’s indirect cost reimbursement for universities and determine what level is sufficient and whether to expand use of closeout and other audits to oversee compliance. ONR’s internal review office assessed the effectiveness of its Indirect Cost Branch’s processes. While the overall rating was satisfactory, ONR’s review office made one recommendation related to the performance standards and procurement processes and one recommendation to complete a staffing analysis. ONR implemented GAO’s recommendation, but ONR’s two internal review recommendations remain unimplemented. Under OMB guidance, the cognizant agencies are responsible for reviewing, negotiating, and approving indirect cost rate proposals on behalf of all other federal agencies. Consequently, NIH relies on the cognizant agencies to design adequate internal controls over their processes for negotiating indirect cost rates with research organizations that support the various types of NIH extramural research programs. Since NIH reimburses the indirect costs for NIH grants based on the negotiated rates, the controls that the cognizant agencies have designed are essential to help protect NIH funds against fraud, waste, and abuse in the indirect cost rate-setting process. Based on our review of OMB guidance and the FAR, we identified controls that are key to preventing fraud, waste, and abuse in the indirect cost rate-setting process. These controls included steps for the negotiator to (1) determine the allowability, allocability, and reasonableness of proposed indirect costs and assess a research organization’s methods for allocating such costs to federally funded research grants; (2) determine the composition of the distribution base; and (3) maintain sufficient documentation to support the negotiation and calculation of the indirect cost rates. Our review determined that while the three cognizant agencies had designed some controls for setting indirect cost rates, deficiencies in the design of these controls could result in the waste of federal resources. 
The deficiencies we identified are as follows: internal guidance at all three cognizant agencies was not updated to reflect current OMB guidance or changes in agency requirements, such as documentation requirements or formalizing agency internal guidance; ONR’s internal guidance lacked adequate review procedures over DCAA’s advisory audit function when a DCAA audit was significantly delayed or resulted in a qualified opinion or when DCAA rescinded a previously issued opinion; internal guidance at all three cognizant agencies lacked detailed instructions to supervisors on their review responsibilities over the indirect cost rate process; CAS and ONR had not developed internal guidance addressing differences in negotiating indirect cost rates with certain types of research organizations; and ONR and NIH-DFAS had not developed mechanisms to track key milestones for the indirect cost rate-setting process. Standards for Internal Control in the Federal Government states that management should develop internal guidance to help ensure that management directives are carried out and effective and efficient control activities are implemented to accomplish the agency’s objectives. The standards also state that information should be recorded and communicated to staff within a time frame that enables the staff to carry out their responsibilities. We found that CAS and ONR had not updated their internal guidance to reflect a majority of the federal regulations issued as the Uniform Guidance, which became effective for grants awarded on or after December 26, 2014. Working with the Council on Financial Assistance Reform, OMB consolidated its grants management circulars into the Uniform Guidance to streamline its guidance, promote consistency among grantees, and reduce administrative burden on nonfederal entities. New requirements include, for example, (1) the option for research organizations that currently have a negotiated rate to apply for a one-time extension of that rate for a period of up to 4 more years, (2) the option for research organizations that have never negotiated an indirect cost rate to use a de minimis indirect cost rate of 10 percent in lieu of negotiating an indirect cost rate with the cognizant agency, and (3) additional provisions for administrative costs to be counted as direct costs when allocable to a specific grant. CAS officials stated that they issued a memo in July 2015 to the negotiators providing guidance covering one of the new Uniform Guidance requirements, which allows for a one-time rate extension. However, they have not updated their internal guidance to reflect other new applicable requirements from the Uniform Guidance. According to officials at CAS and ONR, they were waiting for OMB to finalize the corresponding technical corrections and frequently asked questions and thus had not updated their internal guidance to reflect the changes in the Uniform Guidance as of June 2016. Further, CAS officials stated that in addition to waiting for technical corrections and frequently asked questions to be finalized, they were also waiting for HHS’s Office of Grants Policy, Oversight and Evaluation to update the HHS Grants Policy Statement to reflect the changes in the Uniform Guidance. The Grants Policy Statement covers indirect cost policy for HHS grants and has not been updated since 2007. 
However, OMB officials stated that in meetings held in 2015 with the cognizant agencies, they requested that the cognizant agencies update their internal guidance to reflect the requirements of the Uniform Guidance. Figure 4 displays a timeline of key milestones during the implementation process of the Uniform Guidance. If they do not incorporate the new Uniform Guidance and its applicable regulatory requirements in their internal guidance, CAS and ONR risk not properly and consistently negotiating indirect cost rates—potentially leading to waste of government resources. CAS and ONR as well as research organizations stand to benefit from incorporating elements of the Uniform Guidance, such as ONR extending negotiated rates for up to 4 years and both allowing the use of a de minimis indirect cost rate of 10 percent. These two options reduce administrative burden to both parties, making the process more efficient and potentially reducing the waste of government resources. Also, while OMB guidance has allowed for charging salaries of administrative and clerical staff as a direct cost when appropriate and when certain conditions in the guidance are met, the Uniform Guidance has clarified the existing conditions and added new requirements describing when administrative costs can be charged as direct. For example, a new requirement states that costs must now be explicitly included in the budget or have the prior written approval of the federal awarding agency before they can be charged as direct costs. Without updated internal guidance to reflect those provisions, CAS and ONR may incorrectly negotiate this element as direct in cases where it does not meet the conditions outlined in federal guidance. Further, we found that the cognizant agencies changed certain agency requirements, such as documentation requirements, when reviewing indirect cost rate proposals, but did not update their internal guidance to reflect such changes. For example: Individual CAS field offices have established checklists, which provide procedural steps for the negotiators to follow and acknowledge completion of during their review of the research organization’s indirect cost proposal. These checklists include steps such as reconciling the proposal to the audited financial statements, confirming that the rates are mathematically accurate, verifying the consistency of the distribution base with prior years, and confirming that unallowable expenses have been removed from the indirect costs claimed. We found that although CAS considers use of these checklists to be a control over the negotiation process, CAS has not updated its internal guidance to require negotiators to use a checklist or established a standardized checklist for all offices to use when negotiating indirect cost rates. Our walk-throughs of CAS case files determined that field offices have developed their own checklists based on their interpretations of negotiation requirements found in CAS’s internal guidance, and that these interpretations were not always consistent. Although these checklists contained similar review steps that the negotiators were to perform, we found differences in instructions for performing trend analyses and reviewing the indirect and direct costs claimed for allowable and unallowable costs, which are two key controls for minimizing the potential for fraud, waste, and abuse. 
Without a standardized checklist and agency-wide internal guidance instructing negotiators to use the standardized checklist during negotiation, management cannot reasonably assure that negotiators and supervisors are consistently and fairly negotiating indirect cost rates across all CAS field offices. According to CAS officials, they are in the process of finalizing review checklists for nonprofit organizations, which include research institutes and universities using the simplified proposal method, and they expect the checklists to be implemented by the end of fiscal year 2016. For NIH-DFAS, we were unable to determine when primary guidance was last updated because NIH-DFAS did not have formalized internal guidance with official dates and signatures. Its internal guidance lacked key characteristics—such as a policy number, purpose of the policy, effective date, and approving official—that are normally included in formal policies and procedures. NIH-DFAS officials acknowledged that their internal guidance did not contain the key characteristics of internal control. These officials stated that in June 2014, NIH’s Administrative and Program Resources Office finalized the standard operating procedures for developing formalized internal guidance, and in October 2015, NIH-DFAS hired an outside contractor to formalize NIH-DFAS’s internal guidance, which the officials anticipate will be completed by December 2016. Standards for Internal Control in the Federal Government requires management to ensure that internal guidance and practices are integral to their operations. We found that these deficiencies occurred because the cognizant agencies had not designed monitoring controls to ensure that internal guidance had been reviewed and updated to reflect changes in federal regulations and internal procedures. According to CAS officials, they have not designed monitoring controls to establish time frames for periodic review and assign related roles and responsibilities because (1) they update internal guidance as needed and (2) staff already know their roles and responsibilities. According to ONR officials, in May 2014, ONR issued a policy that requires staff to review and update their internal guidance annually. However, ONR’s internal guidance was last reviewed and updated in January 2011, so the Uniform Guidance and its applicable regulatory requirements have not been incorporated. Without established controls for reviewing and updating internal guidance, including setting time frames and assigning roles and responsibilities, there is an increased risk that management will not hold staff accountable. The officials acknowledged that their current internal guidance needed to be updated to reflect changes to applicable federal guidance, regulations, or agency procedural changes, such as formalizing internal guidance. By not ensuring that their internal guidance reflects current regulations and procedures, the agencies increase their risk that their indirect cost rate-setting processes may not be properly and consistently carried out by the negotiators, which in turn increases the risk that rates negotiated will not be in accordance with federal guidance and regulations, thus increasing the potential for fraud, waste, and abuse of federal resources. 
Unless other arrangements have been made, ONR negotiators request a DCAA advisory audit on all indirect cost rate proposals and rely on DCAA to issue an opinion on the adequacy and compliance of a research organization’s indirect cost proposal prior to negotiating the indirect cost rate. Standards for Internal Control in the Federal Government states that controls should be designed to help ensure that management’s directives are carried out in order to accomplish the agency’s control objectives. While ONR’s internal guidance requires taking into account DCAA results and comments, it does not specify review procedures to take when DCAA (1) is unable to complete its advisory audit within a reasonable time frame, (2) issues a qualified opinion, or (3) rescinds one of its previously issued audit opinions. To negotiate indirect cost rates timely, ONR’s internal guidance requires its negotiators to provide DCAA with 45 days to complete its advisory audit report on behalf of ONR, unless arrangements have been made for a due date beyond 45 days (e.g., DCAA requests an extension). In a March 2015 report to Congress, DCAA officials stated that depending on the type of audit, the average elapsed time to complete an audit ranges from 95 to 1,006 days. These long elapsed times are the result of a growing backlog of DCAA audits. In light of this backlog, ONR officials told us that their current procedure is to routinely give DCAA additional time to complete its initial audit report of a research organization’s cost proposals. For example, in our walk-through of ONR case files, we found that ONR initially granted as many as 363 days—about a year—for DCAA to complete its audit, well beyond the 45-day period in ONR’s internal guidance. In many cases, DCAA requested and received an extension to complete its audit of a research organization’s cost proposal. Our walk-throughs identified instances in which ONR granted up to 139 days on top of what was initially given to DCAA to complete its audit. ONR’s internal guidance does not reflect this change in procedure of routinely granting DCAA more than 45 days to prepare an advisory audit report, and it does not include parameters defining a reasonable period for the initial extensions beyond the 45 days. In addition, ONR’s internal guidance does not provide procedures for negotiators to perform the necessary audit steps if DCAA does not complete its audit within the required time frames or a reasonable and accepted extended period. By not including in its internal guidance reasonable and acceptable audit completion time frames and procedures for ONR negotiators to perform audit steps when DCAA does not complete an audit, management cannot reasonably assure that indirect cost rate negotiations will be accurate and completed in a timely manner. Furthermore, we found that although ONR’s internal guidance requires negotiators to review DCAA’s audit results, it does not contain procedures for the negotiators to perform supplemental review steps when DCAA’s audit results contain qualified opinions or when DCAA rescinds one of its previously issued audit opinions. DCAA issues a qualified opinion when it is unable to perform detailed transaction testing on the incurred costs that are often used as the research organization’s basis of estimate or when it does not have enough time to complete an audit. 
Our walk-throughs identified several instances of DCAA audit reports with qualified opinions (in which DCAA was not able to perform detailed transaction testing on incurred costs used as a basis of estimate) and one instance in which DCAA rescinded an audit opinion (when DCAA’s Integrity and Quality Assurance Directorate review found that DCAA’s audit was not performed in accordance with generally accepted government auditing standards). In those cases, ONR’s guidance does not require the negotiator to perform any supplemental procedures to validate costs. Without internal guidance regarding supplemental procedures to be taken when DCAA issues a qualified opinion or rescinds one of its previously issued audit opinions, ONR cannot reasonably assure that its negotiators take consistent and appropriate steps, potentially increasing the risk that ONR’s negotiated indirect cost rates could include reimbursement for unallowable, unallocable, and unreasonable costs. We found that these deficiencies occurred because ONR had not designed sufficient review procedures for those situations when DCAA (1) is unable to complete its advisory audit within a reasonable time frame, (2) issues a qualified opinion, or (3) rescinds one of its previously issued audit opinions. According to ONR officials, they believe that the procedures listed in their internal guidance will allow negotiators to validate costs prior to negotiating an indirect cost rate. However, by not including in its internal guidance reasonable and acceptable audit completion time frames and supplemental procedures to be taken to review cost proposals when DCAA cannot perform its audits in a timely manner, issues a qualified opinion, or rescinds one of its previously issued audit opinions, there is an increased risk that the negotiated indirect cost rate may not comply with applicable regulations, thus increasing the risk of fraud, waste, and abuse of federal resources. Although the cognizant agencies’ internal guidance describes broad procedures for supervisory review, such as requiring that the supervisors review and approve workpapers and applicable rates, these documents do not include detailed procedures for supervisors to monitor the negotiators’ processes. Supervisory review is a type of internal control that provides reasonable assurance that the negotiator followed agency procedures for negotiating indirect cost rates and reduces the risk of inaccuracy and potential for waste of government resources. Standards for Internal Control in the Federal Government states that management should design internal controls to reasonably assure that ongoing monitoring occurs in the course of normal operations, which includes supervisory activities. We found that none of the three cognizant agencies’ internal guidance include adequate detailed procedures that would allow supervisors to confirm that the negotiators have adequately performed and documented key controls identified in their internal guidance, such as performing cost trend analyses, executing reconciliations, analyzing adjustments for unallowable costs, verifying the accuracy of the distribution base, evaluating any future significant changes, and comparing the proposal to the disclosed cost accounting practices in the approved Disclosure Statement. 
These controls are meant to provide reasonable assurance that only allowable, allocable, and reasonable indirect costs have been proposed and that appropriate distribution bases were selected for allocating such costs to federally funded grants. However, the cognizant agencies’ internal guidance did not specifically require supervisors to reasonably assure that negotiators perform those steps. In fact, as illustrated by the examples below, our walk-throughs of case files for the three cognizant agencies found a few instances in which supervisors approved rates that were set by negotiators who did not perform one or more of the controls required by the cognizant agencies’ internal guidance. As a tool used to assess risk, CAS’s internal guidance requires negotiators to complete a detailed trend analysis of nonprofit and long form research organizations’ indirect cost rates and distribution bases for the last 3 years, including the proposal year. According to CAS’s internal guidance, a trend analysis provides the negotiator with an insight into the areas of the proposal needing a more detailed review. During our walk-through of CAS case files, we found that the negotiators did not always review the indirect cost rates and the distribution bases for the last 3 years, including the proposal year, when preparing a trend analysis. Additionally, CAS’s internal guidance requires the negotiator to compare the accounting policies delineated in the research organization’s Disclosure Statement to the indirect cost proposal to ensure consistency in the proposal’s preparation. However, during our walk-through of CAS case files, we found that when the supervisors reviewed supporting documentation, they did not confirm that the negotiators had performed this comparison. OMB guidance states that each institution must describe the process it uses to ensure that federal funds are not used to subsidize industry and foreign government programs. To meet this requirement, ONR’s submission requirements checklist states that institutions must complete and submit several certifications and assurances, including a Statement of Assurance attesting that federal funds are not used to subsidize industry and foreign government programs, which cites the OMB guidance. During our walk-through of ONR’s case files, we found one instance in which a university did not submit the required Statement of Assurance. The ONR supervisor did not confirm that all required certifications and assurances were included in the grantee’s proposal during his review of the supporting documentation. At NIH-DFAS, we found one instance in which the negotiator had not reconciled the indirect cost information contained in the cost proposal to the audited financial statements, which is a control that helps verify that actual costs incurred were used as a basis in the proposal. The NIH-DFAS supervisor did not verify that the negotiator had reconciled the cost information. Agency officials at the three cognizant agencies stated that they believe the procedures listed in their internal guidance are sufficient for supervisors to perform detailed review of the negotiators’ work. However, without adequate documented supervisory review procedures, there is an increased risk that supervisors could approve rates for which the negotiators did not properly execute required controls, as demonstrated in the examples above. 
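As an illustration of the kind of trend analysis control described above, the brief sketch below flags a year-over-year swing in hypothetical rates for closer review. The 3-percentage-point threshold and all figures are invented for this sketch and are not CAS's actual criteria.

```python
# Hypothetical 3-year trend analysis of indirect cost rates, including the
# proposal year; a large year-over-year swing flags an area of the proposal
# for more detailed review.
rates = {2013: 0.29, 2014: 0.30, 2015: 0.36}  # proposal year is the last entry
FLAG_THRESHOLD = 0.03  # invented review threshold (3 percentage points)

years = sorted(rates)
for prev, curr in zip(years, years[1:]):
    change = rates[curr] - rates[prev]
    if abs(change) > FLAG_THRESHOLD:
        print(f"{prev}->{curr}: rate changed {change:+.1%}; review this area in detail")
```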
Internal guidance links an agency’s objectives to its day-to-day operations, providing a framework for employees to clearly understand their roles and responsibilities when carrying out agency objectives. Standards for Internal Control in the Federal Government states that management should develop internal guidance to help ensure that management directives are carried out, and that effective and efficient control activities are implemented to accomplish the agency’s objectives. The standards also state that information should be recorded and communicated to staff within a time frame that enables the staff to carry out their responsibilities. However, we found that CAS and ONR lack internal guidance for negotiating indirect cost rates with certain types of research organizations. CAS has not developed internal guidance that describes procedures required for negotiating indirect cost rates for hospitals, which represented $511 million in indirect cost reimbursements in fiscal year 2015. Currently, CAS uses a policy that the Department of Health, Education, and Welfare (now HHS) published in 1974 (policy OASC-3) for establishing indirect cost and patient care rates with hospitals for grants and contracts. An HHS Office of Inspector General memorandum from 1993 recommended that the hospital cost principles be modernized and strengthened, as OASC-3 does not always provide clear guidance for determining what types of costs should be allowed and how costs should be allocated. However, HHS has not updated OASC-3, and CAS continues to rely on this policy for negotiating indirect cost rates with hospitals. According to CAS officials, they have not created internal guidance for hospitals because CAS’s national specialist—who was responsible for creating and updating internal guidance—resigned and they have not filled the vacancy. Without up-to-date and clear guidance for negotiating indirect cost rates for hospitals, there is an increased risk that negotiators will allow unallowable costs claimed by hospitals. Additionally, CAS does not have internal guidance related to the types of distribution bases allowed for small universities using the simplified method (also known as the short form) for preparing facilities and administrative cost rate proposals. According to CAS officials, they consider the internal guidance for the long form to be sufficient and have instructed negotiators to use the long form best practice manual for all university proposals. However, under the simplified method, universities are given greater flexibility in choosing between two different types of distribution bases, whereas the long form proposal requires universities to submit proposals using only a modified total direct cost base. Further, OMB guidance restricts the use of the simplified method when it produces results that appear inequitable to the federal government. Therefore, without internal guidance for proposals using the simplified method, there is an increased risk that indirect costs will not be fairly distributed to grants. For example, a distribution base that is understated will result in an inflated indirect cost rate, as the brief sketch below illustrates. 
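As a brief numeric aside on the simplified-method risk just noted, the sketch below shows how the same hypothetical indirect cost pool over an understated distribution base produces an inflated rate, which is then applied to every grant's direct costs. All amounts are invented for illustration.

```python
# Same hypothetical indirect cost pool divided by two distribution bases:
# one complete and one understated (e.g., with some direct costs omitted).
indirect_pool = 4_500_000
bases = {"complete base": 15_000_000, "understated base": 12_000_000}

for label, base in bases.items():
    print(f"{label}: rate = {indirect_pool / base:.1%}")
# complete base: rate = 30.0%
# understated base: rate = 37.5%
```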
Although OMB has issued separate guidance specific to universities and nonprofits, ONR’s only internal guidance on negotiating indirect cost rates applies to both types of research organizations and does not address certain key differences in OMB guidance applicable to each type of research organization. Specifically, the distribution bases allowed for calculating indirect cost rates differ for universities (short and long form) and for nonprofits. For example, under the OMB guidance, universities may use only two different types of distribution bases, whereas nonprofit organizations are given greater flexibility as they are instructed to select a distribution base best suited for assigning the indirect costs to the project. However, ONR’s internal guidance applicable to both types of research organizations does not distinguish between the different provisions of OMB guidance. After we identified this issue, ONR officials acknowledged that the internal guidance needs to be updated to distinguish procedures between the different provisions of OMB guidance and reported that they are in the process of making the necessary revisions. ONR officials stated that they anticipate that the draft internal guidance will be updated and ready for final management review in June 2016. To reduce the risk that the negotiators could use an inappropriate distribution base, resulting in a rate calculation that is not in compliance with federal guidance, it is important that ONR implement and issue the internal guidance timely. However, ONR has not established a time frame for issuance of the final internal guidance. Without up-to-date internal guidance for all types of research organizations, the cognizant agencies cannot clearly delineate negotiators’ roles and responsibilities during the indirect cost rate negotiation process. Further, by not developing adequate internal guidance for all types of research organizations, cognizant agencies are at an increased risk that the negotiated indirect cost rates could include reimbursement for unallowable, unallocable, and unreasonable costs, potentially resulting in wasted federal resources. In order for each cognizant agency to achieve its objectives and comply with federal guidance and regulations, it is essential for management to have the capability to generate and review reports on indirect cost rate data. Standards for Internal Control in the Federal Government states that relevant information should be recorded and communicated to management and others within the entity who need it. This information should be in an understandable format, and provided within a time frame that enables employees to carry out their internal control responsibilities. We found that only one of the three cognizant agencies generates reports for management to view data associated with all phases of the indirect cost rate-setting process. Specifically, CAS’s system generates reports that management and the negotiators can use to determine when proposals are due; when signed rate agreements, proposals, or extensions to proposals are overdue; and other rate agreement information. In contrast, NIH-DFAS and ONR officials reported that they do not have systems capable of generating these types of reports. ONR officials stated that they do maintain a spreadsheet in Microsoft Excel for universities and nonprofit organizations, which tracks information such as whether rate agreements have been signed and whether a proposal is awaiting an audit or management review. However, the Excel spreadsheet does not identify when rate proposals are past due, when the DCAA advisory audit reports are due, or the dates when the negotiators followed up for past due proposals or DCAA advisory audit reports. 
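As a rough illustration of the kind of milestone tracking at issue, the sketch below flags rate proposals past the submission deadline (as discussed in the next paragraph, proposals are generally due within 6 months after the close of the fiscal year). The organizations, dates, and data layout are hypothetical and are not drawn from NIH-DFAS's or ONR's systems.

```python
from datetime import date, timedelta

SUBMISSION_WINDOW = timedelta(days=182)  # roughly 6 months after fiscal year close

# Hypothetical tracker rows: (organization, fiscal year end, date proposal received or None).
proposals = [
    ("Organization A", date(2014, 6, 30), date(2014, 11, 15)),
    ("Organization B", date(2014, 6, 30), None),  # never submitted
]

today = date(2016, 1, 31)
for org, fy_end, received in proposals:
    due = fy_end + SUBMISSION_WINDOW
    if received is None and today > due:
        print(f"{org}: proposal overdue since {due} ({(today - due).days} days past due)")
```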
Without the ability to produce reports enabling the cognizant agencies to track all phases of the indirect cost rate-setting process, cognizant agencies cannot reasonably assure compliance with federal guidance and regulations and accomplishment of agency objectives. For example, in accordance with federal guidance and regulation, a research organization must submit its indirect cost rate proposal to the cognizant agency within 6 months after the close of the fiscal year. However, we found one instance in which an ONR research organization had not submitted its final indirect cost rate proposal, which was due in December 2014. As of January 2016 (13 months overdue), the organization had still not provided the final indirect cost rate proposal to ONR. In June 2016, upon further inquiry by us, ONR officials stated that the research organization did not submit a rate proposal because it had gone out of business. When indirect cost rate proposals are not submitted in a timely manner, the cognizant agencies are also unable to close out grants, and the government can incur additional costs, leading to potential waste. Effective management reporting tools are therefore critical for enabling management to make better decisions and meet agency objectives and goals for effective and efficient use of resources. Grants are an important form of federal financial assistance that NIH uses to carry out its mission as the nation’s leading sponsor of biomedical research. NIH reimbursed $6.3 billion in indirect costs in fiscal year 2015, representing over a quarter of the total amount that NIH awarded through grants and cooperative agreements. Because the designated cognizant agencies negotiate the indirect cost rates on behalf of all federal agencies, including NIH, it is critical that the cognizant agencies adequately design internal controls to reasonably assure that their indirect cost rate negotiation processes run efficiently and effectively and reduce the risk that federal resources may be subject to fraud, waste, and abuse. Although the three cognizant agencies had established controls for the indirect cost rate-setting process, we identified deficiencies in the design of some of these controls. Until the three cognizant agencies update their internal guidance to reflect current regulations and agency procedures and include instructions to supervisors about their review responsibilities, there is an increased risk that taxpayer funds will not be adequately protected. In addition, ONR lacks needed guidance to address delayed audits, audits that have qualified opinions, and audits where previously issued opinions are rescinded. Also, both CAS and ONR lack internal guidance to address differences in negotiating for all research organization types. Finally, ONR and NIH-DFAS are missing mechanisms to track key milestones for indirect cost rate-setting information. If these deficiencies are not addressed, there is an increased risk that the cognizant agencies may not properly and consistently negotiate indirect cost rates and that the rates negotiated may not comply with applicable federal regulations. Thus, there is an increased risk that the indirect cost rates used to reimburse NIH research organizations will include costs that are not allowable, allocable, and reasonable and may result in wasted federal resources. 
To improve the design of internal controls over the indirect cost rate-setting process, we recommend that the Director of CAS take the following four actions: establish internal controls to periodically review and update internal guidance when changes are made to applicable regulations to reasonably assure the guidance reflects current requirements; develop a standardized checklist and document procedures in its internal guidance instructing negotiators to use the checklist during negotiation; develop detailed internal guidance for the completion and documentation of supervisory review of the indirect cost rate negotiation process to provide reasonable assurance that key control activities have been performed by the negotiators; and develop internal guidance for negotiating indirect cost rates with all types of research organizations, including hospitals, as well as universities using the simplified method. As NIH-DFAS begins formalizing its internal guidance, we recommend that the Director of NIH-DFAS take the following three actions: update internal guidance to include key characteristics, such as policy number, purpose of the policy, effective date, and approving official, that are normally included in formal policy and procedures; develop detailed procedures for the completion and documentation of supervisory review of the indirect cost rate negotiation process to provide reasonable assurance that key control activities have been performed by the negotiator; and establish a mechanism for tracking key milestones in the indirect cost rate-setting process, such as when indirect cost rate proposals are due. To improve the design of internal controls over the indirect cost rate-setting process, we recommend that the Director of ONR take the following five actions: implement the May 2014 policy requiring an annual review of guidance so that internal guidance is updated when changes are made to applicable regulations and procedures to reasonably assure that the guidance reflects current requirements; include in its internal guidance acceptable DCAA audit completion time frames and identify supplemental procedures to be performed by negotiators if DCAA cannot perform its audits timely or if DCAA issues a qualified opinion or rescinds one of its previously issued audit opinions, to reasonably assure that the indirect cost rate proposal has been adequately reviewed and the negotiated rate complies with applicable regulations; develop detailed procedures for the completion and documentation of supervisory review of the indirect cost rate negotiation process to provide reasonable assurance that required certifications and assurances are obtained and follow-up with the research organization is documented; finalize and issue internal guidance for negotiating indirect cost rates with universities and nonprofit organizations, including establishing a time frame for issuance of the internal guidance, to help ensure that the procedures are implemented in a timely manner; and update ONR’s existing process for tracking key milestones in the indirect cost rate-setting process to include information such as when indirect cost rate proposals are overdue and when DCAA’s audit reports are due. We provided a draft of this report to HHS, DOD, and OMB for their review and comment. 
In written comments, reprinted in appendixes II and III, HHS concurred with our recommendations and provided information on actions planned or under way to address them, and DOD concurred with four of our five recommendations and partially concurred with one. Our report did not include recommendations to OMB, and the OMB Liaison to GAO responded in an e-mail that OMB did not have any formal comments on the report. HHS, DOD, and OMB each provided technical comments, which we incorporated as appropriate. In response to the recommendations directed toward CAS, HHS said that by December 31, 2016, CAS will establish a written procedure requiring periodic reviews and updates of internal guidance whenever changes are made to applicable regulations, update and complete standardized checklists for each type of indirect cost review and instruct the staff to use the checklists, establish and implement standardized procedures for supervisory review of workpapers and rate agreements, and update internal guidance for negotiating indirect cost rates with universities using the simplified method. HHS also stated that CAS would develop internal guidance for negotiating with hospitals as soon as possible. In response to the recommendations directed toward NIH-DFAS, HHS said that by December 31, 2016, NIH-DFAS will update internal guidance to include key characteristics that are normally included in formal policies and procedures and develop detailed procedures for completing and documenting supervisory review of indirect cost rate negotiations. HHS also stated that NIH-DFAS will establish a mechanism for tracking key milestones in the indirect cost rate-setting process. HHS said that NIH-DFAS is currently looking into the feasibility of incorporating key milestones into two major initiatives, and if it is unable to do so, NIH-DFAS will develop an alternative tracking system by March 31, 2017. HHS’s CAS and NIH-DFAS actions, if implemented effectively, would address our recommendations. In response to the recommendations directed toward ONR, DOD concurred with four of our five recommendations. For the four recommendations that it concurred with, DOD stated that ONR will comply with its requirement for an annual review of its internal guidance and update its internal guidance to provide more realistic DCAA audit report dates, including general procedures for negotiators to perform in the case of untimely audits and qualified or rescinded opinions. Additionally, DOD said that ONR will update its internal guidance for negotiating indirect cost rates with universities and nonprofit organizations by December 31, 2016, and will update its existing processes for tracking key milestones to include information such as due dates for rate proposals and DCAA audit reports. If implemented effectively, these actions would address the four recommendations. In response to the recommendation to develop supervisory review procedures, DOD partially concurred. First, DOD disagreed that a certification declaring that federal funds were not used to subsidize industry and foreign government programs is required. DOD cited 2 C.F.R. pt. 200, appendix III (formerly located in OMB Circular A-21, 2 C.F.R. pt. 220, app. A prior to December 2014), which states that each institution must describe the process it uses to ensure that federal funds are not used to subsidize industry and foreign government programs, and stated that a certification is only one way to accomplish this. 
While we agree there may be different ways to meet this requirement, ONR’s submission requirements checklist states that institutions must complete and submit several certifications and assurances, including a Statement of Assurance attesting that federal funds are not used to subsidize industry and foreign government programs. The Statement of Assurance specifically cites the C.F.R. provision. Consequently, ONR has chosen to use the Statement of Assurance to meet the C.F.R. provision, and we found no other documentation that would otherwise meet the requirement. We revised the body of the report to avoid suggesting that OMB requires a certification and to specifically cite the Statement of Assurance, and similarly revised the recommendation for clarification, but our overall finding stands. Therefore, we continue to believe that ONR needs to develop supervisory review procedures to reasonably assure that these Statements of Assurance are obtained. Further, DOD did not agree that ONR lacks procedures to ensure supervisors confirm that negotiators adequately performed and documented key controls. DOD noted that both the primary and secondary supervisors are required to review and approve the Business Clearance Memorandum, which records steps performed by the negotiator. While we agree that the Business Clearance Memorandum documents steps performed by the negotiator, these steps are documented at a high level and do not include detailed procedures for supervisors to follow to reasonably assure that the negotiator has performed and documented all key control activities, such as obtaining all required certifications and assurances. DOD agreed in its response that ONR’s Business Clearance Memorandum can be improved and stated that ONR will update it to require the negotiator to cross-reference the review steps to the proposal to facilitate the supervisor’s review process. However, it is not clear whether the planned Business Clearance Memorandum revisions will include providing detailed procedures for supervisory review as we recommended. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 21 days from the report date. At that time, we will send copies to the Secretaries of Health and Human Services and Defense and the Director of the Office of Management and Budget. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The objective of this review was to determine the extent to which the cognizant agencies have designed internal controls to mitigate fraud, waste, and abuse in the indirect cost rate-setting process for National Institutes of Health (NIH) grants. Generally, the agency that provides the most funding to a particular research organization has rate-setting cognizance for that organization. 
However, the Office of Management and Budget (OMB) and the Department of Health and Human Services (HHS) have designated three primary cognizant agencies to negotiate indirect cost rates for grants funded by NIH: (1) HHS's Cost Allocation Services (CAS), (2) NIH's Division of Financial Advisory Services (NIH-DFAS), and (3) the Department of Defense's (DOD) Office of Naval Research (ONR). Specifically, OMB guidance (OMB Circular No. A-21, Cost Principles for Educational Institutions, whose provisions have been incorporated in OMB's Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (Uniform Guidance)) assigns responsibility for negotiating rates for universities to either HHS's CAS or DOD's ONR. OMB guidance (OMB Circular No. A-122, Cost Principles for Non-Profit Organizations, whose provisions have been incorporated in the Uniform Guidance) assigns responsibility for negotiating rates for research institutes to the cognizant agency with the largest dollar value of grants with an organization, unless different arrangements are agreed to by the agencies concerned. HHS Acquisition Regulations Part 342, Contract Administration, assigns the responsibility for negotiating indirect cost rates for hospitals to HHS's CAS. Furthermore, the Secretary of HHS authorized NIH-DFAS to have HHS-wide cognizance of the indirect cost rate negotiation function with for-profit grantees, which are not covered by the OMB circulars on grant cost principles. Therefore, we focused our review on the three primary cognizant agencies: (1) CAS, (2) NIH-DFAS, and (3) ONR.

To address our objective, we reviewed each cognizant agency's internal guidance for negotiating indirect cost rates for the four types of research organizations reviewed, as well as the corresponding federal guidance and regulations. We also identified key characteristics that are normally included in formal policies and procedures, assessed each agency's internal guidance for these key characteristics, and determined, through the inclusion of the characteristics, whether the policies would be considered formal.

We obtained NIH grant funding data for extramural research for fiscal year 2014 and performed procedures to determine whether the data were reliable enough for our purposes. Specifically, we interviewed knowledgeable agency officials about the quality control procedures the agency had in place when collecting and creating the data and tested the data for unusual items. Based on the results of these procedures, we determined that the data were reliable enough for our purposes.

To further our understanding of the design of internal controls at the three cognizant agencies, we selected a nongeneralizable sample of negotiation case files and performed walk-throughs for each type of research organization selected for our review. To select the nongeneralizable sample, we took the total population of grants funded for fiscal year 2014 and stratified the population by funding dollars (i.e., high, medium, and low) for each type of research organization and each cognizant agency. Finally, we selected six case files from each of the populations (see fig. 6).

We conducted this performance audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective.

In addition to the contact named above, Kim McGatlin (Assistant Director), Rathi Bose, Francine DelVecchio, Wilfred Holloway, Alec Hulbert, Diana Lee, Sophie Geyer, Jason Kelly, Kevin McAloon, and Kailey Seibert made key contributions to this report.
NIH spent over $23 billion to support extramural research in 2015, including $6.3 billion for indirect costs, which are costs not directly attributable to a specific research project or function, such as utilities expenses, for grants and cooperative agreements. NIH relies on designated cognizant agencies to design adequate internal controls over the processes for negotiating indirect cost rates with research organizations. Once set, these rates must be accepted by NIH and all federal awarding agencies.

GAO was asked to review the internal controls for overseeing the validity of indirect cost rates for NIH's research organizations. This report examines the extent to which the three primary cognizant agencies (CAS, NIH-DFAS, and ONR) that set indirect cost rates on NIH's behalf have designed internal controls to mitigate the potential for fraud, waste, and abuse in the indirect cost rate-setting process. GAO reviewed OMB guidance, the FAR, and the cognizant agencies' internal guidance on indirect cost rate negotiation; interviewed staff at the cognizant agencies; and reviewed a nongeneralizable sample of negotiation case files to obtain an understanding of the design of controls.

Research organizations that apply for National Institutes of Health (NIH) funding participate in an indirect cost rate-setting process, which involves submitting a rate proposal; reviewing the proposal and having it audited by a third-party agency at the cognizant agency's request; and finalizing (negotiating) the rate for the organization. The Office of Management and Budget (OMB) and the Department of Health and Human Services (HHS) have designated three primary cognizant agencies to set indirect cost rates for federal financial assistance funded by NIH: HHS's Cost Allocation Services (CAS), NIH's Division of Financial Advisory Services (NIH-DFAS), and the Department of Defense's (DOD) Office of Naval Research (ONR). These cognizant agencies are responsible for ensuring that negotiated indirect cost rates comply with OMB guidance and the Federal Acquisition Regulation (FAR), as applicable.

GAO found that while the three agencies had designed controls for setting indirect cost rates, deficiencies in the design of some of these controls could result in the waste of federal resources. The deficiencies GAO identified are as follows:

None of the three agencies has updated its internal guidance to reflect current OMB guidance or changes in agency requirements, such as documentation requirements.

ONR relies on audits by the Defense Contract Audit Agency to ensure the adequacy and compliance of indirect cost proposals that it processes, but ONR has not included acceptable time frames in its internal guidance for when these audits are to be completed or what steps are to be taken when audits result in qualified opinions or when a prior audit opinion is rescinded.

All three cognizant agencies' internal guidance lacks detailed instructions to supervisors on their review responsibilities over the indirect cost rate process.

CAS and ONR have not developed internal guidance addressing differences in negotiating indirect cost rates with certain types of research organizations, such as universities and hospitals.

ONR and NIH-DFAS have not developed mechanisms to track key milestones for the indirect cost rate-setting process.
If these deficiencies are not addressed, there is an increased risk that these cognizant agencies may not properly and consistently negotiate indirect cost rates and that the rates negotiated may not comply with applicable federal regulations. Thus, there is an increased risk that the indirect cost rates used to reimburse NIH research organizations will include costs that are not allowable, allocable, and reasonable, and may result in wasted federal resources.

GAO is making 12 recommendations to the three cognizant agencies to improve controls over their indirect cost rate-setting processes. HHS concurred with GAO's 7 recommendations to CAS and NIH-DFAS and described ongoing and planned actions to address them. DOD concurred with 4 recommendations to ONR and partially concurred with 1. GAO continues to believe that action is needed, as discussed in the report.
The Fair Housing Act is the most comprehensive of the federal statutes that prohibit discrimination in the rental and sale of housing. Passed in 1968 and amended in 1988, the Act prohibits discrimination on the basis of color, familial status, handicap, national origin, race, religion, and sex. It applies to a number of what are termed "issues," including discrimination in the sale, rental, advertising, and financing of housing; in the provision of brokerage services; and in other activities related to residential real estate transactions. Generally, the Act covers all dwellings—that is, buildings designed to be used wholly or in part as residences and land where a dwelling will be located.

When first enacted in 1968, the Fair Housing Act's administrative enforcement process was limited principally to conciliation. In 1988, Congress strengthened HUD's authority and established a comprehensive administrative process to enforce the law, but conciliation remained a primary feature. The Act gives HUD, private persons, and the U.S. Attorney General tools and remedies to enforce the antidiscrimination provisions. Using HUD's administrative process, individuals who believe they have experienced discrimination in a housing-related situation can file a complaint that HUD may then investigate and resolve. Individuals may also elect to file suit in civil court rather than using the administrative procedure set out in the Act. The Attorney General can bring a civil action in cases that show a pattern of discriminatory practices. HUD's Office of Fair Housing and Equal Opportunity (FHEO) has staff in each of HUD's 10 regional offices, or hubs, who respond to complaints (see fig. 1).

Agencies certified to participate in HUD's Fair Housing Assistance Program (FHAP) and receive funding from HUD for handling fair housing complaints must comply with FHEO's reporting and record maintenance requirements, agree to on-site technical assistance provided by HUD, and implement certain policies and procedures. FHAP agencies must be in states or localities whose laws provide rights and remedies that are substantially similar to those in the Act—for example, local laws must provide for the same 100-day benchmark for investigations that is stipulated in the Act. FHEO offices refer complaints alleging violations of state and local fair housing laws to FHAP agencies—for example, a certified state office of civil rights. Currently, there are 100 of these agencies around the country. FHEO staff has responsibility for the intake, investigation, and resolution of some of these complaints. Aggrieved persons may also go directly to FHAP agencies, which then perform the intake process. If an aggrieved party contacts a FHEO office regarding discrimination that allegedly occurred in a state or locality that has a FHEO-certified "substantially equivalent" state or local agency (that is, a FHAP agency), FHEO will complete the intake process and refer the complaint to that agency for enforcement. In 2004, FHEO and the FHAP agencies received approximately 9,500 filed complaints (see fig. 2). For this same period, only 5 percent of the closed case files resulted in reasonable cause outcomes.

HUD reimburses FHAP agencies for carrying out investigations once FHEO has reviewed the completed cases. Along with reviewing cases to determine whether HUD should pay for services rendered, FHEO monitors FHAP agencies and provides technical assistance. FHEO monitors fair housing enforcement efforts through the Title Eight Automated Paperless Office Tracking System (TEAPOTS).
Our last report identified a number of human capital challenges facing FHEO, including the number and skill level of FHEO staff, the quality and effectiveness of training, and other issues. An FHEO official noted that the staff shortage affected not only enforcement of the Act but also FHEO's other responsibilities, forcing managers to assume heavier caseloads and professional staff to perform administrative duties rather than concentrating on the complaint process. The total number of full-time equivalents (FTE) in FHEO has fluctuated over the last 10 years, falling from a high of 750 in fiscal year 1994 to a low of 579 in fiscal year 2000. In fiscal year 2004, FHEO had 650 FTEs.

The intake stage, which begins the complaint process, represents the initial contact a complainant has with an agency responsible for enforcing the Act or equivalent state law. Figure 3 describes the complaint process for HUD-investigated complaints; FHAP agencies would follow a similar process. In the intake stage, FHEO hubs and FHAP agencies receive inquiries by telephone, fax, mail, in person, or over the Internet. Intake staff record inquiries in TEAPOTS, interview complainants, and may do other research—for example, searches of public records—to see if enough information exists to support filing a formal complaint. This process is known as "perfecting" a complaint. In order to be perfected, a complaint must contain the four required elements of a Title VIII complaint—(1) the name and address of the person alleging the discriminatory practice, (2) the name and address of the respondent, (3) a description and the address of the dwelling involved, and (4) a statement of the facts leading to the allegation—and must satisfy the Act's jurisdictional requirements: that the complainant has standing to file the complaint; that the respondent, dwelling, subject matter of discrimination (e.g., refusal to rent or sell), and basis (e.g., race, color, or familial status) for the alleged discrimination are covered by the Act; and that the complaint has been filed within a year of the last occurrence of the alleged discriminatory practice (the sketch at the end of this overview illustrates these checks). Hub directors decide which complaints meet these criteria and become perfected complaints. Complaints that do not meet the criteria are dismissed. Intake staff record information about perfected complaints in TEAPOTS, have complainants sign the complaints, send letters notifying complainants and respondents about the complaint and the process that will be used to address it, and send the complaint file to an investigator. FHEO's Title VIII Intake, Investigation, and Conciliation Handbook (Handbook) sets a 20-day benchmark for completing the intake stage for these cases but a 5-day benchmark for cases that FHEO first takes in and then refers to FHAP agencies.

Complaints that are perfected proceed to an investigation. During this stage, FHEO and FHAP agencies gather evidence to determine whether a violation of the Act or a state or local housing law has occurred or is about to occur. The Handbook provides guidance for investigators but notes that investigations may vary. Agency guidance directs that directors of FHEO's hub offices review the results of completed investigations to determine whether reasonable cause exists to believe that a discriminatory housing practice has taken place or could take place. With the concurrence of the relevant HUD regional counsel, the hub director issues a determination and directs the regional counsel to issue a charge, a short written statement of the facts that led to the decision.
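Because perfecting turns on a fixed set of statutory elements, the screen that intake staff apply can be pictured as a simple checklist. The following Python sketch is a minimal illustration only, not HUD's actual system or terminology: the field names are hypothetical, and the one-year filing window is simplified to 365 days.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record of an intake inquiry; field names are illustrative only.
@dataclass
class Inquiry:
    complainant_name: str
    complainant_address: str
    respondent_name: str
    respondent_address: str
    dwelling_description: str
    dwelling_address: str
    statement_of_facts: str
    basis: str                # e.g., "race", "familial status"
    last_occurrence: date     # date of the last alleged discriminatory act

# Bases covered by the Act, as listed in the background discussion above.
COVERED_BASES = {"race", "color", "national origin", "religion",
                 "sex", "familial status", "handicap"}

def missing_elements(inq):
    """Return any of the four required Title VIII complaint elements not yet supplied."""
    checks = {
        "complainant name and address": inq.complainant_name and inq.complainant_address,
        "respondent name and address": inq.respondent_name and inq.respondent_address,
        "description and address of dwelling": inq.dwelling_description and inq.dwelling_address,
        "statement of facts": inq.statement_of_facts,
    }
    return [element for element, present in checks.items() if not present]

def within_jurisdiction(inq, today):
    """Simplified jurisdictional screen: covered basis and filing within one year.
    (Standing and respondent/dwelling coverage require additional facts.)"""
    timely = (today - inq.last_occurrence) <= timedelta(days=365)
    return inq.basis in COVERED_BASES and timely
```

In practice, of course, the standing and coverage determinations are judgment calls made by hub directors, not mechanical checks; the sketch captures only the parts of the screen that are mechanical.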
In a March 6, 2003, memorandum, HUD's Office of General Counsel (OGC) in headquarters requested that regional counsels send OGC's Office of Fair Housing the final draft of any charge that they propose to file and that they not file charges until they have received a response from OGC's Office of Fair Housing. Figure 3 provides an overview of HUD's basic fair housing complaint process, including timeliness benchmarks established by the Act or agency guidance.

An investigation can be closed at any point for administrative reasons or through conciliation. Cases are closed administratively for several reasons—for instance, when a complainant withdraws from the case or cannot be located. The Act requires HUD to make conciliation efforts throughout the complaint process, beginning when the complaint is filed and continuing until the charge is filed or the case is dismissed. The Handbook and federal regulations allow investigators to make conciliation efforts, but the regulations also state that generally officers, employees, and agents not associated with the case will attempt conciliation. Conciliation agreements are intended to protect the public interest through provisions such as requiring respondents to file periodic reports with HUD. When a conciliation agreement is reached, the Act authorizes the Department of Justice to enforce the agreement in the event of a breach. In 2004, FHEO hubs and FHAP agencies closed about one-third of their cases via conciliation.

The Act establishes a deadline of 100 days from the date the complaint is filed for completing an investigation or conciliating or otherwise closing a case, unless doing so is "impracticable." If the investigation cannot be completed within this time frame, FHEO or the FHAP agency must notify the complainant and respondent in writing in what is called the "100-day letter." In our previous report, we found that the number of investigations completed within 100 days by FHEO or FHAP agencies increased significantly after 2001, partly in response to FHEO's initiative to reduce aged cases.

Although FHEO and FHAP agencies often responded promptly and provided useful guidance and information, our test calls and analysis of contact logs found that they were not always thorough or timely in carrying out intake activities. Our test calls, while not generalizable, suggest that potential complainants may have difficulty in making initial contact with an intake staff person; moreover, 30 percent of the complainants we surveyed reported such difficulty. Our test calls also showed that FHEO and FHAP agency staff sometimes did not seek information needed to determine whether a potential violation of the Act had taken place and to file a formal complaint, or gathered only limited information that might help the agency recontact the complainant or assess the urgency of the situation. Among the logged contacts that the agencies determined were potential violations of the Act and were recorded in TEAPOTS, half resulted in formal complaints. However, only 57 percent of these completed the process within the 20-day benchmark. Additionally, missing or inconsistent data suggest that TEAPOTS may be of limited usefulness as a management control over the intake process.

The intake process is the first contact prospective complainants have with the agencies responsible for enforcing the Act or an equivalent state law.
Depending on the quality of intake, potential complainants may or may not feel comfortable continuing the process, and those who do not may give up on pursuing their complaints. Thus, the agency's initial response to complainants plays an important role in the fair housing complaint process. However, our test calls revealed some potentially serious lapses in agencies' responses to complainants' inquiries.

First, we found that agencies did not always respond promptly to initial attempts to contact them to file a complaint and that, because of requirements that some agencies imposed, trying to file a complaint could be a challenging process. In 5 of the 46 calls, the agency did not return the test call, even after 3 attempts. In another 2 cases, the intake organization required that the caller provide intake information via the Internet or in person. As shown in figure 4, in 20 of the remaining 39 test cases, the caller spoke with a live person on initial contact. Of the 9 calls requiring a callback, 6 were not returned within 1 business day, and 3 were not returned for 3 or more days. Our survey of complainants suggests that they experienced difficulties similar to ours in contacting intake staff. An estimated 30 percent noted that it was either somewhat or very difficult to reach a live person the first time they contacted a fair housing agency, and 34 percent said they had difficulty contacting staff after the initial contact. These percentages were relatively constant regardless of whether FHEO or a FHAP agency handled the case and regardless of the case's outcome, with one exception: complainants whose cases were conciliated reported less difficulty contacting staff than complainants whose cases were closed with other outcomes.

We also found that intake staff did not seem to display a sense of urgency in dealing with complaints. Over half of the agencies (23 of 39) relied primarily on a form that the complainant must fill out (HUD-903 or state equivalent) to collect the information needed to begin an investigation and, in the initial phone call, requested little more than the complainant's name and mailing address. Using such a form to gather information for a potential complaint could take a week or more—during which the caller could lose a housing opportunity. Two other agencies would not mail a complaint form, insisting that the caller come in to the office to file a complaint. However, information from contact logs that the FHAP agencies and FHEO offices maintained for 4 weeks, at our request, showed that the most prevalent mode of contact was the telephone and that walk-in and Internet contacts together represented less than 5 percent. Given this situation, requiring potential complainants to appear in person posed an additional hurdle that could make it difficult for a complainant to continue with the process. Further, a test caller to one of these agencies, stressing the urgency of her situation, was informed that filing a complaint was a "slow process" and that her complaint would not be acted on for some time, whether intake was done over the phone or via the organization's form.

FHEO's annual performance goals do not include goals for the time it takes to return initial contacts from complainants. However, FHEO has established a 20-day benchmark for completing the intake process, starting with the date that the initial inquiry is recorded in TEAPOTS.
In commenting on a draft of this report, HUD's General Deputy Assistant Secretary for Fair Housing and Equal Opportunity stated that the agency tracks the time it takes to file a complaint from the point of initial contact and that a new initiative, the FHEO-OGC Case Processing Research Project, is expected to assist with decision making during the intake process, since it uses a triage system to determine case complexity.

Despite these inconveniences, when our test callers did reach the agencies, the staff treated them well. In none of our test cases did hold time exceed 3 minutes, and staff at several agencies spoke extensively with test callers, answering questions and providing guidance and information on the process. Our survey of former complainants who completed the investigation process showed similar findings. While complainants had difficulty reaching an agency, once they did, more than half said that agency staff did either a good or an excellent job of explaining the process and timing of each step.

When collecting intake information during our test calls, FHEO and FHAP agency staff focused primarily on collecting the complainant's name, address, and protected class, as well as a description of the discriminatory act. Staff sometimes did not ask for other information that would be helpful in recontacting the complainant or assessing the urgency of the situation. To systematically assess the thoroughness of the intake test calls, we identified criteria from the Act (the minimum elements of information needed to proceed with the complaint), HUD's Title VIII Handbook, and training materials from the National Fair Housing Training Academy. Additionally, we obtained information on best practices from a fair housing advocacy group as well as from HUD's training materials and interviews with agency officials. We categorized these criteria at four levels:

Level 1—information that, according to HUD policy, should always be collected during intake, though not necessarily during the first contact, regardless of the basis of the complaint or the protected class.

Level 2—information that is potentially applicable to all complaints and that should be collected during the intake process.

Level 3—information that is relevant to a particular basis or protected class—that is, information necessary to determine, for example, whether the complainant met a certain protected class (e.g., handicap or familial status).

Level 4—information that is considered to be a best practice—for example, information that may be used for testing.

We also discussed these criteria and the designated levels with agency officials. We measured the percentage of information elements collected at each of these levels during our test calls. The elements associated with each level, and the results we observed during our test calls, are shown in figure 5. While level 1 information should always be collected in order to proceed with a complaint, some of this information may not always need to be collected during the initial contact. However, HUD policy recommends that staff obtain as much information as possible during the initial intake interview. We also believe that level 1 items should, with little exception, be collected as part of the initial contact in order to have the information necessary to recontact the complainant and determine the urgency of the situation.

[Figure 5 lists the specific elements at each level—for example, how the complainant knew of the property/unit; whether the complainant wants the unit/property; whether an application was submitted; the outcome the complainant hopes to see from the complaint; whether the complainant discussed the complaint with any other organization; the complainant's disability, if any; and whether the complainant was informed that a credit check would be done or that the previous landlord would be contacted.]
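To show how the level-based scoring works arithmetically, the sketch below computes the share of each level's elements gathered in a single call. The element lists are abbreviated, hypothetical stand-ins for the full inventory in figure 5, not the actual criteria set.

```python
# Abbreviated stand-ins for the figure 5 element inventory (hypothetical subset).
LEVELS = {
    1: ["complainant name", "complainant address", "work phone", "alternate contact",
        "respondent name", "respondent organization", "respondent phone",
        "description of act"],
    2: ["desired outcome", "other organizations contacted"],
    3: ["disability, if any", "familial status details"],
    4: ["respondent gender", "respondent race"],
}

def coverage_by_level(collected):
    """Percentage of each level's elements that intake staff collected."""
    return {level: 100.0 * len(set(collected) & set(items)) / len(items)
            for level, items in LEVELS.items()}

# A call in which staff asked only for name, address, and a description of the act:
call = {"complainant name", "complainant address", "description of act"}
print(coverage_by_level(call))   # level 1: 37.5, levels 2-4: 0.0
```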
On average, intake staff collected approximately 44 percent of level 1 information, which HUD policy identifies as critical to collect during intake. For example, 21 agencies did not ask for the respondent's first and last name, and 15 did not ask for the respondent's organization. As indicated in figure 5, in only 8 agency test cases did intake staff ask the complainant for the name of an alternate contact person, and in only 7 did the intake staff request a work number—an important piece of information because, according to FHEO officials, many complainants are in the process of moving and are difficult to recontact. Also, intake staff asked for the respondent's telephone number only about half of the time.

Further, the agencies collected little information beyond level 1. On average, they gathered 8 percent of level 2, 11 percent of level 3, and 9 percent of level 4 information. The level 4 (best practice) information most often collected during our test calls included the respondent's gender (13 test calls), the respondent's race (9 test calls), and what the complainant hoped to see as a result of filing a complaint (8 test calls). (See fig. 5.)

Although HUD policy recommends that intake staff obtain as much information as possible regarding the aggrieved person's allegations during the initial intake interview, the amount of information collected on initial contact varied significantly by agency. One location collected nothing beyond the complainant's name and mailing address, while another collected up to 80 percent of the level 1 data. However, in none of the test calls did intake staff collect 100 percent of the level 1 information. While most agencies appeared to collect the remainder of the critical information through their intake forms (either the HUD-903 form or a state equivalent), this practice prevents the agency from taking any further action on the complaint until a signed form is received. According to HUD officials, the revised Title VIII Handbook contains standards for the information that HUD will collect during initial contact with a complainant. However, these and other standards may not apply to FHAP agencies, since their certification by HUD does not ensure that they follow identical procedures.

The time it takes to receive the form can delay the enforcement process, potentially resulting not only in the loss of a housing opportunity but also in complainants becoming frustrated with the process and deciding not to pursue their complaints. In particular, some complainants may need urgent attention, such as when they are about to become homeless because they are being evicted from their rental home or apartment or are losing the home they own in a foreclosure. HUD has authority under the Act in such urgent circumstances to take prompt judicial action by authorizing the Attorney General to initiate a civil action seeking appropriate temporary or preliminary relief pending final disposition of the complaint.
HUD's Title VIII Handbook establishes that intake staff has a critical role in identifying when a complaint may involve a situation warranting prompt judicial action. Time is critical, and the efforts of the intake staff are helpful in gathering sufficient information for determining when prompt judicial action may be necessary. This is also important for intake by the FHAP agencies, since they must have substantially equivalent authority to seek prompt judicial action.

Our prior report noted that while FHEO offices kept records of potentially Title VIII-related contacts, HUD had not required FHAP agencies to do so. (For that reason, when preparing our prior report, we were unable to determine the extent to which FHAP agencies met the goal of perfecting complaints within 20 days of initial contact.) FHAP agencies typically entered information only for perfected complaints. Accordingly, we recommended that HUD ensure that the automated case-tracking system (TEAPOTS) include complete, reliable data on key dates in the intake stage for FHAP agencies. The latest version of HUD's Title VIII Handbook (issued in May 2005) requires FHEO intake staff to record in TEAPOTS each inquiry that is potentially Title VIII-related, regardless of whether it results in a perfected complaint.

Our comparison of information from the logs—which intake centers kept, at our request, during February and March 2005—with TEAPOTS data highlights the need for reliable information regarding potentially Title VIII-related contacts and the dates of their occurrence. First, the analysis showed that a substantial number of potentially Title VIII-related contacts (68 percent) were not entered in TEAPOTS, and of those that were, about half resulted in perfected complaints. While there are valid reasons for this "attrition," our results suggest that staff are not recording in TEAPOTS a substantial number of potentially Title VIII-related contacts (inquiries in which the caller alleges housing discrimination and intake staff believe the call represents a potential Title VIII violation). Further, we found that, for those contacts that were entered into TEAPOTS, the initial contact dates shown in the logs were sometimes earlier than the corresponding dates of first contact shown in TEAPOTS. Thus, HUD's use of TEAPOTS data likely overstates its performance in meeting its 20-day intake timeliness benchmark. Without assurance that TEAPOTS is being used consistently, HUD is unable to account for potentially Title VIII-related contacts that do not appear in the system or to accurately measure the timeliness of those that are recorded, thus limiting TEAPOTS' utility as a management control.

To determine the volume of intake-related contacts received by fair housing agencies and the proportion of these contacts that resulted in perfected complaints, we had 47 sites record all incoming contacts—telephone calls, walk-ins, and Internet queries—that were related to fair housing over a 4-week period (February 22 through March 21, 2005). The 32 state FHAP agencies, 5 local FHAP agencies, and 10 FHEO offices that participated in the log exercise represented 78 percent of the volume of investigations in 2004. Specifically, we asked the sites to record the date, method, and purpose of each contact; the name of the person making the contact; whether the contact alleged having experienced housing discrimination; and whether the intake staff agreed that the matter pertained to a potentially Title VIII-related complaint.
During our tracking period, the sites recorded a total of 9,655 contacts. As shown in figure 6, the majority (80 percent) of contacts for which we had complete data were by telephone, and 42 percent were new potential complaints. Furthermore, a sizable share of initial contacts (approximately 24 percent) did not pertain to fair housing (confirming, as our prior report noted, that intake analysts receive numerous contacts that are not related to fair housing). The time necessary for handling these calls can place an additional burden on FHEO's limited resources.

Our review of the logs also showed that a sizable number of new potential complaints that intake staff believed could involve a Title VIII violation did not result in perfected complaints in TEAPOTS (see fig. 7). Agency staff coded 2,000 contacts as coming from a named individual whose allegations they believed both involved a new potential complaint and pertained to a potentially Title VIII-related violation. Of these 2,000 individuals, we were able to match 631 to a new inquiry shown in TEAPOTS. Of the 631 inquiries, 306 are shown in TEAPOTS as perfected complaints.

We sorted these data into two groups: contacts recorded by FHEO offices and contacts recorded by FHAP agencies. We found that the attrition rates differed somewhat between these groups. At FHEO sites, intake staff identified 1,347 unique individuals as having potentially valid new Title VIII complaints. Of these, 506, or 38 percent, were shown as unique inquiries in TEAPOTS, and 216 of those—16 percent of the original 1,347—resulted in perfected complaints shown in the automated system at the time of our analysis. FHAP sites identified 620 unique individuals with potential new Title VIII violations, of which 92, or 15 percent, were shown as unique inquiries in TEAPOTS, and 66, or 11 percent, became perfected complaints.

Attrition can occur for a number of reasons. For example, because some state laws contain additional protected classes, some calls that FHEO offices receive may involve matters that are covered under a FHAP agency's jurisdiction. Also, intake staff may believe that a contact pertains to a valid Title VIII violation at the outset but may later find that the respondent is exempt from Title VIII or that the 1-year statute of limitations has expired. Furthermore, the intake process is sometimes terminated because a complainant either does not cooperate with agency staff or resolves the issue with the respondent and voluntarily discontinues the complaint process. Finally, it is possible that, during our process of matching names in the logs to TEAPOTS records, a small number of matches were missed, due either to misspellings of names or to the timing of entry into the system.

Information in TEAPOTS provides some insight into the reasons why inquiries were not perfected during this period. In 20 percent of all inquiries for which TEAPOTS had data, the complainant failed to respond; in 13 percent of the cases, the intake staff found no valid basis for the complaint; and in 8 percent of the cases, the intake staff found no valid issue. However, TEAPOTS shows that 43 percent of the inquiries that were not perfected were coded as "Other Disposition," which means that no further information is available to indicate why the contact did not result in a perfected complaint.
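The matching and attrition arithmetic described above can be reproduced with a simple record-linkage pass. The sketch below, using pandas with hypothetical column names (not the actual log or TEAPOTS layouts), matches logged contacts to inquiries by name, computes match and perfection rates, and measures days-to-perfect from each of the two candidate start dates, the choice that drives the benchmark results discussed next.

```python
import pandas as pd

# Hypothetical extracts; column names are illustrative, not the actual layouts.
logs = pd.DataFrame({
    "name": ["ann lee", "bo cruz", "cy dole"],
    "log_contact_date": pd.to_datetime(["2005-02-23", "2005-03-01", "2005-03-10"]),
})
teapots = pd.DataFrame({
    "name": ["ann lee", "cy dole"],
    "inquiry_date": pd.to_datetime(["2005-03-02", "2005-03-11"]),
    "perfected_date": pd.to_datetime(["2005-03-20", None]),
})

# Exact-match linkage on name; in practice, misspellings would call for
# normalization or fuzzy matching, one reason a few matches can be missed.
merged = logs.merge(teapots, on="name", how="left")
print(f"matched to TEAPOTS: {merged['inquiry_date'].notna().mean():.0%}")
print(f"perfected:          {merged['perfected_date'].notna().mean():.0%}")

# Days to perfect, measured from each candidate start date. The earlier the
# start date, the fewer complaints appear to meet the 20-day benchmark.
from_log = (merged["perfected_date"] - merged["log_contact_date"]).dt.days
from_teapots = (merged["perfected_date"] - merged["inquiry_date"]).dt.days
print("within 20 days (log date):    ", (from_log <= 20).mean())
print("within 20 days (TEAPOTS date):", (from_teapots <= 20).mean())
```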
In commenting on a draft of this report, HUD's General Deputy Assistant Secretary for Fair Housing and Equal Opportunity stated that while HUD considered requiring FHAP agencies to record initial inquiry dates for all potential complaints, it has not done so because of the way HUD funds FHAP agencies. Specifically, HUD reimburses the FHAP agencies for each (actual) complaint that they investigate and does not reimburse them for their consideration of inquiries that do not result in complaints. However, this continues to leave HUD without data on, or knowledge of, a significant number of potential Title VIII-related inquiries or a means of assessing the FHAP agencies' responses to such inquiries.

Our prior report questioned the reliability of initial inquiry dates shown in TEAPOTS. Our analysis for this report raises further questions, because we found differences between the dates recorded in the contact logs and those in TEAPOTS for the same contacts. HUD policy requires that FHEO offices perfect or close all inquiries within 20 days of initial contact, and statistics generated by HUD show approximately 95 percent compliance with this policy. However, for internal measurement and benchmarking purposes, HUD begins counting the 20 days on the day the inquiry is entered into TEAPOTS. Because HUD does not track the actual initial contact dates, it cannot use them to begin measuring the 20-day period. Our analysis of the contacts recorded in the logs leading to the 306 perfected complaints noted above indicates that 57 percent of the complaints were perfected within 20 days of initial contact, based on the earliest contact date in the log. However, using the inquiry date in TEAPOTS as the starting point for the 20-day benchmark, as HUD does, indicates that 79 percent of the complaints were perfected within 20 days. In fact, the median number of days to perfect complaints was 11 using TEAPOTS inquiry dates, but 18 using the dates recorded in our logs. These results indicate that HUD lacks an accurate picture of how much time individuals wait between making an inquiry and learning the outcome of that inquiry and that HUD's reliance on TEAPOTS data leads to an inaccurate assessment of its performance in meeting its timeliness benchmark.

In reviewing investigative case files and associated TEAPOTS records, we found that some lacked evidence that required investigative standards were met, that investigators followed recommended planning and procedural guidelines, or that internal control measures were used and documented. Throughout this section, we present estimates of agency compliance with certain requirements and recommended practices based on our review of a random sample of 197 FHAP agency and FHEO investigations that were closed during the last half of 2004 either administratively, through conciliation, or with a finding of no reasonable cause. Unless otherwise noted, these estimates are subject to sampling error of plus or minus 8 percentage points or less. We also present results from our review of 12 of the 15 FHEO cases that completed the adjudication process and subsequent monitoring during the same period.
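For context on the reported sampling error, the half-width of a confidence interval for an estimated proportion can be computed directly. The sketch below assumes simple random sampling and a conventional 95 percent confidence level, which are assumptions on our part; GAO's stated plus or minus 8 percentage points reflects its actual sample design.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95 percent confidence interval for a
    proportion under simple random sampling; p = 0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# For the sample of 197 closed investigations:
print(f"{margin_of_error(197):.1%}")   # about 7.0 percentage points
```

That the simple-random-sampling figure (about 7 points) is close to the reported 8 points suggests only a modest design effect from the sample's structure.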
Our review of each case was limited to the contents of the case file and the associated TEAPOTS records (that is, we did not interview case investigators, other officials involved in the case, complainants, or respondents), and it is important to note that the lack of evidence we found does not necessarily indicate that required or recommended steps were not taken. However, the lack of evidence does raise questions about HUD's ability to ensure that investigations are as thorough as they need to be.

The Act sets several standards for investigators to follow during the complaint process. First, investigators must establish and document four jurisdictional elements to ensure that the complaint is covered under the Act. Second, certain notifications must be sent to, and received by, the complainants and respondents. Third, a Final Investigative Report (FIR) must be prepared at the end of each investigation. As a practical matter, this means a FIR is required for investigations that conclude with a determination of reasonable cause to believe a violation has occurred and for investigations closed with a determination of no reasonable cause (i.e., there is no reasonable cause to believe a violation has occurred). Investigators must also meet a 100-day deadline for completing an investigation unless it is impracticable to do so. If the investigation is not completed in 100 days, complainants and respondents should be notified in writing of the reasons.

Our review of investigative case files completed during the last half of 2004 showed that the elements of jurisdiction we measured were addressed in nearly all of the no-cause cases (see fig. 8). Similarly, we observed that, with one exception, all of the cause case files we reviewed addressed the elements of jurisdiction we reviewed. The high incidence of documentation addressing jurisdiction may be explained by the fact that jurisdiction should be verified throughout the complaint process; consequently, there is more than one opportunity to identify deficiencies. FHEO officials at one location said that in addition to an intake analyst, the intake and enforcement or compliance branch chiefs also reviewed complaints before investigations began. However, they also said that in some instances, complaints for which jurisdiction had not been established had been inadvertently accepted and later found to lack a required jurisdictional element. For example, one case we reviewed showed that as the investigation proceeded, investigators determined that the respondent was exempt because the respondent did not own more than one rental property. The Act provides that certain properties and property owners are exempt, and owners who do not have an interest in more than three single-family homes or condominium units meet the guidelines for exemption.

Although most files contained evidence that jurisdictional elements had been addressed, we did not always find evidence that complainants and respondents received required notifications. As noted above, the Act requires the fair housing agency handling the complaint to send complainants an initial notice acknowledging the filing of the complaint. Further, HUD's complaint notification letter advises complainants of certain guidelines of the complaint process. Respondents must be served by certified mail or personal service with an initial notice of the original complaint no later than 10 days after the complaint is filed or the respondent is identified.
Respondents must also be notified whenever a complaint is amended (complaints may be amended at any time during an investigation to add or remove parties, and complainants must sign the new complaints). Complainants and respondents should also be notified when an investigation is closed. We therefore looked for evidence indicating that these requirements were followed in all cases. HUD regulations require that complainants be notified by certified mail or personal service, but FHEO officials said that some FHAP agency procedures do not require this.

While we found initial notification letters addressed to complainants in 97 percent of the files and letters to respondents in 91 percent for closure types other than reasonable cause, we frequently did not find evidence that the letters had been received (see fig. 9). The lack of evidence in case files that complainants and respondents had received initial notifications does not necessarily mean that they did not in fact receive the notices; some FHEO officials told us that certified mail receipts were sometimes maintained in separate files. We looked not only for evidence such as return receipts from certified mail or certificates of personal service, but also for correspondence indicating receipt or knowledge of the complaint notification. Fifty-nine percent of these cases contained evidence that complainants had received initial notifications, and 67 percent contained evidence that respondents had received them. Our survey of complainants who completed the investigation process also indicates that some may not have received initial notifications: 86 percent said they received a letter informing them whether an investigation would be conducted. We did not observe a significant difference in documentation between FHEO and FHAP agency files. For the cases where HUD had determined reasonable cause, we found initial notification letters addressed to complainants in 9 of the 12 files and letters to respondents in 10. Evidence of receipt was greater for reasonable cause cases than for other closure types—8 and 9 of the 12 files, respectively, contained evidence that complainants and respondents had received initial notification letters.

Some cases did not contain evidence that final closure notices had been addressed to complainants and respondents (see fig. 10). For closure types other than reasonable cause, 10 percent and 21 percent of the files, respectively, did not include copies of closure letters addressed to complainants and to all named respondents. Investigations conducted by FHAP agencies were more likely to lack closure notices addressed to respondents (26 percent) than FHEO-investigated cases (8 percent). Similarly, FHAP agency-investigated cases lacked evidence of closure notices addressed to complainants 14 percent of the time, compared with 2 percent for FHEO-investigated cases. About 9 percent of the complainants we surveyed said they did not receive notification that the case had been closed. For the cases where HUD had determined reasonable cause, more notices of the reasonable cause determination were addressed to respondents than to complainants. Specifically, 5 of the 12 files did not include notices addressed to complainants informing them that the FHEO investigation was complete and that reasonable cause had been found; 2 of the files did not include such notices addressed to respondents.
For reasonable cause determinations, HUD's regulations require that all parties to a complaint be notified of the determination by certified mail or personal service. For no reasonable cause determinations, the parties must be notified by mail, and the notification must include a written statement of the facts upon which the determination was based. HUD guidance also states that for administrative closures, all parties and their designated representatives must be notified by regular or certified mail.

Fifteen percent of the case files for closure types other than reasonable cause had evidence that the complaint had been amended. We did not always find evidence that copies of the amended complaints had been received by all respondents, even though the statute requires that all respondents receive them. Finally, not all of the cases, where applicable, contained evidence that new respondents received a copy of the complaint. As stated, FHEO officials noted that not all FHAP agency procedures require certified mail notices, while HUD notifications are sent by certified mail. As with initial notifications, we looked not only for evidence of certified mail or personal service, but also for other forms of evidence that a notification was made, including response letters or subsequent correspondence indicating the parties' knowledge. FHEO officials noted that the absence of notices in FHEO case files is more likely a clerical omission than a failure to follow procedure.

Our file review showed that the Final Investigative Report (FIR) required for reasonable cause and no reasonable cause outcomes and the Determination showing the outcome of a case were not always present. These documents are intended to demonstrate that investigations were thorough and that the investigator's conclusions were founded in fact and evidence. FIRs, which HUD guidance states fulfill the statutory requirement to document investigations, are used as a basis for preparing the charge in reasonable cause cases. FIRs should summarize the allegation and evidence, including such things as dates and summaries of contacts with parties, witness statements, descriptions of pertinent records, and answers to interrogatories. The Act requires FHEO, following the completion of the investigation, to make information derived from the investigation—including the FIR—available upon request to the parties to the complaint. The Determination includes the elements of jurisdiction as well as a summary of the complainant's allegations, the respondents' defenses, and the investigator's findings and conclusions.

For cases where HUD had determined reasonable cause, we found the FIR and Determination in all 12 of the files. However, for cases closed with a Determination of no reasonable cause, the FIR was missing in 5 percent of the files and a Determination was missing from 8 percent. The percentage of FIRs for no reasonable cause cases was similar for FHAP agency and FHEO files.

HUD requires that FIRs and Determinations of reasonable cause and no reasonable cause be approved by the FHEO regional director; FHAP agency managers approve and sign these documents for FHAP agency-investigated cases. For cases where HUD had determined reasonable cause, we found that 11 FIRs and 10 Determinations had been signed. For cases with no reasonable cause outcomes, 71 percent of Determinations were signed, compared with only 45 percent of FIRs.
For no reasonable cause outcomes, FHEO files showed more evidence of Determination approval—100 percent compared with 60 percent of FHAP agency Determinations. FHEO noted that missing signatures on FIRs and Determinations are more likely an oversight than a question of thoroughness or lack of review. FHEO also noted that case files will not document informal means of review.

As noted above, the Act requires that fair housing investigations be completed within 100 days from the date the complaint was filed, unless it is impracticable to do so. An investigation is completed when a Determination or charge is issued, a conciliation agreement is executed, or the complaint is otherwise closed. We estimate that 98 percent of all cases with closure types other than reasonable cause—including relatively more no reasonable cause cases—took more than 100 days to complete. If investigators do not meet the 100-day deadline, the investigating agency is required to notify both complainants and respondents in writing, explaining why the investigation is not complete. For these closure types, we found 100-day letters in each file whose investigation took more than 100 days to complete. However, in about two-thirds of these cases, the 100-day letter was sent after 100 days had passed. Moreover, about 14 percent of the notices we found were dated more than 130 days after the HUD filing date (see fig. 11). For cases where HUD had determined reasonable cause, all 12 investigations lasted longer than 100 days, but 2 of the files did not contain copies of 100-day letters.

According to FHEO officials in one location, for cases with complex issues, it was often difficult to meet the 100-day investigative requirement and conduct a thorough investigation. Officials in another FHEO location said that the 100-day time frame was a critical factor from day one and that a new initiative had been implemented to track and focus on cases at the 50-day mark. The FHAP agencies' record of meeting the 100-day requirement is directly tied to their performance ratings and to the reimbursement they receive for completed cases. Officials at one FHAP agency stated that the 100-day requirement was a priority for each new investigation and that their agency had established shorter internal investigative deadlines, using its own streamlined process to help meet the 100-day requirement. In our last report, we observed that the percentage of investigations completed within 100 days increased between 2001 and 2003, particularly for FHEO cases. Specifically, the percentage of FHEO investigations completed within 100 days increased from 17 percent in fiscal year 2001 to 50 percent in fiscal year 2003. We also noted that FHEO hub directors reported that the 100-day benchmark and the simultaneous need to conduct a thorough investigation were sometimes competing goals.

In addition to the statutory requirements, HUD guidance recommends a number of activities that contribute to a more complete investigation. Among these are preparing investigative plans, conducting on-site visits and interviews, and requesting policy and procedure information from respondents. The guidance also recommends that investigators follow certain procedures before closing a case administratively. HUD officials said that an investigation may be thorough without following each recommended practice; moreover, as noted above, a lack of evidence does not necessarily indicate that a procedure was not followed.
However, the relative infrequency with which some practices were used, according to documentation in the case files, raises questions about investigation thoroughness.

HUD guidance states that investigative plans are critical to efficient and effective investigation. The guidance also provides extensive instruction on preparing plans and adds that, in developing investigative plans, investigators and their supervisors should consult with Regional Counsel. According to HUD's revised Title VIII Handbook, investigative planning allows supervisors and investigators to ensure that the scope of the investigation is carefully tailored for adequate investigation of all claims made in the complaint, and careful planning should also prevent "over-investigation" of claims. However, FHEO officials stated that most experienced investigators do not prepare investigative plans except for technical and very complex cases, since investigators are familiar with the procedures for more common discrimination cases.

Our file reviews showed that 62 percent of the cases with closure types other than reasonable cause did not include investigative plans. Further, the plans we found were not as detailed as the guidance suggests. For example, while the type of discrimination was identified in 93 percent of plans, the theory of discrimination was identified in 65 percent. (The confidence intervals for these estimates are 84 percent to 98 percent and 54 percent to 76 percent, respectively.) According to HUD's Title VIII Handbook, violations of the Fair Housing Act may be established under either (1) a disparate treatment theory, also known as "discriminatory intent," or (2) a discriminatory impact theory, also known as "discriminatory effect." The disparate treatment theory includes overt discrimination cases, where there is direct evidence of intentional discrimination, and other cases, where there is only circumstantial evidence supporting an inference of a discriminatory motive. Further, there are single motive cases and mixed motive cases. The particular theory of the case determines the evidence needed to prove or rebut the allegations. These theories developed in federal employment discrimination cases, but they are generally applied by the courts to cases brought under the Fair Housing Act.

Also, planned interviews with complainants and respondents were specified in 59 percent and 62 percent of plans, respectively, and a list and sequence of interview questions was present in 27 percent of plans (see fig. 12). For cases where HUD had determined reasonable cause, we found an investigative plan in 10 of the 12 files, but these also contained little detail. For example, the type of discrimination was identified in all 10 of the plans, but the theory of discrimination was identified in only 3. Additionally, only 2 and 3 of the plans, respectively, specified interviews with complainants and respondents, and a list and sequence of interview questions was present in only 2 of the plans. With regard to supervisory review, officials at the agencies we visited stated that there was no established procedure for documenting review of investigative plans.

We found substantially more investigative plans in FHEO case files (74 percent) than in FHAP files (24 percent). There was little variation in the percentage of cases with investigative plans across closure types (see fig. 13).
HUD's training manual for fair housing investigators states that interviews are a vital part of collecting evidence and that they allow an investigator to probe for additional information that otherwise might not be provided. The manual recommends that investigators interview not only complainants and respondents, but also other individuals, such as witnesses for the parties and independent witnesses. FHEO officials noted that there are circumstances in which complainant or respondent interviews are not necessary, such as when a case is conciliated before the investigation has begun or when an investigator determines that a respondent is exempt under the Act.

The majority of cases with closure types other than reasonable cause included interviews with the parties to the complaint, but 28 percent did not include interviews with respondents. Cases with no reasonable cause outcomes were more likely to include interviews with complainants and respondents (see fig. 14). FHEO cases for these closure types were more likely than FHAP cases to include interviews with complainants: we estimate that FHEO cases did not include interviews with complainants 8 percent of the time, compared with 21 percent of the time for FHAP cases. We found at least one respondent interview in all cases where HUD had determined reasonable cause. By contrast, only seven of these cases included interviews with complainants.

In a number of cases, investigators interviewed complainants and respondents once or twice: 47 percent of cases with closure types other than reasonable cause showed one or two complainant interviews, and 40 percent showed one or two interviews with a respondent or representative. We found that the content of interviews recorded in the TEAPOTS database varied widely. We also looked for evidence explaining why interviews were not conducted; while we often could not find documentation as to why complainants and respondents were not interviewed, the reason was apparent in a number of cases.

We also looked at the time frame within which the parties were interviewed for the first time following the official HUD filing date. The 100 days that HUD allocates for conducting fair housing investigations begin on the date a complaint is officially filed with HUD. HUD's training manual for fair housing investigators suggests that investigators interview complainants first to clarify the details of the allegation and to evaluate the viability of the complaint. We found that complainants were typically interviewed first but that, in a number of cases, initial investigative interviews were conducted weeks and even months after the complaint had been filed. For closure types other than reasonable cause, 65 percent of cases with complainant interviews showed the first investigative interview occurring more than 2 weeks after the complaint was filed; this was also the case for 67 percent of respondent interviews. Further, 52 percent of cases with complainant interviews showed the first investigative interview occurring more than 1 month after the complaint was filed; this was also the case for 46 percent of respondent interviews. FHEO officials noted that it may be appropriate to conduct a respondent's initial interview after first receiving documentation; respondents have 10 days to provide a response to the complaint.
FHEO officials noted that, in addition to caseload, the time involved in mail delivery and the difficulty of locating people who have moved can delay initial interviews with the parties. Officials at some of the FHAP agencies we visited indicated that time may be of the essence in housing discrimination cases, since housing opportunities may be lost during the initial filing period. For pending evictions, one agency's practice was to request an eviction abeyance through the court or to petition the landlord to postpone eviction pending a speedy investigation. To expedite the filing process, the intake coordinator at another agency may go on site to have the complaint form completed and signed in order to begin the investigation. HUD's guidance does not require that an investigator visit the site of the alleged discriminatory act, such as the subject dwelling and respondent's place of business, and FHEO officials stated that many cases do not require an on-site visit. However, the guidance also states that an on-site visit is the most efficient way to conduct an investigation in most situations involving fair housing complaints. Additionally, it states that while such a visit may not appear necessary at the beginning of an investigation, issues may develop as a case progresses that support the necessity of an on-site investigation. One FHAP agency we visited had a policy of conducting on-site interviews or inspections for each investigation, and officials there reported that 95 percent of their cases included visits to the property in question to conduct interviews and to obtain supporting documentation. In contrast, officials at FHEO offices and other agencies said that limited financial resources and time constraints may prevent investigators from including on-site visits as a routine part of their investigations. About three-quarters of the cases we reviewed with closure types other than reasonable cause showed no evidence that investigators made on-site visits. For non-cause cases that included an on-site visit, we found that investigators toured the property in question, collected physical evidence such as photographs, and interviewed complainants and respondents (see fig. 15). We saw no statistically significant difference in the use of on-site visits among the non-cause closure types, but we found substantially more on-site visits documented for cases where HUD had determined reasonable cause. Ten of these cases included on-site visits, and investigators generally carried out more activities while on site: the investigator toured the property in 4 cases, physical evidence was collected in 9, and both complainants and respondents were interviewed in 7 of the 10 cases. HUD's training manual for fair housing investigators recommends that investigators request information on respondents' policies and procedures so that the established policies and procedures can be compared with the alleged discriminatory practice. Policy and procedure documents may take a variety of forms, including lease agreements and housing covenants. We found that 74 percent of cases for closure types other than reasonable cause contained evidence that investigators requested policy and procedure information from respondents. Although policy and procedure documents are not always necessary to establish reasonable cause, we found that such documents were requested in all of the cases where HUD had determined reasonable cause.
The manual also suggests that investigators request comparative information, especially in cases alleging unequal treatment, about persons in the same protected class as the complainant and persons not in the complainant's protected class. We found that the files for 58 percent of cases for closure types other than reasonable cause showed that comparative information was requested. FHEO officials noted that for cases involving refusal to rent, they generally would expect investigators to collect such information; however, in cases involving design and construction for access by persons with disabilities, such information typically is not necessary. For cases with closure types other than reasonable cause, we found that 30 percent of cases where refusal to rent was at least one issue did not show evidence that comparative information was requested. For cases where HUD had determined reasonable cause, we saw evidence in 10 of the 12 files that comparative information was requested. Comparative information was requested most often for cases with a finding of no reasonable cause (see fig. 16); specifically, we estimate that 82 percent of no reasonable cause cases had comparative information requests. FHEO officials believe that recommended practices for planning investigations, interviewing complainants and respondents, conducting on-site visits, and seeking policy information need not be carried out in every case; in their view, every case is unique and each investigation should be tailored to the case. Our prior report noted that at least one FHAP agency had developed software that automatically generated a list of critical documents that were usually needed for certain types of investigations. According to officials of this FHAP agency, this system improved the quality of investigations and decreased the length of cases; an illustrative sketch of this kind of checklist generation appears below. Finally, we found that investigators documented multiple attempts to reach complainants before closing complaints administratively. Twenty-one percent of the cases we reviewed had been closed administratively, most of them because complainants withdrew their complaints. Relatively few of the administrative closures resulted from a complainant who could not be located or who was uncooperative with the investigator. Nonetheless, investigators documented as many as 11 attempts to contact uncooperative complainants by telephone, certified mail, and regular mail. The investigating agency is required to notify the parties of administrative closures as with other closure types, and 82 percent of administrative closures had evidence that such notices had been addressed to the parties. Selected information entered in TEAPOTS was generally consistent with the information found in source documents in the case files, but use of the system varied considerably among agencies conducting fair housing investigations. Complete and reliable TEAPOTS information is important for each case, since the database is used to record activities and information throughout the investigation and subsequently serves as a resource for preparing investigative reports. In addition, HUD officials point to TEAPOTS as a control for determining that investigations are conducted in accordance with statute and regulation. Unless evidence is entered as an investigation proceeds, TEAPOTS cannot provide an accurate representation of that evidence.
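The following minimal Python sketch illustrates how checklist-generation software of the kind that FHAP agency described might work. It is our own illustration, not the agency's actual system; the issue names and document lists are hypothetical.

# Illustrative sketch only: maps complaint issues to the critical documents
# usually needed, as the FHAP agency's software reportedly did. The issue
# names and document lists below are hypothetical examples.
CRITICAL_DOCUMENTS = {
    "refusal to rent": [
        "rental application and waiting list",
        "occupancy records for the property",
        "records on comparable applicants outside the protected class",
    ],
    "discriminatory terms": [
        "lease agreements for all current tenants",
        "written rental policies and procedures",
    ],
    "design and construction": [
        "architectural plans and building permits",
    ],
}

def document_checklist(issues):
    """Return a deduplicated list of critical documents for a complaint's issues."""
    checklist = []
    for issue in issues:
        for document in CRITICAL_DOCUMENTS.get(issue, []):
            if document not in checklist:
                checklist.append(document)
    return checklist

# Example: a complaint alleging both refusal to rent and discriminatory terms.
print(document_checklist(["refusal to rent", "discriminatory terms"]))

A simple table of this kind is enough to give every investigator the same starting checklist for a given complaint type, which is the consistency benefit the agency's officials described.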
As part of our review of case files, we traced information in TEAPOTS relating to the basis and issue of discrimination, the date of the last alleged violation, and the HUD filing date, and found that it generally matched source document information, with some exceptions. For the more common complaint issues, discriminatory terms and refusal to rent, the information matched 83 percent and 81 percent of the time, respectively. Similarly, with regard to the discrimination basis, the information matched 92 percent and 84 percent of the time for race and national origin, respectively, but the evidence is less clear for color. For familial status, the information matched 86 percent of the time. An estimated 91 percent of the last alleged violation dates in TEAPOTS matched the source documents, and filing dates in TEAPOTS matched those in the source documents in 84 percent of cases. In reviewing TEAPOTS reports for investigation details, we saw that the amount of information varied a great deal depending on which agency had investigated the case and entered the information. In some instances, entire sections, such as those for recording interviews, case chronology, and the investigator's findings and conclusion, had not been completed. Without a complete record of an investigation, investigators may be unable to utilize TEAPOTS functions for preparing investigative reports. Further, because the statute requires that information derived from investigations, including the FIR, be made available to complainants and respondents upon request, it is important that all appropriate details be included. Although HUD requires fair housing agencies to attempt conciliation throughout the complaint process, our review of case files, survey of complainants, and test calls revealed that FHEO and FHAP agencies did not always attempt to conciliate complaints, made limited efforts to do so, or did not meet HUD's requirement that they document these efforts. While having the same fair housing specialists act as both investigators and conciliators is permitted, investigators who must also pursue conciliation may focus instead on investigative activities, particularly considering FHEO's emphasis on completing investigations within 100 days. FHEO's Title VIII Handbook states that conciliation should be discussed during the initial intake interview and should be noted in the standard notification letters to the complainant and respondent. Further, the statute requires that conciliation be attempted, to the extent feasible, beginning with the filing of the complaint and concluding with the issuance of a charge on behalf of the complainant or the dismissal of the complaint. For time-sensitive complaints, conciliation may be the most effective procedure and, given that the resources of FHEO and FHAP agencies are limited, an effective means of reducing staff workloads. FHEO's General Deputy Assistant Secretary for Fair Housing and Equal Opportunity noted that conciliation is an integral part of the complaint process.
He noted that FHEO officials have "confidence that every party is informed during the initial interview or contact of HUD's statutory obligation to attempt conciliation and is asked about the possibility of conciliating the complaint and what it might take to effect conciliation." According to our survey of complainants, however, fewer than half (42 percent) were offered assistance with conciliation (see fig. 17). In 26 percent of the cases, complainants said staff suggested that the parties work out their differences on their own. Further, during the test calls we placed to FHEO and FHAP agencies, we found the possibility of conciliation was discussed in only 18 percent of the locations we called: FHAP agencies mentioned conciliation 23 percent of the time, and FHEO did not mention conciliation at all. Based on our survey of complainants, we estimate that nearly 90 percent of complainants who were offered conciliation accepted it. About 12 percent of complainants said they sought help with conciliation through other organizations. The percentages did not vary depending on whether cases were investigated by FHEO or a FHAP agency, but did vary by closure type: complainants who were offered conciliation were far more likely to have their cases end with a conciliation outcome. We estimate that complainants with conciliation outcomes were offered conciliation 67 percent of the time, while those with no-cause outcomes were offered conciliation just 27 percent of the time. Overall, we estimate that parties involved in fair housing complaints were more likely to reach conciliation agreements when FHEO or a FHAP agency was involved. For example, we estimate that complainants reached agreement 64 percent of the time when FHEO or a FHAP agency assisted with conciliation, but only about 35 percent of the time when another organization handled the conciliation. Complainants who worked with FHEO or a FHAP agency were satisfied with the outcome about 81 percent of the time and with the other party's compliance with the agreement more than 90 percent of the time, compared with 47 percent and 90 percent, respectively, for complainants working with other organizations. Our previous report noted that one FHAP agency was experimenting with a separate "mediation track" when handling complaints. At this agency, mediation attempts occurred early in the process, during intake, and involved a professional private-sector mediator. The mediation had usually satisfied the parties, resulting in timely resolution of cases and beneficial outcomes. Two FHAP agencies we visited for this study offered mediation. Further, other FHAP agencies had begun offering mediation during the intake stage, and the complainants' decision to participate or not participate in mediation was almost always documented. One FHAP agency, in its notification letter to the complainant, offered a choice between two options: conciliation or investigation. Staff from one FHEO regional office noted that the use of mediation by a FHAP agency in their region reduced complaints by approximately one-third. During our file review of no-cause cases, we found no documentation of conciliation attempts around a third of the time and often no documentation of contacts with either party to attempt conciliation.
Specifically, an estimated 36 percent of the files contained no evidence that complainants had been contacted to attempt conciliation, while 30 percent of files lacked evidence that respondents had been contacted. For the 12 cases with outcomes of reasonable cause that we reviewed, we found documentation that conciliation was attempted in 11. When an agency contacted the parties to a complaint, it often did so only once or twice; for example, we estimate that 21 percent of complainants in cases with no-cause outcomes were contacted only once. As indicated in figure 18, we also found that conciliation attempts varied somewhat by closure type. Conciliation was attempted with complainants less often for cases that were closed administratively. However, for all cases, we found that the information on conciliation contained in the case file varied tremendously, with some cases noting only "conciliation discussed," while others included significant detail. As with other aspects of our file review, the lack of evidence of conciliation attempts does not necessarily indicate that such attempts did not occur; attempts may simply not have been documented. However, according to FHEO's Title VIII Handbook, the lack of detailed documentation of conciliation attempts can be problematic. For example, the Handbook noted that FHEO Headquarters or the Office of General Counsel occasionally questioned the sufficiency of conciliation efforts for cases forwarded to FHEO Headquarters with recommendations of a determination of reasonable cause. Further, it stated that when these questions were asked, the case was often remanded to the agency handling it, with instructions to undertake additional late-stage conciliation efforts. The Title VIII Handbook noted that a number of these remands resulted not from a lack of initial conciliation attempts, but rather from a lack of documentation of conciliation attempts in the case file. Conciliation attempts are to be documented in TEAPOTS before a case can be closed, and TEAPOTS includes a separate section that allows the intake specialist and investigator to document conciliation efforts. While HUD relies on TEAPOTS as a control to assure that investigations, including conciliation attempts, are performed in accordance with statute and regulation, information that we obtained from TEAPOTS shows that its use varied from one location to another. For example, the descriptions of conciliation attempts varied in detail, and the case information recorded by some FHEO and FHAP offices did not include a chronological listing of conciliation attempts as suggested by the Title VIII Handbook. Finally, we found that conciliation agreements were generally well documented. Specifically, an estimated 91 percent of cases closed with conciliation included copies of conciliation agreements in the case file. A HUD conciliation agreement is a written, binding agreement to resolve the disputed issues in a Title VIII housing discrimination complaint; it must contain provisions to protect the public interest in furthering fair housing and, under the Act, requires HUD approval. Approximately 95 percent of these agreements were signed by all parties, and approximately 90 percent were approved by FHEO or the FHAP agency.
Based on our survey, complainants whose cases were conciliated were more positive about this outcome than those who experienced administrative closure or a finding of no-cause, but many complainants felt at least some pressure to conciliate their complaints, most commonly because they felt their cases would not be handled otherwise. As shown in figure 19, 52 percent of the complainants surveyed indicated that they felt a little to a great deal of pressure to resolve their cases. In commenting on a draft of this report, the General Deputy Assistant Secretary for Fair Housing and Equal Opportunity noted that the Fair Housing Act mandates attempts at conciliation and that this statutory construct may result in what complainants perceive to be pressure to resolve their case. Our survey of complainants indicated that conciliation tended to work when FHEO or a FHAP agency was involved (over 64 percent of such efforts resulted in a conciliation agreement). However, a significant number of surveyed complainants indicated that this opportunity was not presented to them. Moreover, those who sought conciliation assistance from any other organization were less likely to reach a satisfactory outcome; only 47 percent were satisfied with the result. Our previous report noted that investigators at some FHEO locations and FHAP agencies customarily conciliated their own cases, while other locations usually used separate investigators and conciliators. Also, our previous report noted that officials were divided on the impact of this practice. Some officials told us that having the same person perform both tasks had not caused problems. Other officials, including some at locations where investigators conciliated their own cases, indicated a preference for having different people perform these tasks. One official said that separating these tasks enabled simultaneous conciliation and investigation of a complaint, a practice that sped up the process. Another official noted that parties might share information with a conciliator that they would not share with an investigator and that a conflict of interest might result if one person tried to do both. The same official said that although investigators were not allowed to use information they learned as conciliators during investigations, the information could still influence the questions they posed, and thus the information they learned, as investigators. Similarly, at one FHEO hub, an OGC official told us that information learned as a result of conciliation efforts should not be included in investigative findings. A few enforcement officials at locations that did not separate the functions said they did not have enough staff to have separate conciliators. We recommended in our previous report that FHEO establish a way to identify and share information on effective practices among its regional fair housing offices and FHAP agencies. According to the Title VIII Handbook, conciliation may be a fertile source of information regarding a respondent's housing practices. However, the Handbook notes that nothing said or done during the course of conciliation can be used in the investigator's reasonable cause recommendation, in the final investigative report, or in any subsequent Title VIII enforcement proceeding. Information discovered during conciliation should not be made public without the written consent of the persons concerned.
Although information discovered during the conciliation process cannot be factored into the investigator's recommendation, the same information, if discovered outside of the conciliation process, may be used. For example, if respondents make an admission during conciliation negotiations, investigators cannot use that admission in their recommendations; however, if respondents make the same admission in a later deposition, investigators can use it. In our previous report, we also noted that some HUD locations we visited put investigations on hold when conciliation looked likely, while others did not. Some fair housing officials at the locations that simultaneously investigated and conciliated cases told us that doing so not only expedited the enforcement process but could also facilitate conciliation. Two FHEO hub directors told us that because the parties were aware that the investigation was ongoing, they were sometimes more willing to conciliate. Additionally, some officials at the offices that delayed the investigation while attempting conciliation told us that this practice increased the number of calendar days necessary to investigate a case. However, one FHEO hub official told us that simultaneous investigation and conciliation could waste resources, as it might not be necessary to obtain further evidence in a case that would be conciliated. Overall, 6 of the 10 hub directors told us that simultaneous investigation and conciliation had a great or very great impact on the length of the complaint process, and all 6 said that the practice decreased the length. During our current review, officials from one FHEO regional office noted that using separate conciliators would definitely help make their investigative process more effective. However, due to staffing constraints, they believed it was impractical to do so without a significant increase in staff. These officials noted that they needed additional staff to speed up the investigative process, separate investigation from conciliation, conduct more thorough investigations, and more effectively monitor compliance with conciliation agreements. Based on our survey, an estimated 44 percent of complainants were somewhat or very satisfied with the fair housing complaint process, while about half were either somewhat or very dissatisfied. Similarly, nearly 60 percent were dissatisfied with the outcome of the fair housing complaint process, and almost 40 percent would be unlikely to file a complaint in the future. Complainants' dissatisfaction varied for each stage of the complaint process, as well as by type of complaint closure (administrative, conciliation, and no-cause finding), with complainants in no-cause cases expressing the least satisfaction with various aspects of investigations. However, according to FHEO, the low satisfaction levels of complainants with a no-cause finding are not wholly unexpected, given that these complainants failed to receive the desired outcome and thus question the process that produced it. Overall, about 34 percent of all complainants were satisfied with both the process and the outcome; conversely, 48 percent of all complainants were dissatisfied with both the process and the complaint outcome (see fig. 20).
When looking at complainants' overall satisfaction level, we found no significant differences by type of agency, that is, between cases investigated by FHEO and those investigated by FHAP agencies. However, satisfaction varied by closure type. For example, over two-thirds of those with a no-cause finding were dissatisfied with both the process and the outcome of their complaint (about 68 percent). In contrast, over two-thirds of those whose cases were closed through conciliation were satisfied with both the process for handling their complaint and its outcome (about 68 percent). About 43 percent of the complainants with administrative closures were dissatisfied with both the process and outcome. While an estimated one-half of all complainants, regardless of their case outcomes, were either somewhat or very dissatisfied with their experience with the overall complaint process, the percentages expressing dissatisfaction with the intake and investigative stages were smaller (see fig. 21). Specifically, about 71 percent of complainants were somewhat to very satisfied with the intake stage and nearly 55 percent with the investigative stage. In addition, we found that complainants who were dissatisfied with the outcome of the complaint process were also likely to express dissatisfaction with the process itself. These results were consistent across FHEO offices and FHAP agencies. Further, complainants with a no-cause finding were more likely to be dissatisfied with both the process and outcome than complainants whose complaints were conciliated or closed administratively. For example, complainants with a no-cause outcome were somewhat to very dissatisfied with the process 72 percent of the time (23 percent were satisfied) and with the outcome of their cases 84 percent of the time, while those with a conciliation outcome reported dissatisfaction levels of 21 percent with the process (75 percent were satisfied) and 25 percent with the outcome. About 40 percent of complainants said they would be somewhat to very unlikely to file any future complaint with the same fair housing agency. These results did not differ significantly by type of agency but did differ by closure type. For example, complainants with a no-cause outcome said they would be somewhat to very unlikely to file another complaint about 56 percent of the time, compared with 14 percent of those whose cases were conciliated. Generally, complainants' level of satisfaction with the process and its outcome did not vary with the expectations they had before talking to anyone at a fair housing organization, with a few important exceptions. About 20 percent of complainants reported that they expected that the fair housing organization would not help both sides equally; these complainants were significantly more dissatisfied with the overall process and its outcome. The same was true for the approximately 30 percent of complainants who expected the fair housing organization to get them a financial award. Half of all complainants had expectations that were not listed in our survey, and these complainants were significantly more dissatisfied with each stage of the process, the overall process, and its outcome. These results may indicate that some complainants have different or greater expectations than others. According to our survey, about 71 percent of complainants were somewhat or very satisfied with the intake process (see fig. 22).
More than half of the complainants reported that they received clear information during the intake process and that intake staff were courteous and generally acted promptly. Yet a substantial number of complainants gave poor ratings to specific aspects of the process, citing difficulty contacting intake staff and the lack of timeliness of some intake activities. These opinions generally held across agency and closure types. Complainants reported that intake staff at both FHEO and FHAP agencies provided understandable information more than half of the time, including satisfactory explanations of an agency's decision on whether to pursue an investigation. In general, about 60 percent of the time complainants told us that they received clear information on the likely length of each step in the process and explanations of the complaint and investigative process. Moreover, an estimated 66 percent of complainants were very or somewhat satisfied with the way the organizations explained their decision to pursue or not pursue a case (see fig. 23). The results varied by closure type, with the exception of the organization's explanation of its decision whether to investigate. For example, fewer complainants with no-cause outcomes, relative to complainants with conciliation outcomes, felt that they had received understandable information on the time involved or felt that they had received explanations of the complaint and investigative processes. Based on survey results, complainants believed intake staff took action in a timely manner more than half the time on the intake activities we reviewed. Specifically, we estimate that about 70 percent of the time intake staff sought the complainant's signature somewhat to very quickly after the initial contact and that 62 percent of the time intake staff acted somewhat to very quickly in deciding whether to pursue an investigation (see fig. 24). Staff performance in getting back to complainants was apparently less satisfying, with just 55 percent of complainants responding that intake staff acted somewhat to very quickly, a result consistent with our findings on the difficulty of contacting intake staff. In general, these results held regardless of whether FHEO or FHAP agencies had handled the complainants' cases. Complainants with no-cause outcomes cited the slowest response times, saying that staff responded somewhat or very quickly only about 40 percent of the time. On the other hand, complainants whose cases were conciliated reported the fastest agency response on certain actions. In general, complainants felt that intake staff provided services in an acceptable and professional way (see fig. 25). We estimate that intake staff at both FHEO and FHAP offices treated complainants with courtesy and respect about 85 percent of the time and were helpful and impartial more than 70 percent of the time, according to complainants we surveyed. Complainants with no-cause outcomes reported positive treatment less often; for example, these complainants said that intake staff were helpful and interested in their complaint about 60 percent of the time and thorough about half of the time. We asked complainants whether intake staff carried out a variety of activities that were either required or that could be considered best practices—for example, did intake staff notify complainants whether the fair housing organization would pursue the case?
According to survey results, FHEO and FHAP agency staff carried out some intake-related activities, such as seeking signatures for a complaint, more frequently and more quickly than others, such as taking action to prevent the loss of a housing opportunity. Among other things, our survey showed the following: Intake staff very often asked complainants to sign a complaint. This was true for about 90 percent of complainants who were working with a single fair housing agency. In contrast, only about 48 percent of those complainants who were working with two or more fair housing agencies were asked to sign a complaint. We did not observe any significant differences by type of agency (FHEO or FHAP) or closure type. Because we surveyed only complainants who had filed complaints, we would expect that all would have been asked to sign the complaint. According to the Title VIII Handbook, a complaint should generally be signed before it can be considered as filed, so as to provide protection against frivolous or false claims or inadvertent erroneous statements on the intake form. Complainants stated by a large margin—about 81 percent—that intake staff asked questions that would help the agency understand what led to the complaint, and 86 percent stated that the agency notified them of its decision on whether an investigation would be undertaken. Again, all complainants should be asked questions about their allegation, including information needed to satisfy the required elements of jurisdiction. In general, these results were similar across FHEO, the FHAP agencies, and the closure outcomes. Complainants reported that intake staff were less likely to take some actions or to ask certain questions (see fig. 26). For example, according to complainants we surveyed, about 69 percent of the time both FHEO and FHAP intake staff did not attempt to prevent the loss of a housing opportunity when asked to do so, although the percentage varied across outcome types. Complainants with administrative closures or conciliated cases were slightly more likely to report that staff took action for them than those with a no-cause outcome. Complainants also said that intake staff did not offer to resolve differences between parties about 45 percent of the time. Again, the results differed according to case outcome, with complainants whose cases were conciliated reporting the most offers (76 percent) and those with a no-cause outcome the least (37 percent). As previously discussed, the Act requires the fair housing organization to offer conciliation to the extent feasible in all cases. Although most complainants were satisfied with the investigative stage of the complaint process, they were generally less positive than they were about the intake stage. Further, a substantial number of complainants expressed dissatisfaction and concern about certain aspects of investigations. We estimate that about 40 percent of complainants were dissatisfied with the conduct of investigations, whether the investigation was conducted by FHEO or a FHAP agency (see fig. 27). As with other activities, complainants whose cases were closed with a no-cause outcome were the most dissatisfied with the conduct of investigations, with nearly two-thirds reporting dissatisfaction. The concerns underlying this dissatisfaction included problems contacting staff, staff's failure to perform actions such as informing complainants about their options after case closure, and difficulty obtaining clear information.
However, those whose cases were conciliated very often reported being satisfied with the investigation. Despite the concerns cited, complainants in general believed that staff treated them professionally and with respect and courtesy. We estimate that a quarter to a third of complainants had problems reaching investigators, believed that investigators performed poorly in providing case updates, and were dissatisfied with the amount of contact they had with investigators. An estimated one-quarter of the complainants found it hard or somewhat hard to reach investigators, and more than 30 percent of the complainants noted dissatisfaction with the amount of contact they had with investigators (see fig. 28). These results did not vary significantly depending on whether cases had been investigated by FHEO or FHAP agencies, but did differ according to the closure type. For example, more than one-third of complainants whose cases were closed with a finding of no-cause reported difficulties in contacting the investigators, compared with 22 percent and 17 percent, respectively, of complainants whose cases were closed administratively or conciliated. Complainants with a no-cause finding were most dissatisfied with the amount of contact they had with the investigator; 49 percent reported dissatisfaction, compared with 18 percent and 14 percent, respectively, for those having administrative or conciliation outcomes. Significant numbers of complainants reported that investigative staff performed administrative functions acceptably and in a timely manner but did only a fair or poor job on others. For example, complainants reported that they were told their investigator's name about 83 percent of the time, and 92 percent said they received their closure notifications. But about 59 percent of complainants whose cases were administratively closed reported that they were not told about any options they might have had for pursuing a complaint. Further, many complainants overall believed that staff did a fair to poor job of listening to them (nearly 40 percent), explaining the investigative process (about 36 percent), investigating the evidence (about 44 percent), interviewing their witnesses (nearly 40 percent), and asking for documents related to their cases (about 32 percent). These percentages were generally similar whether complainants' cases had been handled by FHEO or a FHAP agency, with one exception: complainants indicated that FHAP agencies were slightly better at interviewing witnesses (33 percent excellent or good) than FHEO (23 percent). We found more differences among complainants' experiences based on their closure outcomes. Complainants with no-cause outcomes were more likely to perceive difficulties in a variety of areas than complainants with administrative closures or conciliated cases. Specifically, complainants with no-cause outcomes more frequently reported problems receiving case updates (62 percent of the time, compared with 34 percent for administrative closures and 23 percent for conciliations); were more likely to believe that investigators did a poor job of investigating the evidence (68 percent of the time, compared with 37 percent for administrative closures and 20 percent for conciliations); and were more likely to believe that investigators did a poor job of interviewing witnesses (58 percent of the time, compared with 35 percent for administrative closures and 14 percent for conciliations).
Despite these views, most complainants reported that staff moved quickly in conducting the investigation. About three-fifths of complainants said that both FHEO and FHAP investigators were prompt in contacting them to start an investigation, responding to questions, and completing an investigation. However, complainants with no-cause outcomes typically reported problems with timeliness at twice the rate of complainants with other outcomes (see fig. 29). Finally, we found that complainants generally felt they were treated well by investigators (see fig. 30), regardless of the type of agency that investigated the complaint or the closure type, with one exception: complainants whose cases ended in a no-cause outcome usually felt less well treated by staff. For example, an estimated 83 percent of complainants said that investigators treated them with respect and courtesy, 74 percent said that staff were interested in their complaint, about 72 percent believed that staff were impartial, and around 71 percent found staff helpful. In general, complainants had similar responses for FHEO and FHAP investigations. As noted, those with a no-cause outcome were less complimentary; only 53 percent of them believed that the investigator had been helpful, versus 78 percent and 90 percent, respectively, for those with administrative or conciliation closures. In our April 2004 report, we found that persons who have experienced alleged discrimination in housing can sometimes face a lengthy wait before their complaint is resolved. The results of the test calls conducted for this report (which are not generalizable to all potential complainants) and of our survey of complainants (which is generalizable) suggest that some complainants may also face difficulties from the outset, that is, during the intake phase, in contacting staff and presenting their initial allegations. We also previously reported that without comprehensive, reliable data on the dates when individuals make inquiries, FHEO cannot judge how long complainants must wait before a FHAP agency undertakes an investigation. For this report, our analysis of logged contacts indicates that FHAP agencies and FHEO hubs did not enter into TEAPOTS all contacts alleging Title VIII violations. Moreover, the discrepancies we observed between the dates of logged initial contacts and the corresponding dates entered into TEAPOTS as the beginning of the inquiries indicate that FHEO does not have reliable data for measuring the extent to which its offices and FHAP agencies meet the benchmark of 20 days for completing the intake process. Our review of case files and TEAPOTS data showed a lack of evidence that investigations are consistently as thorough or expeditious as FHEO guidance requires or recommends. HUD officials have noted their reliance on TEAPOTS for assurance that cases are investigated in accordance with applicable requirements and guidance. While the lack of evidence we observed may be the result of not documenting certain actions, rather than not carrying them out, the very lack of documentation or detailed TEAPOTS records raises questions about HUD's ability to assure that investigations are as thorough as they need to be. The Fair Housing Act has from the outset mandated that persons alleging housing discrimination be offered the opportunity to conciliate their complaint with the other party.
However, our review of case files and TEAPOTS data showed a lack of evidence that conciliation attempts were consistently made throughout the process, despite HUD's requirements that these attempts be documented. The lack of documentation that conciliation is offered raises questions about HUD's ability to assure that such attempts are made as appropriate throughout the fair housing process. Further, this lack of documentation, along with complainants' firsthand experience and our observation during mock complaint calls that conciliation was not discussed, suggests that FHEO and FHAP agencies are not consistently offering conciliation, as required by the Act. We recommended in our previous study that FHEO establish a way to identify and share information on effective practices among its regional fair housing offices and FHAP agencies. We observed during our previous study and this study that some FHAP agencies have used independent mediators and have had staff not involved in a particular case attempt conciliation. Officials who use these techniques point to their benefits in speeding up the resolution of complaints while offering the parties a satisfactory outcome. In our last report, we concluded that FHEO's human capital challenges exacerbate the difficulty of improving enforcement practices. However, the identification and use of best practices may help FHEO, as well as FHAP agencies, more effectively utilize their limited resources. Our work does not demonstrate that HUD failed to reach appropriate decisions regarding any specific fair housing inquiry or investigation. Further, our review of case files shows that many investigative requirements were met, and former complainants we surveyed expressed satisfaction with some aspects of their experience. Nevertheless, we believe that our findings are cause for concern. Individuals who believe they have experienced discrimination and make the effort to contact a fair housing agency, but are unable to easily reach an intake staff person or to expeditiously convey needed information, may simply give up and cease cooperating. Further, our survey of former complainants shows that some who successfully filed complaints have a sufficiently negative view of the process that they would be unlikely to file a complaint again, even if they were satisfied with the outcome of their case. Events of either type diminish the Act's effectiveness in deterring acts of housing discrimination or otherwise promoting fair housing practices. To ensure that complainants are able to readily contact a fair housing agency and file a complaint, we recommend that the HUD Secretary direct the Assistant Secretary of FHEO to ensure that intake activities are conducted consistently. Specific actions may include establishing clear standards for the information that should be collected with the initial contact; creating benchmarks and performance goals for the treatment of complainants during the initial contact, including measures of responsiveness, such as hold times and call-back timeliness, as well as measures of completeness of initial information collection; developing special procedures for identifying and responding to time-sensitive inquiries, such as those involving the potential loss of a housing opportunity; and establishing means (including automation, where appropriate) of assuring that standards, benchmarks, and special procedures are followed.
To improve the usefulness of TEAPOTS as a management control for assuring that potential Title VIII-related contacts are identified and for assessing performance in meeting timeliness guidelines, we recommend that the HUD Secretary direct the Assistant Secretary of FHEO to take the following two actions: Specify that FHAP agencies use TEAPOTS for recording initial inquiry dates for all inquiries, as defined in the Title VIII Handbook, that allege housing discrimination. Require that the initial inquiry date reflect the first contact made by the complainant, regardless of whether that contact was with FHEO or a FHAP agency. To enhance FHEO and FHAP agency ability to assess the thoroughness of investigations, we recommend that the HUD Secretary direct the Assistant Secretary of FHEO to take the following two actions: Establish documentation standards and appropriate controls to ensure that required notifications of complaint, amendment, and closure are made and received, and that 100-day letters are sent before an investigation has reached 100 days. Clarify requirements for planning investigations, including specifying when plans must be prepared, their content, and their review and approval. To ensure that some form of conciliation is made available to all complainants, we recommend that the HUD Secretary direct the Assistant Secretary of FHEO to take the following two actions: Work with FHAP agencies and others to develop best practices for offering conciliation throughout the complaint process, including at its outset. Ensure that investigators comply with requirements to document conciliation attempts and complainants' or respondents' declination of conciliation assistance. We provided a draft of this report to HUD for its review and comment. We received written comments from the department's General Deputy Assistant Secretary for Fair Housing and Equal Opportunity. The letter, which is included in appendix II, indicated general agreement with our conclusions and recommendations. The General Deputy Assistant Secretary noted that FHEO conducts analyses of its programs and strives to continually improve its operations and those of FHAP agencies in order to ensure that complaints of housing discrimination are handled in an effective and efficient manner. The letter also expressed confidence, based on extensive internal reviews, final determinations, and requests for reconsideration, in the integrity of the fair housing process, the soundness of decisions, and the competent professional service accorded every party to the process. The letter noted a variety of initiatives that have been implemented to improve the quality of investigations, including establishing the National Fair Housing Training Academy, which trains fair housing professionals on fair housing law, critical thinking, and interview techniques; completing revisions of the Intake, Investigation, and Conciliation sections of the Title VIII Handbook, which provides guidance to investigators on case processing standards and sets nationwide policy; developing the FHEO-OGC Case Processing Research Project, which focuses on early interaction and continuous consultation between FHEO and OGC; and undergoing a business process reengineering (BPR) effort to identify best practices in the field among the FHAP agencies, as well as codifying operations and procedures in headquarters. The General Deputy Assistant Secretary commented that FHEO would, as feasible, work to incorporate the recommendations into its policies and procedures.
As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Chair of the Senate Committee on Banking, Housing and Urban Affairs; the Chair, Subcommittee on Housing and Transportation, Committee on Banking, Housing and Urban Affairs; the HUD Secretary; and other interested congressional members and committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-6878 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our engagement scope was limited to fair housing investigations conducted under Title VIII of the Civil Rights Act of 1968, as amended. We did not address fair housing activities under Section 504 of the Rehabilitation Act of 1973 or Title VI of the Civil Rights Act of 1964. For certain analyses of the fair housing complaint process, we relied on national samples of complaints closed during the last half of 2004, enabling us to provide national estimates. To determine the thoroughness of efforts by the Office of Fair Housing and Equal Opportunity (FHEO) and Fair Housing Assistance Program (FHAP) agencies during the intake process, we conducted two activities. First, we asked intake staff to keep a log of the contacts they had with potential complainants over a 4-week period and to note the proportion of contacts alleging housing discrimination. We designed an Intake Contact Log for staff to use that enabled us to obtain never-before-collected data consistently across agencies for a set time period. We asked the 10 FHEO offices, 36 state FHAP agencies, and the 5 local FHAP agencies with the highest volume (based on number of complaints filed during fiscal year 2004) to maintain the log over the 4-week period from February 21 through March 21, 2005. All of the FHEO offices and local FHAP agencies, and all but 4 of the state FHAP agencies, agreed to maintain our contact log. These offices represented 78 percent of the volume of investigations in 2004. The log required intake staff to document the date each contact was received; the method of each contact (such as telephone, mail, or e-mail); whether the contact involved a new potential complaint or a previously existing complaint, or was a referral from another agency; whether the callers claimed that they had experienced housing discrimination; whether the intake staff or supervisor believed the contact potentially involved a jurisdictional Title VIII violation; and the name of the individual making the contact. Once we received the data, we reviewed them for consistency and logic. Where we identified coding that was apparently not consistent with our instructions, we called the staff who prepared the log for clarification; in some cases, we needed to recode the log based on these conversations. We focused our analysis on entries that the intake staff indicated pertained to a potentially valid Title VIII issue, that included names, and that they coded as new potential complaints. Using entries with names allowed us to eliminate multiple contacts from the same person and report statistics on people rather than on contacts.
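To make these screening criteria concrete, the following Python sketch shows one way a log entry and the filter just described could be represented. The field names are our own illustration, not those printed on the Intake Contact Log form.

from dataclasses import dataclass
from datetime import date

@dataclass
class ContactLogEntry:
    # Data elements mirror those listed above; names are hypothetical.
    received: date                # date the contact was received
    method: str                   # "telephone", "mail", "e-mail", or "walk-in"
    contact_type: str             # "new", "existing", or "referral"
    alleged_discrimination: bool  # caller claimed housing discrimination
    potential_title8: bool        # staff saw a potential jurisdictional Title VIII violation
    name: str                     # name of the individual making the contact

def in_analysis_scope(entry: ContactLogEntry) -> bool:
    """Keep only new potential complaints with a potentially valid
    Title VIII issue and a usable name, as described above."""
    return (entry.contact_type == "new"
            and entry.potential_title8
            and bool(entry.name.strip()))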
Once we identified the unique names that met these criteria, we matched the names to records in a TEAPOTS extract to determine how many of the new potential complaints during the 4-week recording period were perfected (a simplified sketch of this matching step appears below). Specifically, we requested an extract of the Department of Housing and Urban Development's TEAPOTS database for the time period coinciding with the contact log reporting period, plus 5 additional weeks to allow sufficient time for perfecting complaints. We did not evaluate the judgments of the intake staff in determining whether a contact should have been pursued. Since FHAP agencies tended not to enter inquiries into TEAPOTS until the complaint was ready to be perfected, the results of our name matching are reported both in aggregate and separately for FHEO offices and FHAP agencies. We also reported on the total number of intake-related contacts received during the reporting period, the proportion of contacts alleging housing discrimination that the intake staff determined did not constitute a valid Title VIII complaint, and percentages for each contact method (telephone, e-mail, and walk-in). In order to assess the degree to which intake staff obtained sufficient and appropriate information to determine whether a contact should become a Title VIII complaint, we designed a telephone "test call" program of intake staff at the 10 HUD hubs and 36 state FHAP agencies. GAO analysts posing as complainants contacted intake staff to file a mock complaint. We excluded local FHAP agencies from telephone testing because these agencies tend to have a low volume of investigations compared with state FHAP agencies. We placed one test call to each site using the same case scenario with different identifying information, such as names and addresses. Testers were trained to consistently volunteer only certain information, such as the "name" and a description of what happened, and to respond in standardized ways to questions asked by intake staff. The calls were recorded and later coded against a list of information that might be sought as part of the intake process. We developed this list of information based on requirements and recommended practices derived from multiple sources, among them the requirements of the Fair Housing Act, guidance provided by HUD in the form of policy and training manuals, training materials from the National Fair Housing Alliance, and discussions with HUD and FHAP agency officials. We categorized the information into four levels: information that (1) should always be gathered at intake, (2) is potentially applicable to all complaints and should be collected, (3) is relevant to a particular basis or protected class, and (4) is considered a best practice by the officials we spoke with and by training materials. We placed one pretest call to each of the 46 sites we planned to contact to get a sense of how each location conducts intake. We adjusted the design to account for differences in the intake process to the fullest extent possible. For example, we allowed for scheduled intake interviews at locations that only conducted intake calls on a scheduled basis. We found that approximately 25 percent of the time we could expect to speak with a person on initial contact who could complete the intake process. In many cases, an agency staff member would perform an initial screening before forwarding the call to the intake staff, and in other cases, the call was forwarded to a voice mailbox.
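The sketch below illustrates the name-matching step referenced above, under the simplifying assumption that names match exactly after basic normalization; the actual TEAPOTS extract layout and our matching rules were more involved than this.

def normalize(name: str) -> str:
    """Lowercase, drop punctuation, and sort name tokens so that
    'Smith, John' and 'john smith' compare as equal."""
    letters = "".join(c if c.isalnum() or c.isspace() else " " for c in name.lower())
    return " ".join(sorted(letters.split()))

def perfected_share(log_names, teapots_names):
    """Share of unique logged contacts that appear in the TEAPOTS extract."""
    unique_people = {normalize(n) for n in log_names}    # one entry per person
    teapots_people = {normalize(n) for n in teapots_names}
    if not unique_people:
        return 0.0
    return len(unique_people & teapots_people) / len(unique_people)

Working with the deduplicated set of normalized names is what allows the statistics to describe people rather than contacts, as noted above.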
Based on our findings during the pretest, we decided to evaluate not only the information collected by intake staff, but also the number of attempts required to speak with a live person, hold times, and the length of time that elapsed before staff responded to voice-mail messages by returning calls. In our coding, we counted information that callers volunteered, as well as obvious items such as gender, as having been collected. Because of the limitations of our sample—only one call to each site—the results of our analysis are not generalizable to the entire population of potential complainants in housing discrimination cases. To address the thoroughness of investigation procedures, including conciliation attempts, we reviewed the documentation in 197 randomly selected case files of housing discrimination cases completed during the last 6 months of 2004 around the country (see table 1). We originally sampled 205 cases, but we were unable to locate files or matching TEAPOTS data records for 8 cases. The sample files included 58 cases closed administratively, 63 cases that were conciliated without a finding of reasonable cause, and 90 that were closed with a finding of no reasonable cause. We oversampled administrative closures to ensure that we had a sufficient number of files to permit estimates for this subpopulation. The population of complaints and the sample we used are enumerated in tables 1 and 2. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates (sampling error), our results have confidence intervals of plus or minus 8 percentage points or smaller, unless otherwise noted, at a 95 percent level of confidence; in other words, this interval would contain the true value for the actual population for 95 percent of the samples we could have drawn (the note following this paragraph illustrates the underlying calculation). We also reviewed files for 12 of the 15 complaint investigations FHEO concluded with a finding of "cause" and for which the adjudication process, including any agency monitoring, had been completed during the last 6 months of 2004; we could not locate files for the remaining cases. All 12 files were identified by the Department of Justice as having met these criteria. We did not review files for which a FHAP agency found "cause" and for which the adjudication process had been completed, because these cases could not be identified. We examined the documentation in the files, as well as the full TEAPOTS case summary for each case, to determine whether it demonstrated that the investigator had met certain requirements and best practices for conducting fair housing investigations. We identified these requirements and best practices by reviewing the Fair Housing Act, implementing regulations, FHEO's Title VIII Handbook and training material, and other guidance. In addition, we interviewed FHEO officials at both HUD headquarters and field offices in Atlanta, Chicago, and San Francisco. We also interviewed FHAP agency officials in California, Georgia, Maryland, South Carolina, and Virginia. We met with the National Fair Housing Alliance and attended training on fair housing enforcement at the John Marshall Law School and at HUD's National Fair Housing Training Academy. We provided a draft summary of our criteria to FHEO officials and made technical corrections based on their comments. To ensure the consistency of our file review, we developed a structured data collection instrument and anchored each item on the instrument to the criteria identified above.
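For readers interested in the basis for these intervals, the half-width of a 95 percent confidence interval for an estimated proportion can be approximated with the standard formula below. This is a textbook simplification that assumes simple random sampling; our actual estimates reflect the sample design described in this appendix.

\[
\hat{p} \pm 1.96 \sqrt{\frac{\hat{p}(1-\hat{p})}{n}\cdot\frac{N-n}{N-1}}
\]

Here \(\hat{p}\) is the sample proportion, \(n\) is the sample size, \(N\) is the population size, and the last factor is the finite population correction. For example, with \(\hat{p}=0.5\) (the worst case) and \(n=197\) drawn from a much larger population, the half-width is about \(1.96\sqrt{0.25/197}\approx 0.07\), or 7 percentage points, consistent with the plus or minus 8 percentage points or smaller reported above.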
We pretested the instrument with several team members and, based upon this test, modified the instrument to ensure clarity. For 10 percent of all files reviewed, another team member reviewed the coding to ensure its accuracy. We surveyed a sample of complainants whose cases had been investigated and closed by FHEO and FHAP agencies from July 1 to December 31, 2004, to determine levels of satisfaction with the thoroughness, fairness, timeliness, and outcomes of the intake and investigation process. We did not include cases that proceeded to the adjudication process owing to a finding of reasonable cause, to avoid surveying complainants who might still be involved in the adjudication process. The survey also provided supplemental evidence for our analysis of the thoroughness of the intake and investigation stages of the complaint process and the frequency of conciliation. We determined that there was a population of 4,327 fair housing complaints (contact information was not provided for 6 of the original 4,333 cases) that had ended in administrative closure, conciliation without determination of cause, or a determination of no-cause for the 6-month period between July 1, 2004, and December 31, 2004. The complainants of record were mostly private individuals, but some were fair housing organizations acting on their own behalf or on that of one or more individuals. From a list obtained from HUD's TEAPOTS database, the PA Consulting Group, a survey firm under contract to GAO, called the complainants of record selected in the sample. The total sample of 1,675 was parceled out in seven individual waves over the field period, in an attempt to use the smallest possible sample to achieve the quota of 575 completed interviews. The sample was allocated across six categories—two agency types by three closure types—to ensure that enough interviews were conducted in each category to allow statistically valid comparisons between them. (See table 4 for the distribution of the population, sample, responses, and response rates across these categories.) With this probability sample, each member of the population had a nonzero probability of being included, and that probability could be computed for any member. Each sampled complaint for which an interview was obtained was subsequently weighted in the analysis to account statistically for all the members of the population, including those who were not selected and those who were selected but did not respond to the survey. Beginning in early May 2005, GAO mailed letters notifying those sampled complainants with valid mailing addresses of the upcoming survey and encouraging them to participate. Calling those complainants typically began several days after the letters were mailed. The advance letters also included an address correction form. Recipients were asked to revise any incorrect information and return the form. A toll-free number also was provided for recipients to ask any questions or to correct information such as names and phone numbers. Calling began in early May and continued for 7 weeks, ending on June 20, 2005. For institutional complaints drawn into our sample, the interviewer helped the organization's representative identify the specific complaint by describing the issue, basis, and respondent name. Multiple interviews could be conducted with the same institutional informant if more than one complaint from that organization was randomly drawn. Thirty-three interviews were completed on institutional complaints.
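Returning to the weighting described above: in a stratified design of this kind, each respondent in a stratum (agency type crossed with closure type) is weighted by the stratum's population count divided by its respondent count, which also compensates for differential nonresponse across strata. A minimal sketch follows; the stratum-level counts are hypothetical (the real ones appear in table 4, which is not reproduced here), although they are chosen to sum to the reported totals of 4,327 complaints and 575 interviews.

    # Stratum weights for a stratified sample: each respondent stands in
    # for N_h / r_h members of the population, which also adjusts for
    # differential nonresponse across strata. Stratum counts below are
    # hypothetical; only the totals (4,327 and 575) match the report.
    population = {("FHEO", "no_cause"): 900,  ("FHEO", "conciliated"): 400,
                  ("FHEO", "admin_closure"): 500, ("FHAP", "no_cause"): 1300,
                  ("FHAP", "conciliated"): 600, ("FHAP", "admin_closure"): 627}
    respondents = {("FHEO", "no_cause"): 80,  ("FHEO", "conciliated"): 110,
                   ("FHEO", "admin_closure"): 90, ("FHAP", "no_cause"): 95,
                   ("FHAP", "conciliated"): 120, ("FHAP", "admin_closure"): 80}

    weights = {stratum: population[stratum] / respondents[stratum]
               for stratum in population}

    def weighted_share(answers):
        # answers: {stratum: fraction of that stratum's respondents giving
        # a particular answer}. Returns the population-weighted share.
        total = sum(population.values())
        return sum(weights[s] * respondents[s] * answers[s] for s in answers) / total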
Proxies (typically family members, guardians, or other representatives) for the named complainant were interviewed for 19 of the sampled complaints. Not all sampled complaints met GAO’s eligibility requirements for the survey: Some complainants indicated that their cases were still open and experiencing legal activity. GAO could not have any involvement with such complaints. For the same reason, other complainants who said they had an ongoing agreement with the other party resolving the complaint were also not interviewed. Some complaint cases also became ineligible for the survey due to the death of the complainant, a complainant’s insistence that a case was not a fair housing discrimination case, and complaints sampled multiple times (more than four for institutional complainants and one for individuals—see table 3). Complainants with an issue or basis alleging racial discrimination because the complainant was Hispanic received letters printed in both English and Spanish. The survey was also administered by Spanish-speaking interviewers when the complainant indicated a preference for speaking Spanish. When named complainants could not be found using records provided by HUD, interviewers used a variety of search techniques to try to locate complainants, including calling alternate contacts named in the complaint record and using directory assistance and online tracking services. For example, if an address was available but no working phone number could be found, interviewers used reverse directories and contacted neighbors by phone to ask about the whereabouts of the named complainant. Once during the fieldwork period, 547 noncontactable records were submitted to Lorton Data’s National Change of Address and Telephone Append services, resulting in some updated phone numbers and mailing addresses. To maximize the possibility of reaching complainants, multiple attempts were made over a period of time on different days of the week and at different hours. The number of attempts required to reach subjects ranged from 1 to 34, with an average of 12. Results from sample questionnaire surveys are subject to several types of errors: failure of the sample frame to cover the study population, measurement errors in administering the questionnaire, sampling errors, nonresponse errors from failing to collect information from part of the sample, and data processing error. To limit coverage errors, we used the most recent available data from TEAPOTS to identify eligible complainants. At the beginning of the interview, we also confirmed that the complainant of record had lodged an actual fair housing discrimination complaint and that the complaint had been closed as TEAPOTS indicated. To limit measurement errors, we first took steps in the development of the questionnaire to ensure that our questions gathered the information intended. GAO asked knowledgeable representatives of fair housing organizations and other government agencies to review early versions of the instrument. We also conducted a pilot test of an early version of the questionnaire with 26 complainants in December 2004 and seven pretests with complainants in the study population during March 2005. Finally, the telephone survey contractor completed nine pretests of the final instrument in late April 2005. Interviewers were trained using materials developed by GAO before the survey began and were routinely monitored during interviews. 
The survey is also subject to sampling error—our results have confidence intervals of plus or minus 6 percentage points or smaller at a 95 percent level of confidence. Our survey received a low response rate, with only 38 percent of those known or assumed to be eligible in our survey participating. If those who did not respond would have answered our survey questions differently from those who did, our estimates would be biased because we would have missed the answers from a set of people with fundamentally different views. In fact, response rates varied widely across the three different closure types, which were associated with key variables in the survey such as satisfaction with the complaint process and outcome. We tended to get relatively more responses from complainants in conciliation cases than from complainants in no-cause cases, and those whose cases were conciliated tended to be more satisfied. However, we could address this potential bias because we controlled the allocation of our sample across the closure types. We could statistically adjust, or weight, responses by closure type to bring them into proportion with the population and thus account for the different nonresponse rates across those types, which should compensate for the nonresponse bias. Nevertheless, the possibility of bias in the results still remains. Our weighting adjustment only compensates for differences in opinion between those with different closure types and agency responsibility, not other characteristics that may have been over- or under-represented in our responses and that may be related to our survey questions. To limit the possibility of data processing errors, the survey firm used a computer-assisted telephone interviewing system that recorded electronic data directly from the telephone interviewers' answers and also checked for missing data, inconsistencies, and unlikely answer patterns. Data analysis programming was also independently verified. In addition to Mathew J. Scirè (Assistant Director), Nicholas Alexander, Carl Barden, Johnnie Barnes, Bernice Benta, Emily Chalmers, Arielle Cohen, Paul Desaulniers, Grace Haskins, Robert Lowthian, Alexandra Martin-Arseneau, Amanda Miller, Marc Molino, Jeff Pokras, Linda Rego, Carl Ramirez, Beverly Ross, Paige Smith, Anita Visser, and Joan Vogel made key contributions to this report.

Each year, the Department of Housing and Urban Development's (HUD) Office of Fair Housing and Equal Opportunity (FHEO) and related state and local Fair Housing Assistance Program (FHAP) agencies receive and investigate several thousand complaints of housing discrimination. These activities, including required conciliation attempts, are directed by HUD's standards, which are based on law, regulation, and best practices. GAO's 2004 report examining trends in case outcomes raised questions about the quality and consistency of the intake (the receipt of initial inquiries) and investigation processes. This follow-up report assesses the thoroughness of fair housing intake and investigation (including conciliation) processes, and complainant satisfaction with the process. Evidence from several sources raises questions about the timeliness and thoroughness of the intake process. Thirty percent of complainants GAO surveyed noted that it was either somewhat or very difficult to reach a live person the first time they contacted a fair housing agency. GAO experienced similar difficulty in test calls it made to each of the 10 FHEO and 36 state FHAP agency intake centers.
For example, 5 locations did not respond to the test calls. Further, FHEO and FHAP agencies do not consistently record in their automated information system the contacts they receive that they consider potential fair housing inquiries, and timeliness data are unreliable, limiting the system's effectiveness as a management control. GAO's review of a national random sample of 197 investigative case files for investigations completed within the last 6 months of 2004 found varying levels of documentation that FHEO and FHAP investigators met investigative standards and followed recommended procedures. Further, though the Fair Housing Act requires that agencies always attempt conciliation to the extent feasible, only about a third of the files showed evidence of such attempts. FHEO officials stated that the required investigation and conciliation actions may have been taken but not documented as required in case files. According to GAO's survey of a national random sample of 575 complainants whose complaint investigations were recently completed, about half were either somewhat or very dissatisfied with the outcome of the fair housing complaint process, and almost 40 percent would be unlikely to file a complaint in the future. Although GAO and survey respondents found that FHEO and FHAP agency staff were generally courteous and helpful, important lapses remain in the complaint process that may affect not only how complainants feel about the process but also how thoroughly and promptly their cases are handled.
In May 2007, the Army issued a solicitation for body armor designs to replenish stocks and to protect against future threats by developing the next generation (X level) of protection. According to Army officials, the solicitation would result in contracts that the Army would use for sustainment of protective plate stocks for troops in Iraq and Afghanistan. The indefinite delivery/indefinite quantity contracts require the Army to purchase a minimum of 500 sets per design and allow for a maximum purchase of 1.2 million sets over the 5-year period. The Army's solicitation, which closed in February 2008, called for preliminary design models in four categories of body armor protective plates:

Enhanced Small Arms Protective Insert (ESAPI)—plates designed to the same protection specifications as those currently fielded and to fit into currently fielded Outer Tactical Vests.

Flexible Small Arms Protective Vest-Enhanced (FSAPV-E)—a flexible armor system designed to the same protection specifications as armor currently fielded.

Small Arms Protective Insert-X level (XSAPI)—next-generation plates designed to defeat a higher-level threat.

Flexible Small Arms Protective Vest-X level (FSAPV-X)—a flexible armor system designed to defeat a higher-level threat.

In figure 1, we show the ESAPI plates inside the Outer Tactical Vest. Between May 2007 and February 2008, the Army established testing protocols, closed the solicitation, and provided separate live-fire demonstrations of the testing process to vendors who submitted items for testing and to government officials overseeing the testing. Preliminary Design Model testing was conducted at Aberdeen Test Center between February 2008 and June 2008 at an estimated cost of $3 million. Additionally, over $6 million was spent on infrastructure and equipment improvements at Aberdeen Test Center to support future light armor test range requirements, including body armor testing. First Article Testing was then conducted at Aberdeen Test Center from November 10, 2008, to December 17, 2008, on the three ESAPI and five XSAPI designs that had passed Preliminary Design Model testing. First Article Testing is performed in accordance with the Federal Acquisition Regulation to ensure that the contractor can furnish a product that conforms to all contract requirements for acceptance. First Article Testing determines whether the proposed product design conforms to contract requirements before or in the initial stage of production. During First Article Testing, the proposed design is evaluated to determine the probability of consistently demonstrating satisfactory performance and the ability to meet or exceed evaluation criteria specified in the purchase description. Successful First Article Testing certifies a specific design configuration and the manufacturing process used to produce the test articles. Failure of First Article Testing requires the contractor to examine the specific design configuration to determine the improvements needed to correct the performance of subsequent designs. Testing of the body armor currently fielded by the Army was conducted by private NIJ-certified testing facilities under the supervision of PEO Soldier. According to Army officials, not a single death can be attributed to this armor's failing to provide the required level of protection for which it was designed.
However, according to Army officials, one of the body armor manufacturers that had failed body armor testing in the past did not agree with the results of the testing and alleged that the testers tested that armor to higher-than-required standards. The manufacturer alleged a bias against its design and argued that its design was superior to currently fielded armor. As a result of these allegations and in response to congressional interest, after the June 2007 House Armed Services Committee hearing, the Army accelerated completion of the light armor ranges to rebuild small arms ballistic testing capabilities at Aberdeen Test Center and to conduct testing under the May 2007 body armor solicitation there, without officials from PEO Soldier supervising the testing. Furthermore, the decision was made to allow Aberdeen Test Center, which is not an NIJ-certified facility, to conduct the repeated First Article Testing. In February 2009 the Army directed that all future body armor testing be performed at Aberdeen Test Center. According to Army officials, as of this date, none of the body armor procured under the May 2007 solicitation had been fielded. Given the significant congressional interest in the testing for this solicitation and the fact that these were the first small arms ballistic tests conducted at Aberdeen Test Center in years, multiple defense organizations were involved in the Preliminary Design Model testing. These entities included the Aberdeen Test Center, which conducted the testing; PEO Soldier, which provided the technical subject-matter experts; and DOD's office of the Director of Operational Test and Evaluation. Together, these organizations formed the Integrated Product Team, which was responsible for developing and approving the test plans used for the Preliminary Design Model testing and First Article Testing. Figure 2 shows a timeline of key Preliminary Design Model testing and First Article Testing events. The test procedures to be followed for Preliminary Design Model testing were established and identified in the purchase descriptions accompanying the solicitation announcement and in the Army's detailed test plans (one for each of the four design categories), which served as guidance to Army testers and were developed by the Army Test and Evaluation Command and approved by PEO Soldier, DOD's office of the Director of Operational Test and Evaluation, and others. Originally, PEO Soldier required that testing be conducted at an NIJ-certified facility. Subsequently, the decision was made to conduct testing at Aberdeen Test Center, which is not NIJ-certified. The test procedures for both Preliminary Design Model testing and First Article Testing included (1) physical characterization steps performed on each armor design to ensure that it met required specifications, which included measuring weight, thickness, curvature, and size, and (2) ballistic testing performed on each design.
Ballistics testing for this solicitation included the following subtests: (1) ambient testing to determine whether the designs can defeat the multiple threats assigned in the respective solicitation's purchase descriptions 100 percent of the time; (2) environmental testing of the designs to determine whether they can defeat each threat 100 percent of the time after being exposed to nine different environmental conditions; and (3) testing, called V50 testing, to determine whether designs can defeat each threat at velocities significantly higher than those present or expected in Iraq or Afghanistan at least 50 percent of the time. Ambient and environmental testing seek to determine whether designs can defeat each threat 100 percent of the time, both by prohibiting the bullet from penetrating through the plate and by prohibiting the bullet from causing too deep an indentation in the clay backing behind the plate. Preventing a penetration is important because it prevents a bullet from entering the body of the soldier. Preventing a deep indentation in the clay (called "back-face deformation") is important because the depth of the indentation indicates the amount of blunt force trauma to the soldier. Back-face deformation deeper than 43 millimeters puts the soldier at higher risk of internal injury and death. The major steps taken in conducting a ballistic subtest are as follows (a simplified sketch of the clay suitability check and shot-scoring logic appears at the end of this passage):

1. For environmental subtests, the plate is exposed to the environmental condition tested (e.g., impact test, fluid soaks, temperature extremes, etc.).

2. The clay to be used to back the plate is formed into a mold and is placed in a conditioning chamber for at least 3 hours.

3. The test plate is placed inside of a shoot pack.

4. The clay is taken out of the conditioning chamber. It is then tested to determine if it is suitable for use and, if so, is placed behind the test plate.

5. The armor and clay are then mounted to a platform and shot.

6. If the shot was fired within required specifications, the plate is examined to determine if there is a complete or partial penetration, and the back-face deformation is measured.

7. The penetration result and back-face deformation are scored as a pass, a limited failure, or a catastrophic failure. If the test is not conducted according to the testing protocols, it is scored as a no-test.

Following are significant steps the Army took to run a controlled test and maintain consistency throughout Preliminary Design Model testing: The Army developed testing protocols for the hard-plate (ESAPI and XSAPI) and flexible-armor (FSAPV-E and FSAPV-X) preliminary design model categories in 2007. These testing protocols were specified in newly created purchase descriptions, detailed test plans, and other documents. For each of the four preliminary design model categories, the Army developed new purchase descriptions to cover both hard-plate and flexible designs. These purchase descriptions listed the detailed requirements for each category of body armor in the solicitation issued by the Army. Based on these purchase descriptions, the Army developed detailed test plans for each of the four categories of body armor. These detailed test plans provided additional details on how to conduct testing and provided Army testers with the requirements that each design needed to pass.
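To make the pass/fail logic in steps 4, 6, and 7 concrete, here is a minimal sketch. The 43-millimeter back-face deformation limit is taken from the protocols as described in this report, and the 22- to 28-millimeter clay drop tolerance is described later in this report; the function names and structure are illustrative assumptions, not the Army's actual scoring code.

    # Simplified sketch of the per-shot evaluation in steps 4, 6, and 7.
    # The 43 mm back-face deformation limit and the 22-28 mm clay
    # calibration tolerance come from the protocols as described in this
    # report; everything else here is illustrative.

    def clay_is_suitable(drop_depths_mm):
        # Step 4: each pre-test calibration drop must fall within
        # tolerance before the clay block may be used.
        return all(22.0 <= d <= 28.0 for d in drop_depths_mm)

    def score_shot(velocity_ok, complete_penetration, back_face_deformation_mm):
        # Steps 6-7: a shot fired outside required specifications is a
        # no-test; otherwise the armor must stop the round and limit
        # blunt-force deformation in the clay backing.
        if not velocity_ok:
            return "no-test"
        if complete_penetration:
            return "failure (penetration)"
        if back_face_deformation_mm > 43.0:
            return "failure (back-face deformation)"
        return "pass"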
After these testing protocols were developed, Army testers then conducted a pilot test in which they practiced test activities in preparation for Preliminary Design Model testing, to help them better learn and understand the testing protocols. The Army consistently documented many testing activities by using audio, video, and other electronic means. The use of cameras and microphones to provide 24-hour video and audio surveillance of all of the major Preliminary Design Model testing activities provided additional transparency into many testing methods used and allowed for enhanced oversight by Army management, who were unable to directly observe the lanes on a regular basis but who wished to view select portions of the testing. The Army used an electronic database to maintain a comprehensive set of documentation for all testing activities. This electronic database included a series of data reports and pictures for each design, including physical characterization records, X-ray pictures, pre- and post-shot pictures, ballistics testing results, and details on the condition of the clay backing used for the testing of those plates. The Army took a number of additional actions to promote a consistent and unbiased test. For example, the Army disguised vendor identity for each type of solution by identifying vendors with random numbers to create a blind test. The Army further reduced potential testing variance by shooting subtests in the same shooting lane. The Army also made a good faith effort to use consistent and controlled procedures to measure the weight, thickness, and curvature of the plates. Additionally, the Army made extensive efforts to consistently measure and maintain room temperature and humidity within desired ranges. We also observed that projectile yaw was consistently monitored and maintained. We also found no deviations in the monitoring of velocities for each shot and the re-testing of plates in cases where velocities were not within the required specifications. We observed no instances of specific bias against any design, nor did we observe any instances in which a particular vendor was singled out for advantage or disadvantage. We did, however, identify several instances in which the Aberdeen Test Center did not follow established testing protocols. For example, during V50 testing, testers failed to properly adjust shot velocities. V50 testing is conducted to discern the velocity at which 50 percent of the shots of a particular threat would penetrate each of the body armor designs. The testing protocols require that, after every shot that is defeated by the body armor, the velocity of the next shot be increased. Whenever a shot penetrates the armor, the velocity should be decreased for the next shot. This increasing and decreasing of the velocities is supposed to be repeated until testers determine the velocity at which 50 percent of the shots will penetrate. In cases in which the armor far exceeds the V50 requirements and is able to defeat the threat for the first six shots, the testing may be halted without discerning the V50 for the plate, and the plate is ruled as passing the requirements. During Preliminary Design Model testing, in cases in which plates defeated the first three shots, Army testers failed to increase shot velocities, instead continuing to shoot at approximately the same velocity or lower for shots four, five, and six in order to obtain six partial penetrations and conclude the test early.
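Before turning to the Army's explanation, a sketch may clarify what this deviation bypassed. The up-and-down rule the protocols prescribe is a classic staircase procedure; in the sketch below, the starting velocity, step size, and shot budget are illustrative assumptions rather than protocol values.

    # Sketch of the up-and-down (staircase) V50 procedure the protocols
    # prescribe. Step size, starting velocity, and shot budget are
    # illustrative assumptions, not values from the protocols.

    def v50_staircase(fire_shot, start_velocity_fps, step_fps=50.0, max_shots=14):
        # fire_shot(velocity) -> True if the shot penetrates the armor.
        velocity, results = start_velocity_fps, []
        for _ in range(max_shots):
            penetrated = fire_shot(velocity)
            results.append((velocity, penetrated))
            # Early stop: armor defeating the first six shots outright may
            # be ruled passing without discerning its V50.
            if len(results) == 6 and not any(p for _, p in results):
                return None, results
            # Core rule: raise velocity after a stop, lower it after a
            # penetration, so that shots bracket the 50 percent point.
            velocity += -step_fps if penetrated else step_fps
        outcomes = [p for _, p in results]
        if any(outcomes) and not all(outcomes):
            # Crude estimate once both outcomes have occurred: the mean
            # velocity of all shots (standard methods average a defined
            # subset of shots).
            return sum(v for v, _ in results) / len(results), results
        return None, results

Holding the velocity at or below the same level after each stop, as testers did, produces only one-sided results, so the procedure cannot bracket, and therefore cannot estimate, the 50 percent penetration velocity.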
According to Aberdeen Test Center officials, this deviation was implemented to conserve plates for other tests that needed repeating as a result of no-test events—but it was a practice not described in the protocols. Army officials told us that this practice had no effect on which designs passed or failed; however, it made it impossible to discern the true V50s for these designs and was a deviation from the testing protocols, which require testers to increase velocities for shots after the armor defeats the threat. In another example, Aberdeen Test Center testers did not consistently follow testing protocols in the ease-of-insertion test. According to the testing protocols, one barehanded person shall demonstrate insertion and removal of the ESAPI/XSAPI plates in the Outer Tactical Vest pockets without tools or special aids. Rather than testing the insertion of both the front and the rear pockets as required, testers only tested the ability to insert into the front pocket. Testing officials told us that they did not test the ability to insert the plates into the rear pocket because they were unable to reach the rear pocket while wearing the Outer Tactical Vest. The cause of this deviation was that the testers misinterpreted the testing protocols: there is no requirement in the established testing protocols to wear the Outer Tactical Vest when testing the ability to insert the plates in the rear pocket. Officials from PEO Soldier told us that, had they been present to observe this deviation during testing, they would have informed testers that the insertion test does not require that the Outer Tactical Vest be worn, which would have resulted in testers conducting the insertion test as required. According to Aberdeen Test Center officials, this violation of the testing protocols had no impact on test results. While we did not independently verify this assertion, Aberdeen Test Center officials told us that the precise physical characterization measurements of the plate's width and dimensions are, alone, sufficient to ensure the plate will fit. In addition, testers deviated from the testing protocols by placing shots at the wrong location on the plate. The testing protocols require that the second shot for one of the environmental subtests, called the impact test, be taken approximately 1.5 inches from the edge of the armor. However, testers mistakenly aimed closer to the edge of the armor for some of the designs tested. Army officials said that the testing protocols were unclear for this test because they did not prescribe a specific hit zone (e.g., 1.25–1.75 inches), but rather relied upon testers' judgment to discern the meaning of the word "approximately." One of the PEO Soldier technical advisors on the Integrated Product Team told us he was contacted by the Test Director after the plates had been shot and asked about the shot location. He told us that he informed the Test Director that the plates had been shot in the wrong location. The PEO Soldier technical advisor told us that, had he been asked about the shot location before the testing was conducted, he could have instructed testers on the correct location at which to shoot. For 17 of the 47 total designs that we observed and measured, testers marked target zones that were less than the required 1.5 inches from the plate's edge, ranging from 0.75 inches to 1.25 inches from the edge.
Because 1.5 inches was outside of the marked aim area for these plates, we concluded that testers were not aiming for 1.5 inches. For the remaining 30 designs tested that we observed and measured, testers used a range that included 1.5 inches from the edge (for example, aiming for 1 to 1.5 inches). It is not clear what, if any, effect this deviation had on the overall test results. While no design failed Preliminary Design Model testing due to the results of this subtest, there is no way to determine if a passing design would have instead failed if the testing protocol had been correctly followed. However, all designs that passed this testing were later subject to First Article Testing, where these tests were repeated in full using the correct shot locations. Of potentially greater consequence to the final test results is our observation of deviations from testing protocols regarding the clay calibration tests. According to testing protocols, the calibration of the clay backing material was supposed to be accomplished through a series of pre-test drops. The depths of the pre-test drops should have been between 22 and 28 millimeters. Aberdeen Test Center officials told us that during Preliminary Design Model testing they did not follow a consistent system to determine if the clay was conditioned correctly. According to Aberdeen Test Center officials, in cases in which pre-test drops were outside the 22- to 28-millimeter range, testers would sometimes repeat one or all of the drops until the results were within range—thus resulting in the use of clay backing materials that should have been deemed unacceptable for use. These inconsistencies occurred because Army testers in each test lane made their own, sometimes incorrect, interpretation of the testing protocols. Members of the Integrated Product Team expressed concerns about these inconsistencies after they found out how calibrations were being conducted. In our conversations with Army and private body armor testing officials, consistent treatment and testing of clay was identified as critical to ensure consistent, accurate testing. According to those officials, if the clay is not conditioned correctly, it will impact the test results. Given that clay was used during Preliminary Design Model testing that had failed the clay calibration tests, it is possible that some shots may have been taken under test conditions different from those stated in the testing protocols, potentially impacting test results. Figure 3 shows an Army tester calibrating the clay with pre-test drops. The most consequential of the deviations from testing protocols we observed involved the measurement of back-face deformation, which did affect final test results. According to testing protocol, back-face deformation is to be measured at the deepest point of the depression in the clay backing. This measure indicates the most force that the armor will allow to be exerted on an individual struck by a bullet. According to Army officials, the deeper the back-face deformation measured in the clay backing, the higher the risk of internal injury or death. During approximately the first one-third of testing, however, Army testers incorrectly measured deformation at the point of aim, rather than at the deepest point of depression.
This is significant because, in many instances, measuring back-face deformation at the point of aim results in measuring at a point upon which less ballistic force is exerted, resulting in lower back-face deformation measurements and overestimating the effectiveness of the armor. The Army's subject matter experts on the Integrated Product Team were not on the test lanes during testing and thus were not made aware of the error until approximately one-third of the testing had been completed. When members of the Integrated Product Team overseeing the testing were made aware of this error, the Integrated Product Team decided to begin measuring at the deepest point of depression. When senior Army leadership was made aware of this error, testing was halted for 2 weeks while Army leadership considered the situation. Army leadership developed many courses of action, including restarting the entire Preliminary Design Model testing with new armor plate submissions, but ultimately decided to continue measuring and scoring officially at the point of aim, since this would not disadvantage any vendors. The Army then changed the test plans and modified the contract solicitation to call for measuring at the point of aim. The Army also decided to collect deepest point of depression measurements for all shots from that point forward, but only as a government reference. During the second two-thirds of testing, we observed significant differences between the measurements taken at the point of aim and those taken at the deepest point, as much as a 10-millimeter difference between measurements. As a result, at least two of the eight designs that passed Preliminary Design Model testing and were awarded contracts would have failed if the deepest point of depression measurement had been used. Figures 4 and 5 illustrate the difference between the point of aim and the deepest point. Before Preliminary Design Model testing began at Aberdeen Test Center, officials told us that Preliminary Design Model testing was specifically designed to meet all the requirements of First Article Testing. However, Preliminary Design Model testing failed to meet its goal of determining which designs met requirements, because of the deviations from established testing protocols described earlier in this report. Those deviations were not reviewed or approved by officials from PEO Soldier, the office of the Director of Operational Test and Evaluation, or the Integrated Product Team charged with overseeing the test. PEO Soldier officials told us that the lack of a PEO Soldier on-site presence during this testing was the result of a deliberate decision made by PEO Soldier management to be as removed from the testing process as possible in order to maximize the independence of the Aberdeen Test Center. PEO Soldier officials told us that it was important to demonstrate the independence of the Aberdeen Test Center to quash allegations of bias made by a vendor whose design had failed prior testing, and that this choice may have contributed to some of the deviations not being identified by the Army earlier during testing. After the conclusion of Preliminary Design Model testing, PEO Soldier officials told us that they should have been more involved in the testing and that they would be more involved in future testing.
After the completion of Preliminary Design Model testing, the Commanding General of PEO Soldier said that, as the Milestone Decision Authority for the program, he elected to repeat the testing conducted during Preliminary Design Model testing through First Article Testing before any body armor was fielded based on the solicitation. According to PEO Soldier officials, at the beginning of Preliminary Design Model testing, there was no intention or plan to conduct First Article Testing following contract awards, given that the Preliminary Design Model testing was to follow the First Article Testing protocol. However, because back-face deformation was not measured at the deepest point, PEO Soldier and the Army Test and Evaluation Command acknowledged that there was no longer an option of forgoing First Article Testing. PEO Soldier also expressed concerns that Aberdeen Test Center test facilities had not yet demonstrated that they were able to test to the same level as NIJ-certified facilities. However, officials from the Army Test and Evaluation Command and DOD's office of the Director of Operational Test and Evaluation asserted that Aberdeen Test Center was just as capable as NIJ-certified laboratories, and Army leadership eventually decided that First Article Testing would be performed at Aberdeen. PEO Soldier maintained an on-site presence in the test lanes, and the Army technical experts on the Integrated Product Team charged with testing oversight resolved the following problems during First Article Testing:

The Army adjusted its testing protocols to clarify the required shot location for the impact test, and Army testers correctly placed these shots as required by the protocols.

After the first few days of First Article Testing, in accordance with testing protocols, Army testers began to increase the velocity after every shot defeated by the armor, as required during V50 testing.

As required by the testing protocols, Army testers conducted the ease-of-insertion tests for both the front and rear pockets of the Outer Tactical Vest, ensuring that the protective plates would properly fit in both pockets.

The Army also began to address the problems identified during Preliminary Design Model testing with the clay calibration tests and back-face deformation measurements. Army testers said they developed an informal set of procedures to determine when to repeat failed clay calibration tests. The procedures, which were not documented, called for repeating the entire series of clay calibration drops if one of the calibration drops showed a failure. If the clay passes either the first or second test, the clay is to be used in testing. If the clay fails both the first and the second series of drops, the clay is then to be placed back in conditioning, and testers get a new block of clay. With respect to back-face deformation measurements, Army testers measured back-face deformation at the deepest point, rather than at the point of aim. Although the Army began to address problems relating to the clay calibration tests and back-face deformation measurements, Army testers still did not follow all established testing protocols in these areas. As a result, the Army may not have achieved the objective of First Article Testing—to determine if the designs tested met the minimum requirements for ballistic protection. First, the orally agreed-upon procedures used by Army testers to conduct the clay calibration tests were inconsistent with the established testing protocols.
Second, with respect to back-face deformation measurements, Army testers rounded back-face deformation measurements to the nearest millimeter, a practice that was neither articulated in the testing protocols nor consistent with Preliminary Design Model testing. Third, also with respect to back-face deformation measurements, Army testers introduced a new, unproven measuring device. Although Army testers told us that they had orally agreed upon an informal set of procedures to determine when to repeat failed clay calibration tests, those procedures are inconsistent with the established testing protocols. The Army deviated from established testing protocols by using clay that had failed the calibration test as prescribed by the testing protocols. The testing protocols specify that a series of three pre-test drops of a weight on the clay must be within specified tolerances before the clay is used. However, in several instances, the Army repeated the calibration test on the same block of clay after it had initially failed, until the results of a subsequent series of three drops were within the required specifications. Army officials told us that the testing protocols do not specify what procedures should be performed when the clay does not pass the first series of calibration drops, so Army officials stated that they developed the procedure they followed internally prior to First Article Testing and provided oral guidance on those procedures to all test operators to ensure a consistent process. Officials we spoke with from the Army, private NIJ-certified laboratories, and industry had mixed opinions regarding the practice of re-testing failed clay, with some expressing concerns that performing a second series of calibration drops on clay that had failed might introduce the risk that the clay may not be at the proper consistency for testing, because as the clay rests it cools unevenly, which could affect the calibration. Aberdeen Test Center's Test Operating Procedure states that clay should be conditioned so that the clay passes the clay calibration test, and Army officials, body armor testers from private laboratories, and body armor manufacturers we spoke to agreed that when clay fails the calibration test, this requires re-evaluation and sometimes adjustment of the clay calibration procedures used. After several clay blocks failed the clay calibration test on November 13, 2008, Army testers recognized that the clay conditioning process used was yielding clay that was not ideal and, as a result, Army testers adjusted their clay conditioning process by lowering the temperature at which the clay was stored. On that same day of testing, November 13, 2008, we observed heavy, cold rain falling on the clay blocks that were being transported to test lanes. These clay blocks had been conditioned that day in ovens located outside of the test ranges at temperatures above 100 degrees Fahrenheit to prepare them for testing, and then were transported outside, uncovered, on a cold November day through heavy rain on the way to the temperature- and humidity-controlled test lane. We observed an abnormally high number of clay blocks failing the clay calibration test and a significantly higher-than-normal level of failure rates for the plates tested on that day. The only significant variation in the test environment we observed that day was constant heavy rain throughout the day.
Our analysis of test data also showed that 44 percent (4 of 9) of the first shots and 89 percent (8 of 9) of the second shots taken on November 13, 2008, resulted in failure penalties. On all of the other days of testing, only 14 percent (10 of 74) of the first shots and 42 percent (31 of 74) of the second shots resulted in failure penalties. Both of these differences are statistically significant, and we believe the differences in the results may be attributable to the different test condition on that day. The established testing protocols require the use of a specific type of non-hardening oil-based clay. Body armor testers from NIJ-certified private laboratories, Army officials experienced in the testing of body armor, body armor manufacturers, and the clay manufacturer we spoke with said that the clay used for testing is a type of sculpting clay that naturally softens when heat is added and that getting water on the clay backing material could cause a chemical bonding change on the clay surface. Those we spoke with further stated that the cold water could additionally cause the outside of the clay to cool significantly more rapidly than the inside, causing the top layer of clay to be harder than the middle. They suggested that clay be conditioned inside the test lanes and said that clay exposed to water or extreme temperature changes should not be used. Army Test and Evaluation Command officials we spoke with said that there is no prohibition in the testing protocols on allowing rain to fall onto the clay backing material and that its exposure to water would not impact testing. However, these officials were unable to provide data to validate their assertion that exposure to water would not affect the clay used during testing or the testing results. Army test officials also said that, since the conclusion of First Article Testing, Aberdeen Test Center has procured ovens to allow clay to be stored inside test lanes, rather than requiring that the clay be transported from another room where it would be exposed to environmental conditions, such as rain. With respect to the issue of the rounding of back-face deformation measurements, during First Article Testing Army testers did not award penalty points for shots with back-face deformations between 43.0 and 43.5 millimeters. This was because the Army decided to round back-face deformation measurements to the nearest millimeter—a practice that is inconsistent both with the Army's established testing protocols, which require that back-face deformation measurements in the clay backing not exceed 43 millimeters, and with the procedures followed during Preliminary Design Model testing. Army officials said that the decision to round the measurements for First Article Testing was made to reflect testing for past Army contract solicitations and common industry practices of recording measurements to the nearest millimeter. While we did not validate the assertion that rounding is a common industry practice, one private industry ballistics testing facility said that its practice was to always round results up, not down, which has the same effect as not rounding at all. Army officials further stated that they should have also rounded Preliminary Design Model results but did not realize this until March 2008—several weeks into Preliminary Design Model testing—and wanted to maintain consistency throughout Preliminary Design Model testing.
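The arithmetic of the rounding decision is easy to see in isolation: under rounding to the nearest millimeter, any measurement below 43.5 millimeters scores as 43 or less and therefore passes a not-to-exceed-43 limit. A minimal illustration follows; the 43-millimeter limit is from the protocols as described in this report, and the sample values are hypothetical.

    # Effect of rounding back-face deformation to the nearest millimeter
    # when scoring against a not-to-exceed-43 mm limit. Sample values
    # are hypothetical.
    LIMIT_MM = 43.0

    for measured in (42.8, 43.2, 43.4, 43.6):
        strict = measured > LIMIT_MM            # protocol as written
        rounded = round(measured) > LIMIT_MM    # rounding practice used
        print(f"{measured} mm: strict={'fail' if strict else 'pass'}, "
              f"rounded={'fail' if rounded else 'pass'}")

Running this shows 43.2 and 43.4 millimeters failing under the protocol as written but passing once rounded; the 43.0-to-43.5 band in which GAO observed unpenalized shots is exactly the band in which the two scoring rules disagree.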
The Army's decision to round measurement results had a significant effect on test outcomes because two designs that passed First Article Testing would have instead failed if the measurements had not been rounded. With respect to the introduction of a new device to measure back-face deformation, the Army began to use a laser scanner to measure back-face deformation without adequately certifying that the scanner could measure against the standard established when the digital caliper was used as the measuring instrument. Although the Army Test and Evaluation Command certified the laser scanner as accurate for measuring back-face deformation, we observed the following certification issues:

The laser was certified based on testing done in a controlled laboratory environment that is not similar to the actual conditions on the test lanes. For example, according to the manufacturer of the laser scanner, the scanner is operable in areas of vibration provided the area scanned and the scanning arm are on the same plane or surface. This was not the case during testing, and thus it is possible the impact of the bullets fired may have thrown the scanner out of alignment or calibration.

The certification is to a lower level of accuracy than required by the testing protocols. The certification study says that the laser is accurate to 0.2 millimeters; however, the testing protocols require an accuracy of 0.1 millimeters or better. Furthermore, the official letter from the Army Test and Evaluation Command certifying the laser for use incorrectly stated that the laser meets an accuracy requirement of 1.0 millimeter rather than 0.1 millimeters as required by the protocols. Officials confirmed that this was not a typographical error.

The laser certification was conducted before at least three major software upgrades were made to the laser, which according to Army officials may have significantly changed the accuracy of the laser. Because of the incorporation of the software upgrades, Army testers told us that they do not know the accuracy level of the laser as it was actually used in First Article Testing.

In evaluating the use of the laser scanner, the Army did not compare the actual back-face deformation measurements taken by the laser with those taken by the digital caliper, which was previously used during Preliminary Design Model testing and by NIJ-certified laboratories. According to vendor officials and Army subject matter experts, the limited data they had previously collected have shown that back-face deformation measurements taken by laser have generally been deeper by about 2 millimeters than those taken by digital caliper. Given those preliminary findings, there is a significant risk that measurements taken by the laser may represent a significant change in test requirements.

Although Army testing officials acknowledged that they were unable to estimate the exact accuracy of the laser scanner as it was actually used during testing, they believed that, based on the results of the certification study, it was suitable for measuring back-face deformation. These test officials further stated that they initially decided to use the laser because they did not believe it was possible to measure back-face deformations to the required level of accuracy using the digital caliper.
However, officials from PEO Soldier and private NIJ-certified laboratories have told us that they believe the digital caliper method is capable of making these measurements with the required level of accuracy and that they have been using this technique successfully for several years. PEO Soldier officials also noted that the back-face deformation measurements in the testing protocols were developed using this digital caliper method. Army testing officials noted that the laser certification study confirmed their views that the laser method was more accurate than the digital caliper. However, because of the problems with the study that we have noted in this report, it is still unclear whether the laser is the most appropriate and accurate technique for measuring back-face deformation. Although we did not observe problems in the Army's determination of penetration results during Preliminary Design Model testing, during First Article Testing we observed that the Army did not consistently follow its testing protocols in determining whether a shot was a partial or a complete penetration. Army testing protocols require that penalty points be awarded when any fragment of the armor material is embedded in or passes into the soft undergarment used behind the plate; however, the Army did not score the penetration of small debris through a plate as a complete penetration of the plate in at least one case that we observed. In this instance, we observed small fragments from the armor three layers deep inside the Kevlar backing behind the plate. This shot should have resulted in the armor's receiving 1.5 penalty points, which would have caused the design to fail First Article Testing. Army officials said that testers counted the shot as only a partial penetration of the plate because it was determined that fibers of the Kevlar backing placed behind the plate were not broken, which they stated was a requirement for the shot to be counted as a complete penetration of the plate. This determination was made with the agreement of an Army subject-matter expert from PEO Soldier present on the lane. However, the requirement for broken fibers is inconsistent with the written testing protocols. Army officials acknowledged that the requirement for broken fibers was not described in the testing protocols or otherwise documented but said that Army testers discussed this before First Article Testing began. Figure 6 shows the tear in the fibers of the rear of the plate in question. Federal internal control standards require that federal agencies maintain effective controls over information processing to help ensure the completeness, accuracy, authorization, and validity of all transactions. However, the Army did not consistently maintain adequate internal controls to ensure the integrity and reliability of its test data. For example, in one case bullet velocity data were lost because the lane Test Director accidentally pressed the delete button on the keyboard, requiring a test to be repeated. Additionally, we noticed that the software being used with the laser scanner to calculate back-face deformation measurements lacked effective edit controls, which could potentially allow critical variables to be inappropriately modified during testing. We further observed a few cases in which testers attempted to memorize test data for periods of time, rather than writing the data down immediately. In at least one case, this practice resulted in the wrong data being reported and entered into the test records.
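Edit controls of the kind contemplated by federal internal control standards are straightforward in principle: validate values at the point of entry and record them in an append-only log rather than relying on memory or on fields that can be silently modified. The following is a minimal sketch of that idea; the plausibility ranges and record format are hypothetical, not Aberdeen Test Center's actual system.

    # Sketch of simple edit controls for test-data capture: validate at
    # entry, write to an append-only log, never overwrite in place.
    # Ranges and record format are hypothetical.
    import json, time

    VALID_RANGES = {"velocity_fps": (2500.0, 3500.0),
                    "back_face_deformation_mm": (0.0, 80.0)}

    def record_measurement(log_path, field, value, operator):
        lo, hi = VALID_RANGES[field]
        if not (lo <= value <= hi):
            raise ValueError(f"{field}={value} outside plausible range {lo}-{hi}")
        entry = {"time": time.time(), "field": field,
                 "value": value, "operator": operator}
        with open(log_path, "a") as f:           # append-only: corrections
            f.write(json.dumps(entry) + "\n")    # are new entries, not edits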
According to Army officials, decisions to implement those procedures that deviated from testing protocols were reviewed and approved by appropriate officials. However, these decisions were not formally documented, the testing protocols were not modified to reflect the changes, and vendors were not informed of the procedures. At the beginning of testing, the Director of Testing said that any change to the testing protocols has to be approved by several Army components; however, the Army was unable to produce any written documentation indicating approval of the deviations we observed by those components. With respect to internal control issues, Army officials acknowledged that before our review they were unaware of the specific internal control problems we identified. We noted during our review that in industry, as part of the NIJ certification process, an external peer review process is used to evaluate the testing processes and procedures of ballistics testing facilities to ensure that effective internal controls are in place. However, we found that the Aberdeen Test Center has conducted no such reviews, a contributing factor to the Army's unawareness of the control problems we noted. As a result of the deviations from testing protocols that we observed, three of the five designs that passed First Article Testing would not have passed under the existing testing protocols. Furthermore, one of the remaining two designs that passed First Article Testing was a design that would have failed Preliminary Design Model testing if back-face deformation had been measured in accordance with the established protocols for that test. Thus, four of the five designs that passed First Article Testing and were certified by the Army as ready for full production would have instead failed testing at some point during the process, either during the initial Preliminary Design Model testing or the subsequent First Article Testing, if all the established testing protocols had been followed. As a result, the overall reliability and repeatability of the test results are uncertain. However, because ballistics experts from the Army or elsewhere have not assessed the impact of the deviations from the testing protocols we observed during First Article Testing, it is not certain whether the effect of these deviations is sufficient to call into question the ability of the armor to meet mission requirements. Although it is certain that some armor passed testing that would not have if specific testing protocols had been followed, it is unclear if there are additional factors that would mean the armor still meets the required performance specifications. For example, the fact that the laser scanner used to measure back-face deformation may not be as accurate as the protocol requires may offset the effects of rounding down back-face deformations. Likewise, it is possible that some of the deviations that did not on their own have a visible effect on testing results could, when taken together with other deviations, have a combined effect that is greater. In our opinion, given the significant deviations from the testing protocols, independent ballistics testing expertise would be required to determine whether or not the body armor designs procured under this solicitation provide the required level of protection.
The Army has ordered 2,500 sets of plates (at two plates per set) from those vendors whose designs passed First Article Testing to be used for additional ballistics testing and 120,000 sets of plates to be put into inventory to address future requirements. However, to date, none of these designs have been fielded because, according to Army officials, there are adequate quantities of armor plates produced under prior contracts already in the inventory to meet current requirements. Body armor plays a critical role in protecting our troops, and the testing inconsistencies we identified call into question the quality and effectiveness of testing performed at Aberdeen Test Center. Because we observed several instances in which actual test practices deviated from the established testing protocols, it is questionable whether the Army met its First Article Testing objectives of ensuring that armor designs fully met the Army's requirements before the armor is purchased and used in the field. While it is possible that the testing protocol deviations had no significant net effect or may have even resulted in armor being tested to a more rigorous standard, it is also possible that some deviations may have resulted in armor being evaluated against a less stringent standard than required. We were unable to determine the full effects of these deviations as they relate to the quality of the armor designs and believe such a determination should only be made based on a thorough assessment of the testing data by independent ballistics testing experts. In light of such uncertainty and the critical need for soldiers to have confidence in their equipment, the Army would take an unacceptable risk if it were to field these designs without taking additional steps to gain the needed confidence that the armor will perform as required. The Army is now moving forward with plans to conduct all future body armor testing at Aberdeen Test Center. Therefore, it is essential that the transparency and consistency of its program be improved by ensuring that all test practices fully align with established testing protocols, that any modifications in test procedures be fully reviewed and approved by the appropriate officials with supporting documentation, and that the testing protocols be formally changed to reflect the revised or actual procedures. Additionally, it is imperative that all instrumentation, such as the laser scanner, used for testing be fully evaluated and certified to ensure its accuracy and applicability to body armor testing. Furthermore, it is essential that effective internal controls over data and testing processes be in place. The body armor industry has adopted the practice, through the NIJ certification program, of using external peer reviews to evaluate and improve private laboratories' test procedures and controls. This type of independent peer review could be equally beneficial to the Aberdeen Test Center. Without all of these steps, there will continue to be uncertainty as to whether future testing data are repeatable and reliable and can be used to accurately evaluate body armor designs. Until Aberdeen Test Center has effectively honed its testing practices to eliminate the types of inconsistencies we observed, concerns will remain regarding the rigor of testing conducted at that facility.
To determine what effect, if any, the problems we observed had on the test data and on the outcomes of First Article Testing, we recommend the Secretary of Defense direct the Secretary of the Army to provide for an independent evaluation of the First Article Testing results by ballistics and statistical experts external to DOD before any armor is fielded to soldiers under this contract solicitation and that the Army report the results of that assessment to the office of the Director of Operational Test and Evaluation and the Congress. In performing this evaluation, the independent experts should specifically evaluate the effects of the following practices observed during First Article Testing: the rounding of back-face deformation measurements; not scoring penetrations of material through the plate as a complete penetration unless broken fibers are observed in the Kevlar backing behind each plate; the use of the laser scanner to measure back-face deformations without a full evaluation of its accuracy as it was actually used during testing, to include the use of the software modifications and operation under actual test conditions; the exposure of the clay backing material to rain and other outside environmental conditions as well as the effect of high oven temperatures during storage and conditioning; and the use of an additional series of clay calibration drops when the first series of clay calibration drops does not pass required specifications. To better align actual test practices with established testing protocols during future body armor testing, we recommend that the Secretary of Defense direct the Secretary of the Army to document all key decisions made to clarify or change the testing protocols. With respect to the specific inconsistencies we identified between the test practices and testing protocols, we recommend that the Secretary of the Army, based on the results of the independent expert review of the First Article Test results, take the following actions: Determine whether those practices that deviated from established testing protocols during First Article Testing will be continued during future testing and change the established testing protocols to reflect those revised practices. Evaluate and recertify the accuracy of the laser scanner to the correct standard with all software modifications incorporated and include in this analysis a side-by-side comparison of the laser measurements of the actual back-face deformations with those taken by digital caliper to determine whether laser measurements can meet the standard of the testing protocols. To improve internal controls over the integrity and reliability of test data for future testing as well as provide for consistent test conditions and comparable data between tests, we recommend that the Secretary of Defense direct the Secretary of the Army to provide for an independent peer review of Aberdeen Test Center's body armor testing protocols, facilities, and instrumentation to ensure that proper internal controls and sound management practices are in place. This peer review should be performed by testing experts external to the Army and DOD. DOD did not concur with our recommendation for an independent evaluation of First Article Testing results and accordingly plans to take no action to provide such an assessment. DOD asserted that the issues we identified do not alter the results of testing.
However, based on our analysis and findings, there is sufficient evidence to raise questions as to whether the issues we identified had an impact on testing results. As a result, we continue to believe it is necessary to have an independent external expert review these test results and the overall effect of the testing deviations we observed on those results before any armor is fielded to military personnel. Without such an independent review, the First Article Test results remain questionable, undermining the confidence of the public and those who might rely on the armor for protection. Consequently, Congress should consider directing the Office of the Secretary of Defense either to require that an independent external review of these body armor test results be conducted or to require that DOD officially amend its testing protocols to reflect any revised test procedures and repeat First Article Testing to ensure that only properly tested designs are fielded. In written comments on a draft of this report, DOD took the position that our findings had no significant impact on the test results and on the subsequent contracting actions taken by the Army. DOD also did not concur with what it perceives as our two overarching conclusions: (1) that Preliminary Design Model testing did not achieve its intended objective of determining, as a basis for contract awards, which designs met performance requirements and (2) that First Article Testing may not have met its objective of determining whether each of the contracted plate designs met performance requirements. DOD commented that it recognizes the importance of personal protection equipment such as body armor and provided several examples of actions DOD and the Army have taken to improve body armor testing. DOD generally concurred with our findings that there were deviations from the testing protocols during Preliminary Design Model testing and First Article Testing. We agree that DOD has taken positive steps to improve its body armor testing program and to address concerns raised by Congress and others. DOD also concurred with our second recommendation to document all key decisions made to clarify or change the testing protocols. DOD did not concur with our first recommendation that an independent evaluation of First Article Testing results be performed by independent ballistics and statistical experts before any of the armor is fielded to soldiers under contracts awarded under this solicitation. Similarly, DOD did not agree with our conclusions that Preliminary Design Model testing did not meet its intended objectives and that First Article Testing may not have met its intended objectives. In supporting its position, DOD cited, for example, that rounding back-face deformation measurements during First Article Testing was an acceptable test practice because rounding has been used historically. It was the intent of PEO Soldier to round back-face deformations for all testing associated with this solicitation, and the Integrated Product Team decided collectively to round back-face deformations during First Article Testing. However, as stated in our report and acknowledged by DOD, the rounding down of back-face deformations was not spelled out or provided for by any of the testing protocol documents. Additionally, it created an inconsistency between Preliminary Design Model testing, where back-face deformations were not rounded down, and First Article Testing, where they were.
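The following minimal sketch (in Python) is illustrative only and is not the Army's scoring procedure; it uses the 43-millimeter penalty threshold from the detailed test plans (discussed in comment 10 of appendix II) and a hypothetical measurement to show how rounding down can flip a penalty decision.

import math

THRESHOLD_MM = 43.0  # per the detailed test plans, a penalty applies to deformations greater than 43 mm

def incurs_penalty(deformation_mm: float, round_down: bool) -> bool:
    """Return True if a shot's back-face deformation incurs a penalty (illustrative only)."""
    value = math.floor(deformation_mm) if round_down else deformation_mm
    return value > THRESHOLD_MM

bfd = 43.4  # hypothetical measurement in the contested 43.00-43.50 range
print(incurs_penalty(bfd, round_down=False))  # True: penalized when measured to the hundredths place
print(incurs_penalty(bfd, round_down=True))   # False: rounds down to 43, so no penalty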
Of greatest consequence, rounding down back-face deformations lowered the requirements that solutions had to meet to pass testing. Two solutions passed First Article Testing because back-face deformations were rounded down, meaning that the Army may be taking an unacceptable risk if plates are fielded without an additional, independent assessment by experts. DOD also did not agree with our finding that a penetration of a plate was improperly scored. DOD did agree that figure 6, which shows the tear in the Kevlar fibers at the rear of the plate in question, appears to show evidence of a perforation and that an Aberdeen Test Center ballistics subject matter expert found particles in the soft backing material behind the plate. Nevertheless, DOD did not concur with our finding because it asserted that no threads were broken on the first layer of Kevlar. However, as we stated in the report, the protocols define a complete penetration as having occurred when the projectile, fragment of the projectile, or fragment of the armor material is imbedded or passes into the soft under garment used behind the protective inserts plates, not when threads of the Kevlar are broken. The fragments found by the Aberdeen Test Center subject matter expert, as well as the three frayed, tattered, and separated Kevlar layers that we and Army testers observed, confirm our observations during testing. DOD also stated that the first layer of soft armor behind the plate under test serves as a witness plate during testing and that, if that first layer of soft armor is not penetrated, as determined by the breaking of threads on that first layer of soft armor, the test shot is not scored as a complete penetration in accordance with PEO Soldier's scoring criteria. We disagree with DOD's position because the protocols do not require the use of a "witness plate" during testing to determine whether a penetration occurred. If this shot had been ruled a complete penetration rather than a partial penetration, this design would have accrued additional point deductions, causing it to fail First Article Testing. DOD did not agree that the certification of the laser scanner was inadequate and made several statements in defense of both the laser and its certification. Among these is the fact that the laser removes the human factor of subjectively trying to find the deepest point, eliminates the risk of pushing the caliper into the clay, and removes the need to use correction factors, all of which we agree may be positive. However, we maintain that the certification of the laser was not adequately performed. As indicated in the certification letter, the laser was certified to a standard that did not meet the requirement of the testing protocols. Additionally, DOD stated that software modifications added to the laser after certification did not affect measurements; however, Army testers told us on multiple occasions that the modifications were designed to change the measurements reported by the laser. DOD added that the scanner does not artificially overstate back-face deformations and relies on the verified accuracy of the scanner and the study involving the scanning of clay replicas to support its claim. Based on our observations, the scanner was certified to the wrong standard and the certification study was not performed in the actual test environment using actual shots. DOD asserted that the scanner does not overstate back-face deformations and that it does not establish a new requirement.
However, DOD cannot properly validate these assertions without a side-by-side comparison of the laser scanner and the digital caliper in their operational environment. Given the numerous issues regarding the laser and its certification, we maintain that its effect on First Article Testing should be examined by an external ballistics expert. DOD also stated that it did not agree with our finding that exposure of the clay backing to heavy rain on one day may have affected test results. DOD challenged our statistical analysis and offered its own statistical analysis as evidence that it was the poor designs themselves that caused unusual test results that day. We stand by our analysis, in combination with statements made by DOD and non-DOD officials with testing expertise and by the clay manufacturer, that exposure of the clay to constant, heavy, cold rain may have had an effect on test results. Further, in analyzing the Army's statistical analysis presented in DOD's comments, we did not find this information to demonstrate that the designs were the cause of the unusual test results that day or that the rain exposure could not have had an effect on the results. More detailed discussions of the Army's analysis and our conclusions are provided in comments 13 and 24 of appendix II. DOD partially disagreed with our finding that the use of an additional series of clay calibration drops, when the first series of drops was outside specifications, did not meet First Article Test requirements, adding that all clay used in testing passed the clay calibration in effect at the time. However, we witnessed several clay calibration drops that were not within specifications. These failed clay boxes were repaired, re-dropped, and either used if they passed the subsequent drop calibration series or discarded if they failed. The protocols allow for only one series of drops per clay box, which is the methodology that Army testers should have followed. DOD stated that NIJ standards do permit the repeating of failed calibration drops. However, our review of the NIJ standards reveals that there is no provision that allows repeat calibration drops. DOD stated in its comments that NIJ standards are inappropriate for its test facilities because these standards are insufficient for the U.S. Army given the expanded testing required to ensure body armor meets U.S. Army requirements. NIJ standards were not the subject of our review, but rather Aberdeen Test Center's application of the Army's current solicitation's protocols during testing. Further, DOD acknowledged in its comments that National Institute of Standards and Technology officials recommended only one series of drops for clay calibration. However, DOD stated that it will partner with the National Institute of Standards and Technology to study procedures for clay calibration, to include repeated calibration attempts, and document any appropriate procedural changes, which we agree is a good step. Based on our analyses as described in our report and in our above responses to DOD's comments, we believe there is sufficient evidence to raise questions as to whether the issues we identified had an impact on testing results. As a result, we continue to believe that it is necessary that DOD allow an independent external expert to review these test results and the overall effect of DOD's deviations on those results before any armor is fielded to military personnel. Without such an independent review, it is our opinion that the First Article Testing results will remain questionable.
Consequently, we have added a matter for congressional consideration to our report suggesting that Congress consider directing DOD either to provide for an independent external review of these body armor test results or to officially amend its testing protocols to reflect any revised test procedures and repeat First Article Testing so that only properly tested designs are fielded. DOD partially concurred with our third recommendation to determine whether those procedures that deviated from established testing protocols during First Article Testing should be continued during future testing and to change the established testing protocols to reflect those revised procedures. DOD recognized the need to update testing protocols and added that when the office of the Director of Operational Test and Evaluation promulgates standard testing protocols across DOD, these standards will address issues that we identified. As long as DOD specifically addresses all the inconsistencies and deviations that we observed prior to any future body armor testing, this would satisfy our recommendation. DOD stated that it partially concurs with our fourth recommendation to evaluate and recertify the accuracy of the laser scanner to the correct standard with all software modifications incorporated, based on the results of the independent expert review of the First Article Testing results. We also recommended that this process include a side-by-side comparison of the laser's measurements of back-face deformations with those taken by digital caliper. DOD concurred with the concept of an independent evaluation, but it did not concur that one is needed in this situation because, according to DOD, its laser certification was sufficient. We disagree that the laser certification was performed correctly. As discussed in the body of our report and further in appendix II, recertification of the laser is critical because (1) the laser was certified to the wrong standard, (2) software modifications were added after the certification of the laser, and (3) these modifications did change the way the laser scanner measured back-face deformations. DOD did not explicitly state whether it concurred with our recommendation for a side-by-side comparison of the laser scanner and the digital caliper in their operational environment. We assert that such a study is important because without it the Army and DOD do not know the effect the laser scanner may have on the back-face deformation standard, which has been used for many years and was established on the assumption that it would be measured with a digital caliper. If the comparison reveals a significant difference between the laser scanner and the digital caliper, DOD and the Army may need to revisit the back-face deformation standard in its requirements with the input of industry experts and the medical community. DOD generally concurred with our fifth recommendation to conduct an independent evaluation of the Aberdeen Test Center's testing protocols, facilities, and instrumentation and stated that such an evaluation would be performed by a team of subject matter experts that included both DOD and non-DOD members. We agree that in principle this approach meets the intent of our recommendation as long as the DOD members of the evaluation team are independent and not made up of personnel from those organizations involved in the body armor testing, such as the office of the Director of Operational Test and Evaluation, the Army Test and Evaluation Command, or PEO Soldier.
DOD's comments and our specific responses to them are provided in appendix II. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our review of body armor testing focused on testing conducted by the Army in response to specific concerns raised by the House and Senate Armed Services Committees and multiple members of Congress. During our review, we were present during two rounds of testing of body armor designs that were submitted in response to a May 2007-February 2008 Army contract solicitation. The first round of testing, called Preliminary Design Model testing, was conducted from February 2008 through June 2008 with the objective of determining whether designs submitted under the contract solicitation met the required ballistic performance specifications and were eligible for contract award. The second round of testing, called First Article Testing, was conducted between November 2008 and December 2008 on the body armor designs that passed the Preliminary Design Model testing. Both tests were conducted at Aberdeen Proving Ground in Aberdeen, Maryland, and were performed by Aberdeen Test Center. During the course of our review, we observed how the Army conducted its body armor testing and compared our observations with the established body armor testing protocols. We did not verify the accuracy of the Army's test data and did not provide an expert evaluation of the results of testing. To understand the practices the Army used and the established testing protocols against which we compared those practices, we met with and/or obtained data from officials of the Department of Defense (DOD) organizations and the industry experts listed in table 1. To determine the degree to which the Army followed established testing protocols during the Preliminary Design Model testing of body armor designs, we were present and made observations during the entire period of testing, compared our observations with established testing protocols, and interviewed numerous DOD and other experts about body armor testing. We observed Army testers as they determined whether designs met the physical and ballistics specifications described in the contract solicitation, and as encouraged by Aberdeen Test Center officials, we observed the ballistics testing from inside a viewing room equipped with video and audio connections to the firing lanes. We also were present and observed the physical characterization of the test items and visited the environmental conditioning chambers, the weathering chamber, and the X-ray facility. We were at Aberdeen Test Center when the designs were delivered for testing on February 7, 2008, and were on-site every day of physical characterization, which comprises the steps performed to determine whether each design meets the required weight and measurement specifications.
We systematically recorded our observations of physical characterization on a structured, paper data-collection instrument that we developed after consulting with technical experts from Program Executive Office (PEO) Soldier before testing started. We were also present for every day of ballistics testing except one, observing and collecting data on approximately 80 percent of the tests from a video viewing room that was equipped with an audio connection to each of the three firing lanes. To gather data from the day that we were not present to observe ballistic testing, we viewed that day's testing on video playback. We systematically recorded our observations of ballistics testing using a structured, electronic data-collection instrument that we developed to record relevant ballistic test data—such as the shot velocity, penetration results, and the amount of force absorbed (called "back-face deformation") by the design tested. Following testing, we supplemented the information we recorded on our data collection instrument with some of the Army's official test data and photos from its Vision Digital Library System. We developed the data collection instrument used to collect ballistics testing data by consulting with technical experts from PEO Soldier and attending a testing demonstration at Aberdeen Test Center before Preliminary Design Model testing began. After capturing the Preliminary Design Model testing data in our data collection instruments, we compared our observations of the way the Aberdeen Test Center conducted testing with the testing protocols that Army officials told us served as the testing standards at the Aberdeen Test Center. According to these officials, these testing protocols comprised (1) the test procedures described in the contract solicitation announcement's purchase descriptions and (2) the Army's detailed test plans and Test Operating Procedure, which serve as guidance to the Aberdeen Test Center testers and were developed by the Army Test and Evaluation Command and approved by PEO Soldier, the office of the Director of Operational Test and Evaluation, the Army Research Labs, and cognizant Army components. We also reviewed National Institute of Justice testing standards because Aberdeen Test Center officials told us that, although Aberdeen Test Center is not a National Institute of Justice-certified testing facility, they have made adjustments to their procedures based on those standards, and we considered those standards when evaluating Aberdeen Test Center's test practices. Regarding the edge shot locations for the impact test samples, we first measured the area of intended impact on an undisturbed portion of the test item on all 56 test samples after the samples had already been shot. The next day we had Aberdeen Test Center testers measure the area of intended impact on a random sample of the impact test samples to confirm our measurements. Throughout testing we maintained a written observation log and compiled all of our ballistic test data into a master spreadsheet. Before, during, and after testing, we interviewed representatives from numerous Army entities—including the Assistant Secretary of the Army for Acquisition, Logistics and Technology; Aberdeen Test Center; Developmental Test Command; Army Research Laboratories; and PEO Soldier—and also attended Integrated Product Team meetings.
To determine the degree to which the Army followed established testing protocols during First Article Testing of the body armor designs that passed Preliminary Design Model testing, we were present and made observations during the entire period of testing, compared our observations with established testing protocols, and interviewed numerous DOD and industry experts about body armor testing. As during Preliminary Design Model testing, we observed Army testers as they determined whether designs met the physical and ballistics specifications described in the contract solicitation. However, unlike during our review of Preliminary Design Model testing, we had access to the firing lanes during ballistic testing. We also still had access to the video viewing room used during Preliminary Design Model testing, so we used a bifurcated approach of observing testing from both the firing lanes and the video viewing room. We were present for every day except one of First Article Testing—from the first day of ballistics testing on November 11, 2008, until the final shot was fired on December 17, 2008. We noted the weights and measurements of plates during physical characterization on the same data collection instrument that we used during Preliminary Design Model testing. For the ballistics tests, we revised our Preliminary Design Model testing data collection instrument so that we could capture data while in the firing lane—data that we were unable to confirm firsthand during Preliminary Design Model testing. For example, we observed the pre-shot measurements of shot locations on the plates and the Aberdeen Test Center's method for recording data and tracking the chain of custody of the plates; we also recorded the depth of the clay calibration drops (the series of pre-test drops of a weight on clay that is to be placed behind the plates during the shots), the temperature of the clay, the temperature and humidity of the firing lane, the temperatures in the fluid soak conditioning trailer, and the time it took to perform tests. We continued to record all of the relevant data that we had recorded during Preliminary Design Model testing, such as the plate number, type of ballistic subtest, the charge weight of the shot, the shot velocity, the penetration results, and the back-face deformation. Regarding the new laser arm that Aberdeen Test Center acquired to measure back-face deformation during First Article Testing, we attended a demonstration of the arm's functionality performed by Aberdeen Test Center and also acquired documents related to the laser arm's certification by the Army Test, Measurement, and Diagnostic Equipment activity. With a GAO senior methodologist and a senior technologist, we made observations related to Aberdeen Test Center's methods of handling and repairing clay, calibrating the laser guide used to ensure accurate shots, and measuring back-face deformation. Throughout testing we maintained a written observation log and compiled all of our ballistic test data into a master spreadsheet. Following testing, we supplemented the information we recorded on our data collection instrument with some of the Army's official test data and photos from its Vision Digital Library System to complete our records of the testing. After capturing the testing data in our data collection instruments, we compared our observations of the way Aberdeen Test Center conducted testing with the testing protocols that Army officials told us served as the testing standards at the Aberdeen Test Center.
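The following minimal sketch (in Python) illustrates the kind of structured shot record our electronic data collection instrument captured; the field names and values are illustrative only, not the actual schema of our instrument or of the Army's data systems.

from dataclasses import dataclass

@dataclass
class ShotRecord:
    # Illustrative fields mirroring the data described above; names are ours
    plate_number: str
    ballistic_subtest: str
    charge_weight_grains: float
    shot_velocity_fps: float
    complete_penetration: bool
    back_face_deformation_mm: float
    clay_temperature_f: float       # recorded during First Article Testing
    lane_temperature_f: float
    lane_humidity_pct: float

# A hypothetical record, for illustration only
record = ShotRecord("ATC-0001", "Threat D", 45.0, 2850.0, False, 41.7, 100.4, 68.0, 35.0)
print(record)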
In analyzing the potential impact of independent variables on testing, such as the potential impact of the November 13 rain on the clay, we conducted statistical tests, including chi-square tests and, to accommodate small sample sizes, Fisher's Exact Test. Before, during, and after testing, we interviewed representatives from numerous Army agencies, including Aberdeen Test Center, Developmental Test Command, Army Research Laboratories, and PEO Soldier. We also spoke with vendor representatives who were present and observing the First Article Testing, as well as with Army and industry subject matter experts. We conducted this performance audit from July 2007 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 1. The Department of Defense (DOD) stated that undertakings of this magnitude are not without flaws and that what was most important was fielding body armor plates that defeated the threat. While DOD may have identified some flaws that may not be serious enough to call the testing results into question, several of the deviations from the testing protocols that we observed do call the testing results into question for the reasons stated in our report. An independent expert has not evaluated the impact of these deviations on the test results and, until such a study is conducted, DOD cannot be assured that the plates that passed testing can defeat the threat. DOD also noted several actions DOD and the Army have taken to improve procedures associated with body armor testing. Our responses to these actions are included in comments 2 through 6. 2. The office of the Director of Operational Test and Evaluation's efforts to respond to members of the Armed Services Committees and to address issues raised by the Department of Defense Inspector General were outside the scope of our audit. Therefore, we did not validate the implementation of the actions DOD cited or evaluate their effectiveness in improving test procedures. With regard to the office of the Director of Operational Test and Evaluation's establishing a policy to conduct First Article Testing at government facilities, using a government facility to conduct testing may not necessarily produce improved test results. 3. Regarding the office of the Director of Operational Test and Evaluation's oversight of testing, that office led the Integrated Product Team and approved the test plans. However, while we were present at the Aberdeen Test Center during Preliminary Design Model testing and First Article Testing, we did not observe on-site monitoring of the testing by the office of the Director of Operational Test and Evaluation staff beyond incidental visits during VIP events and other demonstrations. 4. Regarding the procedures and policies DOD stated were implemented by the Army Test and Evaluation Command to improve testing: Only two of the test ranges were completed prior to Preliminary Design Model testing. Two additional test ranges were completed after Preliminary Design Model testing.
Regarding the certification of the laser scanner measurement device, as noted in our report, the Army had not adequately certified that it was an appropriate tool for body armor testing (see our comment 12). The Army's Test Operating Procedure was not completed or implemented until after Preliminary Design Model testing. New clay conditioning chambers inside each test range were not constructed until after all testing was completed (see our comment 13). The improved velocity measurement accuracy study was not conducted until after all testing was completed. Regarding the implementation of electronic data collection and processing for body armor testing, as stated in our report, we observed that not all data are electronically collected. Many types of data are manually collected and are later converted to electronic data storage. 5. Regarding Program Executive Office (PEO) Soldier's efforts to improve the acquisition of personal protection equipment: The contract solicitation allowed all prospective body armor manufacturers to compete for new contracts. We observed that PEO Soldier did transfer expertise and experience to support Army Acquisition Executive direction that all First Article Testing and lot acceptance testing be conducted by the Army Test and Evaluation Command. The task force that focused on soldier protection was not initiated until February 2009, after all Preliminary Design Model testing and First Article Testing were completed. According to Army officials, PEO Soldier instituted a non-destructive test capability that became operational after Preliminary Design Model testing, but prior to First Article Testing. PEO Soldier's personal protection evaluation process was described in our previous report—GAO-07-662R. Although we recognized the strength of PEO Soldier's personal protection evaluation process in our earlier report, not all the protections that were in place at that time remain in place. For example, the requirement that testing be conducted at a National Institute of Justice (NIJ)-certified facility was waived. 6. DOD stated that many of the actions by Army Test and Evaluation Command and PEO Soldier were initiated and improved upon during the course of our review. However, as discussed above, several of these actions were initiated before and during testing, but many of them were not completed until after testing had concluded. 7. DOD and the Army stated that Preliminary Design Model testing had achieved its objective of identifying those vendor designs that met the performance objectives stated in PEO Soldier's purchase description and that "it is incorrect to state that 'at least two' of the preliminary design models should have failed as they passed in accordance with the modified solicitation." We disagree with these statements. As stated in our report, the most consequential of the deviations from testing protocols we observed involved the measurement of back-face deformation, which did affect final test results. According to original testing protocols, back-face deformation was to be measured at the deepest point of the depression in the clay backing. This measure indicates the most force that the armor will allow to be exerted on an individual struck by a bullet. According to Army officials, the deeper the back-face deformation measured in the clay backing, the higher the risk of internal injury or death.
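To illustrate the difference between measuring at the deepest point and measuring at the point of aim, the following minimal sketch (in Python) uses a hypothetical grid of clay-depression depths; the grid, values, and threshold comparison are ours, for illustration only.

import numpy as np

# Hypothetical depths (mm) measured over the depression in the clay backing
depth_map = np.array([
    [38.2, 40.1, 39.0],
    [41.5, 42.6, 43.8],   # the deepest point is offset from the aim point
    [39.9, 41.0, 40.4],
])
aim_cell = (1, 1)  # hypothetical grid cell directly behind the point of aim

deepest_point_mm = depth_map.max()     # original protocol: deepest point anywhere
point_of_aim_mm = depth_map[aim_cell]  # observed deviation: point of aim only
print(f"deepest point: {deepest_point_mm} mm, point of aim: {point_of_aim_mm} mm")
# Against a hypothetical 43 mm penalty threshold, this shot would be penalized
# under the deepest-point convention (43.8 > 43) but not under the point-of-aim
# convention (42.6 < 43), so measuring at the point of aim can understate risk.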
DOD and the Army now claim that these solutions passed in accordance with the modified solicitation, which overlooks the fact that the reason the solicitation had to be modified was that Army testers deviated from the testing protocols laid out in the purchase descriptions and did not measure back-face deformation at the deepest point. DOD and the Army also stated in their response that they decided to use the point of aim because they determined it was an accurate and repeatable process. Yet in DOD's detailed comments regarding edge shot locations, DOD acknowledged that there were "potential variances between the actual aim point and impact point during testing." The Army Research Laboratory and NIJ-certified laboratories use the benchmark process of measuring back-face deformation at the deepest point, not at the point of aim. As set forth in our report, at least two solutions passed Preliminary Design Model testing that would have failed if back-face deformation had been measured to the deepest point. This statement came directly from Aberdeen Test Center officials during a meeting in July 2008, where they specifically told us which two solutions would have failed. We said "at least" two because Army testers did not record deepest-point back-face deformation data for the first 30 percent of testing, and therefore there could be more solutions that would have failed had the deepest point been measured during this first portion of the test. Because the Army did not measure back-face deformation to the deepest point, it could not identify whether these two solutions in particular and all the solutions in general met performance requirements. As a result, the Army could not waive First Article Testing for successful candidates and was forced to repeat the test to ensure that all solutions did indeed meet requirements. By repeating testing, the Army incurred additional expense and further delays in fielding armor from this solicitation to the soldiers. During the course of our audit, the Army also acknowledged that the Preliminary Design Model testing did not meet its objective because First Article Testing could not be waived without incurring risk to the soldiers. DOD and the Army stated that, upon discovery of the back-face deformation deviation from the testing protocols described in the purchase descriptions, the Army stopped testing. The Army's Contracting Office was informed of this deviation through a series of questions posed by a vendor who was present at the Vendor Demonstration Day on February 20, 2008. This vendor sent questions to the Contracting Office on February 27 asking whether testers were measuring at the aim point or at the deepest point. This vendor also raised questions about how damage to the soft pack would be recorded and about the location of edge shots. Based on our observations, all of these questions involved issues in which Army testers deviated from testing protocols; these issues are discussed in our responses to subsequent comments. The Army did not respond until March 19 and replied that its test procedures complied with solicitation requirements. It was not until Army leadership learned of the vendor's questions and of the deviation in measuring back-face deformation that testing was finally halted on March 27, a full month after the issue came to the Army Test and Evaluation Command's attention.
8. DOD stated that in 2007, prior to the initiation of Preliminary Design Model testing, the Army Test and Evaluation Command, the office of the Director of Operational Test and Evaluation, and Army leadership all agreed that First Article Testing would be conducted as part of the Army's body armor testing. However, DOD did not provide any documentation dated prior to April 2008—that is, prior to the discovery of the back-face deformation deviation—that suggested that DOD intended to conduct First Article Testing following Preliminary Design Model testing. In July 2008, the Army Test and Evaluation Command and PEO Soldier stated in official written responses to our questions regarding Preliminary Design Model testing that the conduct of First Article Testing became essential following Preliminary Design Model testing because of the Army's measuring back-face deformation at the point of aim as opposed to at the deepest point of deformation. In fact, because of this deviation, DOD could not waive First Article Testing as originally planned and was forced to conduct subsequent tests to verify that the designs that had passed Preliminary Design Model testing met testing requirements. DOD asserted that a multi-phase concept including Preliminary Design Model testing, First Article Testing, and extended ballistic testing to support the development of an improved test standard was briefed to a congressional member and professional staff on November 14, 2007. We were present at this November 14 test overview and strategy/schedule briefing and noted that it did not include plans for First Article Testing to be performed in addition to Preliminary Design Model testing. Excerpts from the slides briefed that day showed Preliminary Design Model (Phase 1) testing and subsequent ballistic and suitability testing (Phase 2). As indicated in the slides (see fig. 7 and fig. 8) from that November 14 briefing, the Phase 2 test was designed to test the form, fit, and function of those solutions that had passed Preliminary Design Model testing as well as the ballistic statistical confidence tests. According to information we obtained, Phase 2 was never intended to be First Article Testing and was to have no impact on whether a solution received a contract. It was not until after the back-face deformation deviation was discovered that briefing slides and other documentation on test plans and schedules started describing First Article Testing as following Preliminary Design Model testing. For example, as stated by DOD in its comments, the October 2008 briefing to a congressional member and professional staff clearly showed First Article Testing as following Preliminary Design Model testing (Phase 1) and preceding Phase 2. Therefore, it is not clear why DOD's test plan briefings would make no mention of First Article Testing prior to the back-face deformation measurement deviation while including First Article Testing in subsequent briefings if the plan had always been to conduct both Preliminary Design Model testing and First Article Testing.
Furthermore, it is not clear why DOD would intentionally plan at the start of testing to repeat Preliminary Design Model testing (which was supposed to be performed in accordance with the First Article Testing protocol) with an identical test (First Article Testing), given that it has been the Army's practice to use such Preliminary Design Model testing to meet First Article Testing requirements, a practice that was also supported by the DOD Inspector General and the Army Acquisition Executive after an audit of the Army's body armor testing program. DOD also stated that First Article Testing waivers were not permitted under the body armor solicitation. However, the solicitation and its amendments are unclear as to whether waivers of First Article Testing would be permitted. Nonetheless, in written answers to questions we posed to the Army in July 2008, the Army Test and Evaluation Command and PEO Soldier stated in a combined response that, because back-face deformation was not measured to the deepest point of penetration during Phase 1 tests, there would be no waivers of First Article Testing after the contract award. DOD also stated that it and the Army concluded that First Article Testing had achieved its objective of verifying that contracted vendors could produce, in a full-rate capacity, plates that had passed Preliminary Design Model testing. DOD further stated that it is incorrect to say that First Article Testing did not meet its objective and it is incorrect to assert that three of five vendor designs should have failed First Article Testing. However, our analysis showed that two solutions that passed First Article Testing would have failed if back-face deformations had not been rounded and had been scored as they were during Preliminary Design Model testing. The third solution that passed would have failed if Army testers had correctly scored a shot result as a complete penetration in accordance with the definition of a complete penetration in the purchase description, rather than as a partial penetration. Because questions surround these scoring methods and because DOD and the Army cannot confidently identify whether these vendors can mass-produce acceptable plates, we restate that First Article Testing may not have achieved its objective. See comments 10, 11, and 12 regarding DOD's statements about the rounding of back-face deformations, the Aberdeen Test Center's scoring procedures, and the certification of the laser scanning equipment, respectively. We agree with DOD that an open dialogue with the DOD Inspector General, external test and technology experts, and us will improve the current body armor testing. However, we disagree with DOD's statement that NIJ-certified laboratories lack the expertise to provide reliable information on body armor testing issues. Before the current solicitation, the Army relied on these NIJ-certified laboratories for all body armor source selection and lot acceptance tests. The Marine Corps also conducts source selection tests at these facilities. As these independent laboratories have performed numerous tests for the Army conducted in accordance with First Article Testing protocol, we assert that the credentials of these laboratories warrant consideration of their opinions on body armor testing matters. 9. DOD did not concur with our recommendation for an independent evaluation of First Article Testing results before any armor is fielded to soldiers because, in DOD's view, First Article Testing achieved its objectives.
We disagree with DOD's position that First Article Testing and Preliminary Design Model testing achieved their objectives because we found numerous deviations from testing protocols that allowed solutions to pass testing that otherwise would have failed. Due to these deviations, the majority of which seem to make the testing easier to pass and favor the vendors, we continue to believe that it is necessary to have an independent external expert review the results of First Article Testing and the overall effect of DOD's deviations on those results before the plates are fielded. An independent observer, external to DOD, is best suited to determine the overall impact of DOD's many deviations during the testing associated with this solicitation. Consequently, we have added a matter for congressional consideration suggesting that Congress direct DOD either to conduct this external review or to officially amend its testing protocols to reflect any revised test procedures and repeat First Article Testing. 10. DOD did not concur with our recommendation that the practice of rounding down back-face deformations should be reviewed by external experts because, according to DOD, the practice has been used historically by NIJ-certified laboratories. Although DOD acknowledged that the practice of rounding is not adequately described in the testing protocols, it stated that rounding is permitted under American Society for Testing and Materials (ASTM) E-29. The purchase descriptions (attachments 01 and 02 of the solicitation) referenced five ASTM documents, but ASTM E-29 is not referenced and therefore is not part of the protocol. The detailed test plans state that solutions shall incur a penalty on deformations greater than 43 millimeters, and the Army is correct that neither the purchase description nor the detailed test plans provide for rounding. During Preliminary Design Model testing, Army testers measured back-face deformations to the hundredths place and did not round. Any deformation between 43.00 and 43.50 received a penalty. During First Article Testing, deformations in this range were rounded down and did not incur a penalty, so the decision to round effectively changed the standard in favor of the vendors. Two solutions passed First Article Testing that would have failed if back-face deformations had been scored without rounding, as they were during Preliminary Design Model testing. We recognize that other factors, such as the possibility that the new laser scanner overstates back-face deformations, might justify the decision to round down back-face deformations. However, as a stand-alone event, rounding down deformations did change the standard in the middle of the solicitation between Preliminary Design Model testing and First Article Testing. That is why it is important for an independent external expert to review the totality of the test and the Army's deviations from testing protocols to determine the actual effect of this and other deviations. 11. Regarding the incorrect scoring of a complete penetration as a partial penetration, DOD stated that the first layer of soft armor behind the plate serves as a witness plate during testing. If that first layer of soft armor is not penetrated, as determined by the breaking of threads on that first layer of soft armor, the test shot is not scored as a complete penetration in accordance with PEO Soldier's scoring criteria.
However, DOD's position is not consistent with the established testing protocols, as evidenced by the following: (1) we did not observe the use of a witness plate during testing to determine whether a penetration occurred, and the testing protocols do not require one; and (2) the testing protocols do not state that "the breaking of threads" is the criterion for determining a penetration. The language of the testing protocols, not undocumented criteria, should be used in scoring and determining penetration results. The criteria for scoring a penetration are found in the current solicitation's protocols. Paragraph 6.6 of each of the purchase descriptions states, under "Definitions": "Complete Penetration (CP) for Acceptance Testing: Complete penetrations have occurred when the projectile, fragment of the projectile, or fragment of the armor material is imbedded or passes into the soft under garment used behind the protective inserts plates" (ESAPIs or XSAPIs). Our multiple observations and thorough inspection of the soft armor in question revealed that black-grayish particles had penetrated at least three Kevlar layers, as evidenced by their frayed, fuzz-like, and separated appearance to the naked eye. The black-grayish particles were stopped by the fourth Kevlar layer. DOD acknowledged that figure 6 of our report appears to show evidence of a perforation on the rear of the test plate in question and that the Aberdeen Test Center's subject matter expert found dust particles. These particles are fragments of the projectile or fragments of the armor material that were imbedded and indeed passed into the soft undergarment used behind the protective insert; therefore, the shot should have been ruled a complete penetration according to the testing protocols, increasing the point penalties and causing the design to fail First Article Testing. DOD's comments stated that we acknowledged there were no broken threads on the first layer of the soft armor. We made no such comment, and this consideration is not relevant because, as we have stated, the requirement for broken fibers is not consistent with the written testing protocols. Notably, DOD and Army officials acknowledged that the requirement for broken fibers was not described in the testing protocols or otherwise documented. In addition to the DOD acknowledgement that an Aberdeen Test Center subject matter expert found particles on the soft body armor, more convincing evidence is the picture of the subject plate. Figure 6 of our report clearly shows the tear in the fibers that were placed behind the plate in question, allowing the penetration of the particles found by the Aberdeen Test Center subject matter expert. These particles can only be fragments of the projectile or fragments of the armor material that passed into the soft under garment used behind the protective inserts (plates), confirming our observations of the event and the subsequent incorrect scoring. The shot should have been scored a complete penetration, and the penalty incurred would have caused the design in question to fail First Article Testing. 12. DOD did not concur with our recommendation that the use of the laser scanner be reviewed by experts external to DOD, given the lack of a full evaluation of the scanner's accuracy in measuring back-face deformations, including an evaluation of the software modifications and of operation under actual test conditions.
DOD asserted that the laser scanner measurement device is a superior tool for providing accurate, repeatable, defensible back-face deformation measurements to the deepest point of depression in the clay. We agree that once it is properly certified, tested, and evaluated, the laser may eliminate human errors such as incorrectly selecting the location of the deepest point or piercing the clay with the sharp edge of the caliper and making the depression deeper. However, as we stated, the Army used the laser scanner as a new method to measure back-face deformation without adequately certifying that the scanner could function (1) in its operational environment, (2) at the required accuracy, (3) in conjunction with its software upgrades, and (4) without overstating deformation measurements. DOD asserted that the software upgrades did not affect the measurement system of the laser scanner and that these software changes had no effect on the physical measurement process of the back-face deformation measurement that was validated through the certification process. The software upgrades were added after the certification and include functions that purposely remove spikes and other small crevices in the clay, as well as a smoothing algorithm that changed back-face deformation measurements. We have reviewed these software functions, and they do in fact include calculations that change the back-face deformation measurement taken. Furthermore, Army officials told us that additional upgrades to the laser scanner were made after First Article Testing by Aberdeen Test Center to correct a laser software malfunction identified during the subsequent lot acceptance testing of its plates. According to these officials, this previously undetected error caused an overstatement of the back-face deformation measurement taken by several millimeters, calling into question all the measurements taken during First Article Testing. Also, vendors have told us that they have conducted several studies that show that the laser scanner overestimates back-face deformation measurements by about 2 millimeters as compared with measurements taken by digital caliper, thereby over-penalizing vendors' designs and causing them to fail lot acceptance testing. Furthermore, the laser scanner was certified to an accuracy of 1.0 millimeter, but section 4.9.9.3 of the purchase descriptions requires a device capable of measuring to an accuracy of ±0.1 millimeters. Therefore, the laser does not meet this requirement, making the certification invalid. The laser scanner is an unproven measuring device that may effectively impose a new requirement, because the back-face deformation standards are based on measurements obtained with a digital caliper. This raises concerns that results obtained using the laser scanner may be more inconsistent than those obtained using the digital caliper. As we stated in the report, the Aberdeen Test Center has not conducted a side-by-side test of the new laser scanner used during First Article Testing and the digital caliper previously used during Preliminary Design Model testing. Given the discrepancies in back-face deformation measurements we observed and the overstatement of back-face deformations alleged by the vendors, the use of the laser is still called into question.
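Such a side-by-side evaluation could be structured as a paired comparison of the two devices' readings on the same deformations. The following minimal sketch (in Python) is illustrative only; the paired readings are hypothetical and merely mimic the roughly 2-millimeter overstatement that vendors allege, and it is not an analysis of actual test data.

import numpy as np
from scipy import stats

# Hypothetical paired back-face deformation readings (mm) on the same impressions
caliper_mm = np.array([41.2, 42.8, 43.1, 40.5, 44.0, 42.2])
laser_mm   = np.array([43.0, 44.9, 45.3, 42.4, 46.1, 44.5])

differences = laser_mm - caliper_mm
t_stat, p_value = stats.ttest_rel(laser_mm, caliper_mm)  # paired t-test
print(f"mean laser-minus-caliper difference: {differences.mean():.2f} mm "
      f"(paired t-test p-value: {p_value:.4f})")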
Thus, we continue to support our recommendation that experts independent of DOD review the use of the laser during First Article Testing; a full evaluation of the laser scanner is imperative to ensure that tests are repeatable and can be relied upon before armor plates are procured for our military personnel based on the results of body armor testing at the Aberdeen Test Center using the laser scanner. Lastly, DOD stated that the laser scanner is used by the aeronautical industry; however, Army Test and Evaluation Command officials told us that the scanner had to be customized for testing through various software additions and mounting customizations to mitigate vibrations and other environmental factors. These software additions and customizations change the operation of the scanner. 13. DOD did not concur with our recommendation that experts examine, among other items, "the exposure of clay backing material to rain and other outside environmental conditions as well as the effect of high oven temperatures during storage and conditioning," because it believes that such conditions had no impact upon First Article Testing results. As detailed in the report, we observed these conditions at different points throughout the testing period. Major variations in materials preparation and testing conditions, such as exposure to rain, or violations of testing protocols merit consideration when analyzing the effectiveness and reliability of First Article Testing. As one specific example, we described in this report statistically significant differences between the rates of failure in response to one threat on November 13 and the failure rates on all other days of testing but did not use the statistical analysis as the definitive causal explanation for those failures. We observed one major environmental difference in testing conditions that day, the exposure of temperature-conditioned clay to heavy, cold rain in transit to the testing site. After experts confirmed that such variation might be one potential factor relating to overall failure rates on that day, we conducted statistical tests to assess whether failure rates on November 13 differed from those on other dates. Our assertion that the exposure of the clay to rain may have had an impact on test results is based not solely on our statistical analysis of test results that day; rather, it is also based on our conversations with industry experts, including the clay manufacturer, and on the fact that we witnessed an unusually high number of clay calibration failures during testing that involved plate designs of multiple vendors, not just the one design that DOD points to as the source of the high failure rate. We observed that the clay conditioning trailer was located approximately 25 feet away from the entrance to the firing lane. The clay blocks, weighing in excess of 200 pounds, were loaded face up onto a cart, and then a single individual pulled the cart over approximately 25 feet of gravel to the firing lane entrance. Once there, entry was delayed because the cart had to be positioned precisely to fit through the firing lane door. Army testers performed all of this without covering the clay to protect it from the rain and the cold, and once inside, the clay had significant amounts of water collected on it. With respect to the unusually high number of clay calibration failures on November 13, there were seven clay calibration drops that were not within specifications.
Some of these failed clay boxes were discarded in accordance with the testing protocols; however, others were repaired, re-dropped, and used if they passed the second drop series. The plates tested on these re-dropped boxes included one plate that was later ruled a no-test and three plates for which the first shot yielded a catastrophic back-face deformation. These were the only three first-shot catastrophic back-face deformations during the whole test; they all occurred on the same rainy day and involved two different solutions, not just the one that DOD claims performed poorly. The failure rates of plates as a whole, across all plate designs, were very high that day, and the failures were of both the complete penetration and the back-face deformation variety. Water conducts heat approximately 25 times faster than air, which means the water on the surface cooled the clay considerably faster than the clay would have cooled by air exposure alone. Moreover, Army testers lowered the temperature of the clay conditioning trailers during testing on November 13 and told us they did so because the ovens and clay were too hot. This is consistent with what Army subject matter experts and other industry experts told us—that cold rain collecting on hot clay may theoretically create a situation in which the clay is more susceptible both to complete penetrations, because of the colder, harder top layer, and to excessive back-face deformations, because of the overheated, softer clay beneath the top layer. Finally, the clay manufacturer told us that, although this is an oil-based clay, water can affect the bonding properties of the clay, making it more difficult for wet clay to stick together. This is consistent with what we observed on November 13. After the first shot on one plate, as Army testers were removing the plate from the clay in order to determine the shot result, we observed a large chunk of clay fall to the floor. This clay was simply swept off to the side by the testers. In another instance, as testers were repairing the clay after the calibration drop, one of the testers pulled a long blade over the surface of the clay to smooth it. When he hit the spot where one of the calibration drops had occurred and the clay had been repaired, the blade pulled up the entire divot, and the testers had to repair the clay further. Regarding our use of no-test data, we were strict about the instances in which we used these data; see our comment 24. DOD stated that it was the poor performance of one solution in particular that skewed the results for this day and that this solution failed 70 percent of its shots against Threat D during First Article Testing. DOD's statistic is misleading. This solution failed 100 percent of its shots (6 of 6) on November 13 but only 50 percent (7 of 14) on all other test days. Also, the fact that this solution managed to pass Preliminary Design Model testing but performed so poorly during First Article Testing raises questions about the repeatability of DOD's and the Army's test practices. Finally, DOD's own analysis confirms that two of the four solutions tested on November 13 performed at their worst level of the entire test on that day. If the solution whose plate was questionably ruled a no-test that day is included in the data, then three of the four solutions performed at their worst level on that day.
DOD said that, after testing, Aberdeen Test Center completed the planned installation of new clay conditioning chambers inside the test ranges, precluding any external environmental conditions from interacting with the clay. We believe it is a step in the right direction that the Aberdeen Test Center has corrected this problem for future testing, but we continue to believe that an external entity needs to evaluate the impact of introducing this new independent variable on this day of First Article Testing. 14. DOD concurred that it should establish a written standard for conducting clay calibration drops but did not concur that failed blocks were used during testing. DOD asserted that all clay backing material used during testing passed the calibration drop test prior to use. We disagree with this position because the calibration of the clay required by the testing protocols calls for "a series of drops," meaning one series of three drops, not the multiple series of three drops that we observed on various occasions. DOD stated that, as a result of our review and the concerns cited in our report, the Aberdeen Test Center established and documented a revised procedure stating that only one repeat calibration attempt can be made and that, if the clay does not pass calibration upon the second attempt, it is reconditioned for later use and a new block of clay is substituted for calibration. Based on the testing protocols, this is still an incorrect procedure for ensuring the proper calibration of the clay prior to shooting. The testing protocols do not allow for a repeat series of calibration drops. DOD also says that, upon completion of testing under the current Army solicitation and in coordination with the National Institute of Standards and Technology, the office of the Director of Operational Test and Evaluation and the Army will review the procedures for clay calibration, including repeated calibration attempts, and will document any appropriate procedural changes. DOD goes on to say that the NIJ standard, as verified by personnel at the National Institute of Standards and Technology, does not specifically address the issue of repeating clay calibration tests. However, the Aberdeen Test Center's application of the protocols in the Army's current solicitation during testing, and not the NIJ standards, was the subject of our review. In its comments, DOD acknowledged that National Institute of Standards and Technology officials recommend only one series of drops for clay calibration, but the Aberdeen Test Center did multiple drops during testing. We are pleased that DOD has agreed to partner with the National Institute of Standards and Technology to conduct experiments to improve the testing community's understanding of clay performance in ballistic testing, but in our opinion these conversations and studies should have occurred prior to testing, not after, as this deviation from testing protocols calls the test results into question. We reassert that an external entity needs to evaluate the impact of this practice on First Article Testing results. 15. DOD partially concurred with our recommendation and agreed that inconsistencies were identified during testing; however, DOD asserted that the identified inconsistencies did not alter the test results. As stated in our response to DOD's comments on our first recommendation, we do not agree.
Our observations clearly show that (1) had the deepest point been used during Preliminary Design Model testing, two designs that passed would have failed and (2) had the Army not rounded First Article Testing results down, two designs that passed would have failed. Further, if the Army had scored the particles (which, in its comments on this report, DOD acknowledges were embedded in the shoot pack behind the body armor) according to the testing protocols, a third design that passed First Article Testing would have failed. In all, four of the five designs that passed Preliminary Design Model testing and First Article Testing would have failed if testing protocols had been followed. 16. DOD partially concurred with our recommendation that, based on the results of the independent expert review of the First Article Testing results, it should evaluate and recertify the accuracy of the laser scanner to the correct standard with all software modifications incorporated and include in this analysis a side-by-side comparison of the laser measurements of actual back-face deformations with those taken by digital caliper to determine whether laser measurements can meet the standard of the testing protocols. DOD maintains that it performed an independent certification of the laser measurement system and process and that the software changes that occurred did not affect the measurement system in the laser scanner. However, as discussed in comment 12, we do not agree that an adequate, independent certification of the laser measurement system and process was conducted. Based on our observations, we continue to assert that the software changes added after certification did affect the measurement system in the laser. 17. DOD partially concurred with our recommendation for the Secretary of the Army to provide for an independent peer review of the Aberdeen Test Center's body armor testing protocols, facilities, and instrumentation. We agree that a review conducted by a panel of external experts that also includes DOD members could satisfy our recommendation. However, to maintain the independence of this panel, the DOD members should not include personnel from the organizations involved in the body armor testing (such as the office of the Director of Operational Test and Evaluation, the Army Test and Evaluation Command, or PEO Soldier). 18. DOD stated that Aberdeen Test Center had been extensively involved in body armor testing since the 1990s and has performed several tests of body armor plates. We acknowledge that Aberdeen Test Center conducted limited body armor testing for the initial testing of the Interceptor Body Armor system in the 1990s and have clarified the report to reflect that. However, as acknowledged by DOD, Aberdeen Test Center did not perform any additional testing on that system for PEO Soldier since the 1990s, and this lack of experience in conducting source selection testing for that system may have led to the misinterpretations of testing protocols and the deviations noted in our report. According to a recent Army Audit Agency report, NIJ-certified testing facilities conducted First Article Testing and lot acceptance testing for the Interceptor Body Armor system prior to the current solicitation.
Another reason Aberdeen Test Center could not conduct source selection testing was that, in the past, it lacked a capability for the cost-effective production testing of personnel armor systems; the test facilities were old, could not support test requirements for a temperature- and humidity-controlled environment, and could not provide enough capacity to support a war-related workload. The Army has spent about $10 million over the last few years upgrading the existing facilities with state-of-the-art capability to support research and development and production qualification testing for body armor, according to the Army Audit Agency. Army Test and Evaluation Command notes that there were several other tests between 1997 and 2007, but according to Army officials these were customer tests not performed in accordance with a First Article Testing protocol. For example, the U.S. Special Operations Command test completed in May 2007 and cited by DOD was a customer test not conducted in accordance with First Article Testing protocol. The Aberdeen Test Center built new lanes and hired and trained contractors to perform the Preliminary Design Model testing and First Article Testing. 19. DOD stated that, to date, it has obligated about $120 million for XSAPI and less than $2 million for ESAPI. However, the value of the 5-year indefinite delivery/indefinite quantity contracts we cited is based on the maximum value of orders for ESAPI/XSAPI plates that can be purchased under these contracts. Given that the Army has fulfilled the minimum order requirements for this solicitation, the Army could decide not to purchase additional armor based on this solicitation and not incur almost $7.9 billion in costs. DOD stated in its response that there are only three contracts. However, the Army Contracting Office told us that four contracts were awarded and provided those contracts to us for our review. Additionally, we witnessed four vendors participating in First Article Testing, all of which had to receive contracts to participate. It is unclear why DOD stated that there were only three contracts. 20. DOD is correct that there is no limit or range specified for the second shot location for the impact subtest. However, this only reinforces that the shot should have been aimed at 1.5 inches, not at 1.0 inch or at various points between 1.0 inch and 1.5 inches. It also does not explain why the Army continued to mark plates as though there were a range for this shot. Army testers would draw lines at approximately 0.75 inches for the inner tolerance and 1.25 inches for the outer tolerance of ESAPI plates. They drew lines at approximately 1.0 inch for the inner tolerance and 1.5 inches for the outer tolerance of XSAPI plates. We measured these lines for every impact test plate and also had Army testers measure some of these lines to confirm our measurements. We found that, of 56 test items, 17 were marked with shot ranges wholly inside of 1.5 inches. The ranges of 30 other test items did include 1.5 inches somewhere in the range, but the center of the range (where Army testers aimed the shot) was still inside of 1.5 inches. Only four test items were marked with ranges centered on 1.5 inches. DOD may be incorrect in stating that shooting closer to the edge would have increased the risk of a failure for this subtest. For most subtests this may be the case, but according to Army subject matter experts the impact test is different.
For the impact test, the plate is dropped onto a concrete surface, striking the crown (center) of the plate. The test is to determine if this weakens the structural integrity of the plate, which could involve various cracks spreading from the center of the plate outward. The reason the requirement for this shot on this subtest is written differently (i.e., to be shot at approximately 1.5 inches from the edge, as opposed to within a range between 0.75 inches and 1.25 inches or between 1.0 inch and 1.5 inches on other subtests) is that the subtest is meant to test the impact's effect on the plate. For this subtest and this shot, there may actually be a higher risk of failure the closer to the center the shot occurs. PEO Soldier representatives acknowledged that the purchase descriptions should have been written more clearly and changed the requirement for this shot to a range of 1.5 to 2.25 inches during First Article Testing. We confirmed that Army testers correctly followed shot location testing protocols during First Article Testing by double-checking the measurements on the firing lane prior to the shooting of the plate. We also note that, although DOD stated the Preliminary Design Model testing shot locations for the impact test complied with the language of the testing protocols, under the revised protocol used during First Article Testing several of these Preliminary Design Model testing impact test shot locations would not have been valid. DOD stated that there was no impact on the outcome of the test, but DOD cannot say that definitively. Because shooting closer to the edge may have favored the vendors in this case, the impact could have been that a solution or solutions may have passed that should not have. 21. The Army stated that "V50 subtests for more robust threats…were executed to the standard protocols." Our observations and analysis of the data show that this statement is incorrect. Sections 2.2.3.h(2) of the detailed test plans state: "If the first round fired yields a complete penetration, the propellant charge for the second round shall be equal to that of the actual velocity obtained on the first round minus a propellant decrement for 100 ft/s (30 m/s) velocity decrease in order to obtain a partial penetration. If the first round fired yields a partial penetration, the propellant charge for the second round shall be equal to that of the actual velocity obtained on the first round plus a propellant increment for a 50 ft/s (15 m/s) velocity increase in order to obtain a complete penetration. A propellant increment or decrement, as applicable, at 50 ft/s (15 m/s) from actual velocity of last shot shall be used until one partial and one complete penetration is obtained. After obtaining a partial and a complete penetration, the propellant increment or decrement for 50 ft/s (15 m/s) shall be used from the actual velocity of the previous shot." V50 testing is conducted to discern the velocity at which 50 percent of the shots of a particular threat would penetrate each of the body armor designs. The testing protocols require that, after every shot that is defeated by the body armor, the velocity of the next shot be increased. Whenever a shot penetrates the armor, the velocity should be decreased for the next shot. This increasing and decreasing of the velocities is supposed to be repeated until testers determine the velocity at which 50 percent of the shots will penetrate.
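To make the quoted procedure concrete, the following sketch simulates the up-and-down charge adjustment described in section 2.2.3.h(2). It is a simplified illustration under our own assumptions, not the Army's test software: fire_shot is a hypothetical stand-in for a live test shot, and the V50 estimate follows the common convention of averaging an equal number of the highest partial-penetration and lowest complete-penetration velocities.

    # Illustrative sketch only. fire_shot(velocity) is a hypothetical
    # stand-in for a live test shot; it returns True for a complete penetration.
    def run_v50_series(fire_shot, start_velocity, shots=10):
        velocity = start_velocity
        results = []  # list of (velocity in ft/s, complete_penetration) pairs
        for shot_number in range(shots):
            complete = fire_shot(velocity)
            results.append((velocity, complete))
            if shot_number == 0:
                # Per the protocol: after a first-shot complete penetration,
                # decrease velocity 100 ft/s; after a partial, increase 50 ft/s.
                velocity += -100 if complete else 50
            else:
                # Thereafter, adjust 50 ft/s from the actual velocity of the
                # previous shot: down after a complete, up after a partial.
                velocity += -50 if complete else 50
        return results

    def estimate_v50(results, count=3):
        # Conventional estimate: average an equal number of the highest
        # partial-penetration and lowest complete-penetration velocities.
        partials = sorted(v for v, complete in results if not complete)[-count:]
        completes = sorted(v for v, complete in results if complete)[:count]
        if len(partials) < count or len(completes) < count:
            return None  # not enough mixed results to estimate a V50
        return sum(partials + completes) / (2 * count)

    # Hypothetical demonstration: armor that defeats anything below 2,850 ft/s.
    shots = run_v50_series(lambda v: v >= 2850, start_velocity=2800)
    print("estimated V50:", estimate_v50(shots))

As the guard in estimate_v50 suggests, the estimate requires both partial and complete penetrations; a series that never produces a complete penetration cannot yield a V50.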
In cases in which the armor far exceeds the V50 requirement and is able to defeat the threat for the first six shots, the testing may be halted without discerning the V50 for the plate, and the plate may be ruled as passing the requirements. During Preliminary Design Model V50 testing, Army testers would achieve three partial penetrations and then continue to shoot at approximately the same velocity, or lower, for shots 4, 5, and 6 in order to intentionally achieve six partial penetrations. Army testers told us that they did this to conserve plates. According to the testing protocols, Army testers should have continued to increase the charge weight in order to try to achieve a complete penetration and determine a V50 velocity. The effect of this methodology was that solutions were treated inconsistently. Army officials told us that this practice had no effect on which designs passed or failed, which we do not dispute in our report; however, this practice made it impossible to discern the true V50s for these designs based on the results of Preliminary Design Model testing. 22. DOD agreed that Army testers deviated from the testing protocols by measuring back-face deformation at the point of aim. DOD stated that this decision was made by Army leadership in consultation with the office of the Director of Operational Test and Evaluation because it would not disadvantage any vendor. We agree with DOD that this decision was made by Army leadership in consultation with the office of the Director of Operational Test and Evaluation. We did not independently assess all factors being considered by Army leadership when they made the decision to overrule the Integrated Product Team and the Milestone Decision Authority's initial decision to measure to the deepest point. DOD also stated that measuring back-face deformation at the point of aim is an accurate and repeatable process. As we pointed out in our previous responses, DOD's own comments regarding DOD's Assertion 3 contradict this statement where DOD writes that there were "potential variances between the actual aim point and impact point during testing." Furthermore, we observed that the aim laser used by Army testers was routinely out of line with where the ballistic was penetrating the yaw card, despite continued adjustments to line up the aim laser with where the ballistic was actually traveling. DOD stated that it is not possible to know the reference point on a curved object when the deepest deformation point is laterally offset from the aim point. We disagree. DOD acknowledges in its response that PEO Soldier had an internally documented process to account for plate curvature when the deepest point of deformation was laterally offset from the point of aim. The use of correction factor tables is an industry standard that has been in place for years; this standard practice has been used by NIJ laboratories and is well known to vendors. DOD and the Army presented several statistics on the difference between aim point back-face deformation and deepest point back-face deformation in testing and stated that the difference between the two is small. We do not agree with DOD's assertion that a difference of 10.66 millimeters is small. In the case of Preliminary Design Model testing, the effect of measuring at the aim point rather than at the deepest point was that at least two solutions passed that otherwise would have failed.
These designs passed subsequent First Article Testing but have gone on to fail lot acceptance testing, raising additional questions regarding the repeatability of the Aberdeen Test Center's testing practices. DOD asserts that the adoption of the laser scanner measurement technique completely resolves the problems the Army experienced in measuring back-face deformations. We would agree that the laser scanner has the potential to be a useful device, but when used in the manner in which Aberdeen Test Center used it, without an adequate certification and without a thorough understanding of how the laser scanner might effectively change the standard for a solution to pass, we do not agree that it resolved back-face deformation measurement issues. Aberdeen Test Center officials told us that they did not know what the accuracy of the laser scanner was as it was used during First Article Testing. 23. DOD acknowledged the shortcoming we identified. DOD then asserted that, once the deviation of measuring back-face deformation at the point of aim rather than at the deepest point of depression was identified, those involved acted decisively to resolve the issue. We disagree, based on the timeline of events described in our response to DOD's comments on Preliminary Design Model testing, as well as on the following facts. We were present at the Integrated Product Team meeting on March 25 and observed that all members of the Integrated Product Team agreed to start measuring immediately at the deepest point, to score solutions based on this deepest point data, to conserve plates, and then, at the end of the testing, to make up the tests incorrectly performed during the first third of testing, as needed. We observed Army testers implement this plan the following day. Then, on March 27, Army leadership halted testing for 2 weeks, considered the issue, and reversed the unanimous decision of the Integrated Product Team, deciding instead to score to the point of aim. The deviation of scoring solutions based on the back-face deformation at the point of aim created a situation in which the Army could not have confidence in any solution that passed the Preliminary Design Model testing. Because of this, the Army had to repeat testing, in the form of First Article Testing, to determine whether the solutions that had passed Preliminary Design Model testing actually met requirements. 24. DOD did not concur with our finding that rain may have impacted the test results. DOD stated that such conditions had no impact upon First Article Testing results. Our statistical analysis of the test data shows failure rates to be significantly higher on November 13 than during other days of testing, and our observations taken during that day of testing and our conversations with industry experts familiar with the clay, including the clay manufacturer, suggest that the exposure of the clay to the cold, heavy rain on that day may have been the cause of the high failure rates. Our analysis examined the 83 plates tested against the most potent threat, Threat D. The testing protocols required that two shots for the record be taken on each plate. We analyzed the 83 first shots taken on these plates separately from the 83 second shots. These analyses confirmed statistically that the rate of failure on November 13 was significantly higher than the rate of failure on other days.
Further, of the 5 plates that experienced first-shot catastrophic failures during testing, 3 (60 percent) were tested on November 13, and all 3 of these failures were due to excessive back-face deformation. Given that only 9 plates were tested on November 13, while 74 were tested during all the other days of testing combined, it is remarkable that 60 percent of all catastrophic failures occurred on that one day of testing. DOD objected to our inclusion of no-test data in our calculation of first- and second-shot failure rates on November 13. We believe that the inclusion of no-test data is warranted because the Army's exclusion of such plates was made on a post hoc basis, after the shots were initially recorded as valid shots, and because the rationale for determining the need for a retest was not always clear. Additionally, we conducted an analysis excluding the no-test plates identified by DOD, and that analysis again showed that the failure rate on November 13 was statistically higher than during the other days of testing, even after the exclusions. Excluding the no-test plates, 38 percent of first shots on November 13 (3 of 8) and 88 percent of second shots (7 of 8) failed. In its response, DOD reports that Aberdeen Test Center's own statistical analysis of test data for Threat D reveals that the observed failure rate on November 13 is attributable to the "poor performance" of one design throughout testing. DOD asserts that its illustration indicates that "Design K was the weakest design on all days with no rain as well as days with rain." DOD's data do not support such a claim. As we have observed, excluding no-test plates, DOD's data are based on 10 tests of two shots each for each of 8 designs (160 cases total). Each shot is treated as an independent trial, an assumption we find tenuous given that a plate's structural integrity might be affected by the first shot. To account for date, DOD subdivides the data into cell sizes far too small to derive reliable statistical inferences about failure rates (between 2 and 6 shots per cell), as evidenced by the wide confidence intervals illustrated in DOD's visual representation of its analysis. Among the evidence DOD presented to support its claim that Design K was the weakest performing design both on November 13 and on other days is failure rate data for four designs that were not tested on the day in question. Of the four designs tested on November 13, two had only one or two plates tested that day, far too few to conduct reliable statistical tests on differences in design performance. For another design tested on that day (Design L), the three plates tested had a markedly higher failure rate (3 of 6 shots, or 50 percent) on that day than on other days (when it had 5 failures in 14 shots, a 36 percent failure rate). Design K had a failure rate of 6 of 6 shots (100 percent) on the day in question, compared with 8 of 14 shots (57 percent) on other days. It is impossible to determine from such a small set of tests whether the lack of statistical significance between different designs' failure rates on November 13 and other days results from small sample size or from a substantive difference in performance. Overall, the Army Test and Evaluation Command's design-based analysis cannot distinguish between the potential effects of date and design on failure rates because sufficient comparison data do not exist to conduct the kind of multivariate analysis that might resolve this issue.
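Our report does not name the specific statistical procedures used. As one illustration of why cells this small support only weak inferences, the sketch below applies Fisher's exact test, a standard choice for small two-by-two tables, to the design-level counts cited above; the choice of test and the code are our assumptions for illustration, not a reproduction of our analysis or of Aberdeen Test Center's.

    # Illustrative sketch only: Fisher's exact test applied to the
    # design-level counts cited above (an assumed test choice).
    from scipy.stats import fisher_exact

    # design: (failures on Nov. 13, shots on Nov. 13,
    #          failures on other days, shots on other days)
    designs = {
        "Design K": (6, 6, 8, 14),
        "Design L": (3, 6, 5, 14),
    }

    for name, (f13, n13, f_oth, n_oth) in designs.items():
        table = [[f13, n13 - f13], [f_oth, n_oth - f_oth]]
        _, p_value = fisher_exact(table)  # two-sided by default
        print(f"{name}: {f13}/{n13} failed on Nov. 13 vs {f_oth}/{n_oth} "
              f"on other days, p = {p_value:.2f}")

    # With so few shots per cell, even Design K's 100 percent versus
    # 57 percent split does not reach conventional significance,
    # illustrating why these data cannot separate date from design.

This is the small-sample problem described above: the same counts are consistent both with a design effect and with a date effect, and only more comparison data, analyzed jointly, could distinguish the two.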
Because the data alone are inadequate for distinguishing between the potential effects of date and design, we continue to recommend that independent experts evaluate the potential effects of variations in materials preparation and testing conditions, including those occurring on November 13, on overall First Article Testing results. Additionally, DOD stated that the clay is largely impervious to water. However, as stated in our report, body armor testers from NIJ-certified private laboratories, Army officials experienced in the testing of body armor, body armor manufacturers, and the manufacturer of the clay used in testing told us that getting water on the clay backing material could cause a chemical bonding change on the clay's surface. DOD stated that one of its first actions when bringing in the clay is to scrape the top of the clay to level it. However, this only removes clay that is above the metal edge of the box. Clay that is already at or below the edge of the box is not removed by this scraping. We witnessed several instances in which the blade would remove clay at some points but leave large portions of the clay surface untouched because the clay was below the edge of the box. 25. See comment 11. 26. DOD is correct that the particular example we cited of deleted official test data happened only once. Fortunately, the results of the retest were the same as those of the initial test. After we noted this deficiency, Army officials told us that a new software program was being added that would prevent this from occurring again. DOD also stated that only two persons are authorized and able to modify the laser scanner software. We did not verify this statement; however, we assert that DOD needs an auditable trail of any such modifications and should require supervisory review and documentation or logging of these setting changes. 27. DOD acknowledged that the Army did not formally document significant procedure changes that deviated from established testing protocols or assess the impact of these deviations. 28. In our report we stated that the requirement to test at an NIJ-certified laboratory was withdrawn because the Aberdeen Test Center is not NIJ-certified. DOD's comments on this point do not dispute our statement. Instead, DOD discussed NIJ certification and stated that it does not believe that NIJ certification is appropriate for its test facilities. However, we did not recommend that any DOD test facilities be NIJ-certified or even that NIJ be the outside organization to provide the independent review of the testing practices at Aberdeen Test Center that we did recommend. We believe, however, that NIJ certification would satisfy our recommendation for an independent review. In asserting that NIJ certification is not appropriate for its test facilities, DOD pointed to significant differences between NIJ and U.S. Army body armor test requirements. However, NIJ certification of a test laboratory and the NIJ protocol for testing personal body armor primarily used by law enforcement officers are two distinct issues. Similar to a consumer Underwriters Laboratories certification, an NIJ laboratory certification includes an independent peer review of internal control procedures, management practices, and laboratory practices. This independent peer review is conducted to ensure that there are no conflicts of interest and that the equipment utilized in the laboratory is safe and reliable.
This peer review helps to ensure a reliable, repeatable, and accurate test, regardless of whether the test in question follows a U.S. Army testing protocol or a law enforcement testing protocol. NIJ-certified laboratories have consistently proven capable of following an Army testing protocol, as demonstrated by the fact that NIJ-certified laboratories have conducted previous U.S. Army body armor source selection testing in accordance with First Article Testing protocol, as well as lot acceptance tests. The slide DOD included in its comments is not applicable here because it deals with the difference between testing protocols: the protocols for Army Interceptor Body Armor tests and the NIJ protocol for testing personal body armor primarily used by law enforcement officers. NIJ certification of a laboratory and NIJ certification of body armor for law enforcement purposes are two different things. 29. DOD stated that we were incorrect in asserting that the Army decided to rebuild small arms ballistics testing facilities at Aberdeen Test Center after the 2007 House Armed Services Committee hearing. Instead, DOD stated that the contract to construct additional test ranges at the Aberdeen Test Center Light Armor Range was awarded in September 2006 and that construction was already underway at the time of the June 2007 hearing. DOD also stated that this upgrade was not in response to any particular event but was undertaken to meet projected future Army ballistic test requirements. Army officials we spoke with before testing for this solicitation told us that this construction was being completed in order to perform the testing we observed. As of July 2007, the Light Armor Range included two pre-WWII era ballistic lanes and four partially completed modern lanes. However, the lanes we visited at that time were empty, and none of the testing equipment was installed; only the buildings were completed. In addition to the physical rebuilding of the test sites, the Army also rebuilt its workforce to be able to conduct the testing. As stated on page 4 of DOD's comments, PEO Soldier has instituted an effort to transfer testing expertise and experience from PEO Soldier to the Army Test and Evaluation Command. Prior to the start of testing, we observed that Aberdeen Test Center hired, transferred in, and contracted for workers to conduct the testing. These workers were then trained by Aberdeen Test Center and conducted pilot tests in order to learn how to conduct body armor testing. We observed parts of this training in person and other parts via recorded video. In addition, we spoke with officials during this training and preparation process. From our observations and discussions with Army testers and PEO Soldier officials, we believe this process to have been a restarting of small arms ballistic testing capabilities at Aberdeen Test Center. Based on DOD's comments, we clarified our report to reflect this information. In addition to the contact named above, key contributors to this report were Cary Russell, Assistant Director; Michael Aiken; Gary Bianchi; Beverly Breen; Paul Desaulniers; Alfonso Garcia; William Graveline; Mae Jones; Christopher Miller; Anna Maria Ortiz; Danny Owens; Madhav Panwar; Terry Richardson; Michael Shaughnessy; Doug Sloane; Matthew Spiers; Karen Thornton; and John Van Schaik.

The Army has issued soldiers in Iraq and Afghanistan personal body armor, comprising an outer protective vest and ceramic plate inserts.
GAO observed Preliminary Design Model testing of new plate designs, which resulted in the Army's awarding contracts in September 2008, valued at a total of over $8 billion, to vendors of the designs that passed that testing. Between November and December 2008, the Army conducted further testing, called First Article Testing, on these designs. GAO is reporting on the degree to which the Army followed its established testing protocols during these two tests. GAO did not provide an expert ballistics evaluation of the results of testing. GAO, using a structured, GAO-developed data collection instrument, observed both tests at the Army's Aberdeen Test Center, analyzed data, and interviewed agency and industry officials to evaluate observed deviations from testing protocols. However, independent ballistics testing expertise is needed to determine the full effect of these deviations. During Preliminary Design Model testing the Army took significant steps to run a controlled test and maintain consistency throughout the process, but the Army did not always follow established testing protocols and, as a result, did not achieve its intended test objective of determining, as a basis for awarding contracts, which designs met performance requirements. In the most consequential of the Army's deviations from testing protocols, Army testers incorrectly measured the amount of force absorbed by the plate designs by measuring back-face deformation in the clay backing at the point of aim rather than at the deepest point of depression. Army testers recognized the error after completing about a third of the test and then changed the test plan to call for measuring at the point of aim and likewise issued a modification to the contract solicitation. At least two of the eight designs that passed Preliminary Design Model testing and were awarded contracts would have failed if measurements had been made to the deepest point of depression. The deviations from the testing protocols were the result of Aberdeen Test Center's incorrect interpretation of those protocols. In all these cases of deviations, the procedures Aberdeen Test Center implemented were not reviewed or approved by the Army and Department of Defense officials responsible for approving the testing protocols. After concerns were raised regarding the Preliminary Design Model testing, the decision was made not to field any of the plate designs awarded contracts until after First Article Testing was conducted. During First Article Testing, the Army addressed some of the problems identified during Preliminary Design Model testing, but GAO observed instances in which Army testers did not follow the established testing protocols and did not maintain internal controls over the integrity and reliability of data, raising questions as to whether the Army met its First Article Test objective of determining whether each of the contracted designs met performance requirements. The following are examples of deviations from testing protocols and other issues that GAO observed: (1) The clay backing placed behind the plates during ballistics testing was not always calibrated in accordance with testing protocols and was exposed to rain on one day, potentially impacting test results. (2) Testers improperly rounded down back-face deformation measurements, which is not authorized in the established testing protocols and which resulted in two designs passing First Article Testing that otherwise would have failed.
Army officials said rounding is a common practice; however, one private test facility that rounds told GAO that it rounds up, not down. (3) Testers used a new instrument to measure back-face deformation without adequately certifying that the instrument could function correctly and in conformance with established testing protocols. The impact of this issue on test results is uncertain, but it could call into question the reliability and accuracy of the measurements. (4) Testers deviated from the established testing protocols in one instance by improperly scoring a complete penetration as a partial penetration. As a result, one design passed First Article Testing that would otherwise have failed. With respect to internal control issues, the Army did not consistently maintain adequate internal controls to ensure the integrity and reliability of test data. In one example, during ballistic testing, data were lost and testing had to be repeated because an official accidentally pressed the delete button and software controls were not in place to protect the integrity of test data. Army officials acknowledged that before GAO's review they were unaware of the specific internal control problems GAO identified.
Biomonitoring—one technique for assessing people's exposure to chemicals—involves measuring the concentration of chemicals or their by-products in human specimens, such as blood or urine. Biomonitoring has been used to monitor certain workers' lead exposure for many decades. More recently, advances in analytic methods have allowed scientists to measure more chemicals, in smaller concentrations, using smaller samples of blood or urine. As a result, biomonitoring has become more widely used for a variety of applications, including public health research and measuring the impact of certain environmental regulations, such as the decline in blood lead levels that followed declining levels of lead in gasoline. The CDC began collecting health statistics on the U.S. population through its National Health and Nutrition Examination Survey (NHANES) in 1971. This effort evolved over time, and in 1976 the CDC began collecting biomonitoring data, but only for a handful of chemicals, such as lead and certain pesticides. In 1999, the CDC substantially increased the number of chemicals in the biomonitoring component of the program to 116 and began analyzing and reporting these biomonitoring data in successive versions of the National Report on Human Exposure to Environmental Chemicals. The three reports issued to date have provided a window into the U.S. population's exposure to chemicals, and the CDC continues to develop new methods for collecting data on additional chemical exposures with each report. The NHANES design does not select or exclude participants on the basis of their potential for low or high exposure to a chemical. The current design of the biomonitoring program does not permit examination of exposure levels by locality, state, or region; season of the year; proximity to sources of exposure; or use of particular products. For example, it is not possible to extract a subset of the data and examine blood lead levels that represent the levels in a particular state's population. Some specific uses of data from the CDC's biomonitoring program are to determine which chemicals are present in individuals in the U.S. population, and at what concentrations; determine, for chemicals with a known toxicity level, the prevalence of people with levels above those toxicity levels; establish reference ranges that can be used by physicians and scientists to determine whether a person or group has an unusually high exposure; assess the effectiveness of public health efforts to reduce exposure of individuals to specific chemicals; determine whether exposure levels are higher among minorities, children, women of childbearing age, or other potentially vulnerable groups; track, over time, trends in levels of exposure of the population; and set priorities for research on human health effects. Some states have enacted local biomonitoring programs to identify and address health concerns. For example, Alaska is collecting women's hair samples to test them for mercury and is supplementing those data with information on the women's fish consumption and data on local fish mercury levels collected by the U.S. Fish and Wildlife Service. As another example, California is planning how to implement a statewide biomonitoring program and is currently selecting which chemicals to include in the program.
As more data have become available regarding the general population's exposure to a variety of commercial chemicals, public concern has grown over the health risks posed by exposure to chemicals such as flame retardants used in furniture or common pesticides used in and around the home. However, the utility and interpretation of biomonitoring data remain controversial, and the challenge for environment and health officials is to understand the health implications and to craft the appropriate policy responses. For decades, government regulators have used a process called "risk assessment" to understand the health implications of commercial chemicals. Researchers use this process to estimate how much harm, if any, can be expected from exposure to a given contaminant or mixture of contaminants, and to help regulators determine whether the risk is significant enough to require banning or regulating the chemical or other corrective action. The National Academy of Sciences—a private, nonprofit institution that provides science, technology, and health policy advice under a congressional charter—described the four stages of health risk assessment in 1983. The first stage is hazard identification, the determination of whether a particular chemical is or is not causally linked to particular health effects. The second stage is dose-response assessment, which involves determining the relationship between the magnitude of exposure to a contaminant and the probability and severity of adverse effects. These two stages generally involve studies that expose animals to high doses of a chemical and observe the adverse effects. The third stage is exposure assessment—that is, identifying the extent to which exposure is likely to occur. For this stage, risk assessors generally use data on chemical concentrations in the air, water, food, or other environmental media, combined with assumptions about how and at what rate the body is exposed to or absorbs the chemicals. Risk assessors also use assumptions about human behavior based on observational studies—such as the time spent outdoors or, for children, the amount of time spent on the floor—to better estimate an individual's true exposure. The fourth stage of the health risk assessment process is risk characterization—that is, combining the information from the first three stages into a conclusion about the nature and magnitude of the risk, including attendant uncertainty. These assessments typically result in the creation of chemical-specific "reference values" that are based on an intake level or a concentration in an environmental medium. An example of such a reference value is a "reference dose," which is an estimate (with uncertainty spanning perhaps an order of magnitude) of a daily oral exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime. A reference dose can be derived from a no observable adverse effect level (NOAEL), lowest observed adverse effect level, or benchmark dose, with uncertainty factors generally applied to reflect limitations of the data used. Uncertainty factors are used to account for interspecies extrapolation and intraspecies variation and, in some cases, for the duration of the study or the lack of a NOAEL. In addition, some legislation is based on the default assumption that children may be more sensitive to chemicals than adults.
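To make the reference dose arithmetic concrete, the sketch below divides a hypothetical NOAEL by default 10-fold uncertainty factors of the kind described above. The NOAEL value and the particular combination of factors are illustrative assumptions, not figures from any actual EPA assessment.

    # Illustrative sketch only: the NOAEL and the choice of factors here
    # are hypothetical, not values from an actual EPA assessment.
    noael_mg_per_kg_day = 5.0  # hypothetical NOAEL from an animal study

    uncertainty_factors = {
        "interspecies extrapolation (animal to human)": 10,
        "intraspecies variation (sensitive subgroups)": 10,
        # Additional factors may apply, for example, for study duration,
        # the lack of a NOAEL, or added protection for children.
        "children's safety factor": 10,
    }

    combined = 1
    for factor in uncertainty_factors.values():
        combined *= factor  # factors are multiplied together

    reference_dose = noael_mg_per_kg_day / combined
    print(f"combined uncertainty factor: {combined}")    # 1000
    print(f"reference dose: {reference_dose} mg/kg-day")  # 0.005

As the sketch shows, each added factor divides the reference dose further, so larger combined factors yield smaller, more protective reference doses.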
The Food Quality Protection Act, for example, requires an additional 10-fold safety factor to protect children. Biomonitoring research is difficult to integrate into this risk assessment process because estimates of human exposure to chemicals have historically been based on the concentration of these chemicals in environmental media and on information about how people are exposed. Biomonitoring data, however, provide a measure of internal dose that is the result of exposure to all environmental media and depends on how the human body processes and excretes the chemical. To integrate biomonitoring into traditional risk assessment, researchers must determine how to correlate this internal exposure with their prior understanding of how external exposure affects human health. Although the CDC has been the primary agency collecting biomonitoring data, EPA has specific authority to assess and manage chemical risks, often in coordination with other federal agencies. Several EPA offices are involved in collecting chemical data and assessing chemical risks. The Office of Pollution Prevention and Toxics (OPPT) manages programs under TSCA. The act provides EPA with the authority to collect information about chemical substances or, upon making certain determinations, to require companies to develop information and take action to control unreasonable risks by either preventing or limiting the introduction of dangerous chemicals into commerce or by placing restrictions on those already in the marketplace. TSCA also creates an Interagency Testing Committee to recommend to EPA chemicals for priority testing consideration. Furthermore, the EPA Administrator is specifically directed to coordinate with the Department of Health and Human Services and other federal agencies to conduct research, development, and monitoring as necessary to carry out the purposes of TSCA, and to establish and coordinate a system for exchange among federal, state, and local authorities of research and development results respecting toxic chemicals. The Office of Pesticide Programs (OPP) manages programs under the Federal Insecticide, Fungicide, and Rodenticide Act and the Federal Food, Drug, and Cosmetic Act, which require EPA to review pesticide risks to the environment before allowing a pesticide to be sold or distributed in the United States and to set maximum pesticide residue levels allowed in or on food. Risk assessment activities at EPA are carried out by the agency's Office of Research and Development (ORD)—its principal scientific and research arm—and its program and regional offices, including the Office of Air and Radiation, OPP, OPPT, and the Office of Water. ORD's role is to provide program and regional offices with scientific advice and information for use in developing and implementing environmental policies, regulations, and practices. In fulfilling this role, ORD issues guidance documents for risk assessors, such as its Exposure Factors Handbook, and conducts and funds research aimed at addressing data gaps and reducing scientific uncertainty. This research is divided into two categories: core research and problem-driven research. Core research seeks to produce a fundamental understanding of the key biological, chemical, and physical processes that underlie environmental systems, thus forging basic scientific capabilities that can be applied to a wide range of environmental problems.
Core research addresses questions common to many EPA programs and provides the methods and models needed to confront unforeseen environmental problems. Problem-driven research, in contrast, focuses on regulatory, program office, or regional needs and may address specific pollutants or the development of models or methods to answer specific questions. EPA makes limited use of current biomonitoring studies because such studies cover relatively few chemicals, and EPA rarely knows whether the measured amounts in people indicate a risk to human health. Nonetheless, EPA has taken action in a few cases, when biomonitoring studies showed that people were widely exposed to a chemical that appeared to pose health risks. The CDC's biomonitoring program provides the most comprehensive biomonitoring data relevant to the U.S. population. The results of the program are summarized in three versions of the National Report on Human Exposure to Environmental Chemicals. The latest report, issued in 2005, covered 148 chemicals, and the forthcoming version in 2009 will provide data on about 250 chemicals. However, there are over 83,000 chemicals on the TSCA Chemical Substance Inventory. Of those chemicals, EPA focuses on screening and prioritizing the more than 6,200 chemicals that companies produce in quantities of more than 25,000 pounds per year at one site. About 3,000 of these 6,200 chemicals are produced at more than 1 million pounds per year in total. Current biomonitoring efforts also provide little information on children. Large-scale biomonitoring studies generally omit children because it is difficult to collect biomonitoring data from them. For example, some parents are concerned about the invasiveness of taking blood samples from their children, and certain other fluids, such as umbilical cord blood or breast milk, are available only in small quantities and only at certain times. When samples are available from children, they may not be large enough to analyze because, for the reasons previously mentioned, the test requires more fluid than is available. In other cases, the samples are reserved for other purposes. For example, the CDC collects samples through its health and nutrition survey but uses these samples to study biological indicators related to nutrition, such as the amount of water-soluble or fat-soluble vitamins, iron, or trace elements. Thus, the only biomonitoring analyses that the CDC has performed on samples from children under 6 are for cadmium, lead, mercury, cotinine—a by-product of tobacco smoke—and certain perfluorinated chemicals. Even if biomonitoring information is available for a chemical, it is often of limited use. EPA indicated that it often lacks the additional information needed to make biomonitoring results useful for risk assessment. Biomonitoring provides information only on the level of a chemical in a person's body. The detectable presence of a chemical in a person's blood or urine does not necessarily mean that the chemical causes disease. While exposure to larger amounts of a chemical may cause an adverse health impact, a smaller amount may be of no health consequence. In addition, biomonitoring data alone do not indicate the source, route, or timing of the exposure, making it difficult to identify the appropriate risk management strategies. As a result, EPA has made few changes to its chemical risk assessments or safeguards in response to the recent proliferation of biomonitoring data.
For most chemicals, additional data on health effects; on the sources, routes, and timing of exposure; and on the fate of a chemical in the human body would be needed to incorporate biomonitoring into risk assessment. However, as we have discussed in prior reports, EPA will face difficulty in using its authorities under TSCA to require chemical companies to develop health and safety information on the chemicals they produce. We have designated the assessment and control of toxic chemicals as a "high-risk" area of government that requires broad-based transformation. EPA has used some biomonitoring data in chemical risk assessment and management, but only when additional studies have provided insight into the health implications of the biomonitoring data. For example, EPA used both biomonitoring and traditional risk assessment information to take action on certain perfluorinated chemicals. These chemicals are used in the manufacture of consumer and industrial products, including nonstick cookware coatings; waterproof clothing; and oil-, stain-, and grease-resistant surface treatments. In 1999, EPA began an investigation after receiving biomonitoring data from a chemical company indicating that perfluorooctanesulfonic acid (PFOS) was found in the general population. Further testing showed that PFOS also was persistent in the environment, was unexpectedly toxic, tended to accumulate in the human body, and was present in low concentrations in the blood of the general population and wildlife worldwide. The principal PFOS manufacturer voluntarily phased out its production in 2002, and EPA then required manufacturers and importers to notify EPA 90 days before manufacturing or importing PFOS and PFOS-related chemicals for certain new uses. In addition, in September 2002, EPA initiated a review of perfluorooctanoic acid (PFOA)—another perfluorinated chemical. The agency cited biomonitoring data indicating widespread human exposure in the United States and animal toxicity studies that linked PFOA exposure to developmental effects and to effects on the liver and the immune system. EPA has sought to work with multiple parties to produce missing information on PFOA through the negotiation of enforceable consent agreements, memorandums of understanding, and voluntary commitments. In 2006, EPA also launched the 2010/15 PFOA Stewardship Program, in which eight companies voluntarily committed to reduce facility emissions and product content of PFOA and related chemicals by 95 percent no later than 2010 and to work toward eliminating emissions and product content by 2015. EPA also used biomonitoring data in a few other cases. In the 1980s, EPA was considering whether to make permanent a temporary ban on lead in gasoline. National data on lead exposure showed a decline in average blood lead levels that corresponded to the declining amounts of lead in gasoline. On the basis of these data and other information, EPA strengthened its restrictions on lead. In the 1990s, EPA used biomonitoring studies to develop a reference dose for methylmercury, a neurotoxin. Mercury occurs naturally and is also released through industrial pollution. In water, it can turn into methylmercury and then accumulate in fish. These studies showed that elevated levels of mercury in women's hair and their infants' umbilical cord blood correlated with adverse neurological effects when the children reached age 6 or 7.
In its fiscal year 2008 Performance and Accountability Report, EPA used results from biomonitoring studies to track its performance in reducing blood levels of lead, mercury, certain pesticides, and polychlorinated biphenyls. Furthermore, EPA used biomonitoring data in evaluating the safety of two pesticides: triclosan in 2008 and chlorpyrifos in 2006. Finally, EPA officials told us that the agency may adopt the use of biomonitoring data as a tool to evaluate the long-term outcomes of risk mitigation efforts. EPA has several biomonitoring research projects under way, but the agency has no system in place to track progress or assess the resources needed specifically for biomonitoring research. EPA also does not separately track spending or staff time devoted to biomonitoring research. Instead, it places individual biomonitoring research projects within its larger Human Health Research Strategy. While this strategy includes some goals relevant to biomonitoring, EPA has not systematically identified and prioritized the data gaps that prevent it from using biomonitoring data. Nor has it systematically identified the resources needed to reach biomonitoring research goals or identified which chemicals most need additional biomonitoring-related research. EPA intends to revise its Human Health Research Strategy for 2009 and said that the revision may include a greater focus on how the agency can interpret biomonitoring data and use them in risk assessments. Also, EPA lacks a coordinated national strategy for the many agencies and other groups involved in biomonitoring research, which could impair its ability to address the significant data gaps in this field of research. In addition to the CDC and EPA, several other federal agencies have been involved in biomonitoring research, including the Agency for Toxic Substances and Disease Registry, the Occupational Safety and Health Administration, and entities within the National Institutes of Health (NIH). Several states have also initiated biomonitoring programs to examine state and local health concerns, such as arsenic in local water supplies or elevated mercury exposure among populations that consume large amounts of fish. Furthermore, some chemical companies have for decades monitored their workforces for chemical exposure, and chemical industry associations have funded biomonitoring research. Finally, some environmental organizations have conducted biomonitoring studies of small groups of adults and children, including one study on infants. A national biomonitoring research plan could help better coordinate research and link data needs with collection efforts. EPA has suggested chemicals for future inclusion in the CDC's National Biomonitoring Program but has not gone further toward formulating an overall strategy to address data gaps and ensure the progress of biomonitoring research. We have previously noted that, to begin addressing the need for biomonitoring research, federal agencies will need to strategically coordinate their efforts and leverage their limited resources. Similarly, the National Academy of Sciences found that the lack of a coordinated research strategy allowed widespread exposures to go undetected, including exposures to PFOA and flame retardants known as polybrominated diphenyl ethers. The academy noted that a coordinated research strategy would require input from various agencies involved in biomonitoring and supporting disciplines.
In addition to EPA, these agencies include the CDC, NIH, the Food and Drug Administration, and the U.S. Department of Agriculture. Such coordination could strengthen efforts to identify and possibly regulate the sources of the exposure detected by biomonitoring, since the most common sources—that is, food, environmental contamination, and consumer products—are under the jurisdiction of different agencies. EPA has taken some promising steps to address data gaps relevant to biomonitoring, which we discuss in the remaining paragraphs of this section. For example, EPA has funded research to address certain links between chemical exposure, biomonitoring measurements, and health effects. The agency worked with NIH to establish and fund several Centers for Children’s Environmental Health and Disease Prevention Research (Children’s Centers). One of these centers is conducting a large-scale study exploring the environmental and genetic causes of autism, and it plans to use various types of biomonitoring data collected from parents and children to quantify chemical exposures and examine whether samples from children with autism contain different biomarkers than samples from children without autism. EPA’s Children’s Health Protection Advisory Committee stated that EPA’s Children’s Centers program represents an excellent investment that provides both short- and long-term benefits to children’s health. EPA also awards grants that are intended to advance the knowledge of children’s exposures to pesticides through the use of biomarkers, and of the potential adverse effects of these exposures. The grants went to projects that, among other things, investigated the development of less invasive biomarkers for common pesticides, related biomarkers to indices of early neurological development, and analyzed the association between pesticide levels in environmental samples and pesticide body burdens. According to EPA, this research has helped the agency to better assess children’s exposure to chemicals and assess the risk of certain pesticides. Furthermore, EPA pursues internal research to develop and analyze biomonitoring data. For example, EPA has studied the presence of the herbicide 2,4-D in 135 homes with preschool-age children by analyzing soil, outdoor air, indoor air, carpet dust, food, urine, and samples taken from subjects’ hands. The study shed important light on how best to collect urine samples that reflect an external dose of the herbicide. It is also helping EPA researchers develop models that simulate how the body processes specific chemicals, which will help them understand the links between biomonitoring data and initial sources and routes of chemical exposure. In another area of research, EPA has partially implemented a National Academy of Sciences recommendation by collecting biomonitoring data during some animal toxicology studies. Collecting this information allows EPA to relate animal biomonitoring data to animal health effects, which is likely to be useful in interpreting human biomonitoring data. However, EPA does not routinely collect this information. Finally, EPA has collaborated with other agencies and industry on projects that may improve the agency’s ability to interpret and use biomonitoring data. For example, EPA collaborated with other federal agencies in the development of the National Children’s Study, a long-term study of environmental and genetic effects on children’s health, which is slated to begin collecting data later in 2009.
The agency proposes to examine the effects of environmental influences on the health and development of approximately 100,000 children across the country, following them from before birth until age 21. Several researchers have noted that since the study is slated to collect biomonitoring samples and data on environmental exposures in the home while tracking children’s health status, the study would provide a unique opportunity to address data gaps and begin linking external exposure sources, biomonitoring measurements, and health outcomes. However, the study depends upon a sustained funding commitment, which it has not yet received, and the National Academy of Sciences has noted concerns regarding funding uncertainty. In a separate effort, EPA cosponsored a private consultant’s pilot project to create “biomonitoring equivalents” for four chemicals. These are biomonitoring measurements intended to have a well-understood relationship to existing measures of exposure, such as oral reference doses. This relatively new concept could help EPA better interpret the biomonitoring results for these and other chemicals and could highlight when additional research and analysis are needed. EPA has other programs that it uses to gather additional chemical test data or to gather production and use information from companies, but these programs are not designed to interpret biomonitoring data. We discuss some of these programs in more detail in appendix II. EPA’s authorities under TSCA to obtain biomonitoring data are generally untested. While our analysis of the relevant TSCA provisions and of recent administrative action suggests that EPA may be able to craft a strategy for obtaining biomonitoring data under some provisions of TSCA, EPA has not determined the full extent of its authority or the full extent of chemical companies’ responsibilities with respect to biomonitoring. Several provisions of TSCA address data development and reporting. These relevant provisions are shown in table 1 and detailed in the text that follows. Under section 4 of TSCA, EPA can require chemical companies to test chemicals for their effects on health or the environment, but this process is difficult, expensive, and time-consuming. To require testing, EPA must determine that there are insufficient data to reasonably determine or predict the effects of the chemical on health or the environment, and that testing is necessary to develop such data. The agency must also make one of two additional findings. The first is that a chemical may present an unreasonable risk of injury to human health or the environment. The second is that a chemical is or will be produced in substantial quantities, and that either (1) there is or may be significant or substantial human exposure to the chemical or (2) the chemical enters or may reasonably be anticipated to enter the environment in substantial quantities. EPA has said that it could theoretically require the development of biomonitoring data under section 4 of TSCA, but the agency’s authority to do so has not yet been tested. Generally, section 4 allows EPA, if it makes the necessary findings, to promulgate a “test rule” requiring a company to “develop data with respect to the health and environmental effects for which there is an insufficiency of data.” Biomonitoring data indicate only the presence of a chemical in a person’s body, and not its impact on the person’s health.
However, EPA told us that biomonitoring data may in some cases demonstrate chemical characteristics—such as persistence, uptake, or fate—that could be relevant to the health and environmental effects of the chemical. Section 4 lists several chemical characteristics as items for which EPA can prescribe standards for development under a test rule, explicitly including persistence but also including any other characteristic that may present an unreasonable risk. Although biomonitoring may not be the only way to demonstrate persistence, uptake, or fate, section 4 also authorizes EPA to prescribe certain methodologies for conducting tests under a test rule, including but not limited to epidemiologic studies, serial or hierarchical tests, in vitro tests, and whole-animal tests. Biomonitoring is not a listed methodology, but EPA stated that it could publish a standard test guideline for using biomonitoring as a methodology for obtaining data on health effects and chemical characteristics, or it could include biomonitoring in a section 4 test rule where warranted. Sections 5(a) and 5(b) of TSCA may be of limited use to EPA in obtaining biomonitoring data from chemical companies. Specifically, section 5(a) requires chemical companies to notify EPA at least 90 days before beginning to manufacture a new chemical or before manufacturing or processing a chemical for a use that EPA has determined by rule is a significant new use. The notice provided by the company must include “any test data in the possession or control of the person giving such notice which are related to the effect of any manufacture, processing, distribution in commerce, use, or disposal of such substance . . . on health or the environment,” as well as “a description of any other data concerning the environmental and health effects of such substance, insofar as known to the person making the notice or insofar as reasonably ascertainable.” As we have previously described, EPA told us that data concerning “environmental and health effects” could include biomonitoring data. While a notice under section 5 may include test data required to be developed under a section 4 test rule, section 5(b) does not provide independent authority for EPA to require the development of any new data. Thus, section 5(b) can only be used by EPA to obtain data that the chemical companies have on hand. EPA has noted that companies are particularly unlikely to have biomonitoring data for new chemicals on hand because there is little opportunity for exposure to the chemical prior to full-scale manufacture. Under certain circumstances, EPA may be able to indirectly require the development of new test data using the leverage that it has under section 5(e) to limit the manufacture of chemicals, although the agency has never attempted to do so. Under section 5(e), when a company proposes to begin manufacturing a new chemical or to introduce an existing chemical for a significant new use, EPA may determine (1) that the available information is not sufficient to permit a reasoned evaluation of the health and environmental effects of that chemical and (2) that in the absence of such information, the manufacture of the chemical may meet certain risk or exposure thresholds. If the agency does so, the Administrator can issue a proposed order limiting or prohibiting the manufacture of the chemical. If a chemical company objects to such an order, the matter becomes one for the courts. If a court agrees with the Administrator, it will issue an injunction against the chemical company limiting or prohibiting manufacture of the chemical.
If and when the chemical company submits data to EPA sufficient for the Administrator to make a reasoned determination about the chemical’s health and environmental effects, which may include test data, the injunction can be dissolved. Thus, an injunction would provide an incentive for the chemical company to develop testing data. Also under this section, EPA sometimes issues a consent order that does not prohibit the manufacture of the chemical, but subjects it to certain conditions, including additional testing. EPA typically uses such consent orders to require testing of toxic effects and a chemical’s fate in the environment. While EPA may not be explicitly authorized to require the development of such test data under this section, chemical companies have an incentive to provide the requested test data to avoid a more sweeping ban on a chemical’s manufacture. EPA has not indicated whether it will use section 5(e) consent orders to require companies to submit biomonitoring data. Under section 8(d) of TSCA, EPA may by rule require chemical companies to submit lists or copies of existing health and safety studies. TSCA defines such a study broadly as “. . . any study of any effect of a chemical substance or mixture on health or the environment or on both, including underlying data and epidemiological studies, studies of occupational exposure to a chemical substance or mixture, toxicological, clinical, and ecological studies of a chemical substance or mixture, and any test performed pursuant to this chapter.” While the agency has no formal position on whether biomonitoring data can be obtained under section 8(d), an EPA official stated that this provision authorizes the agency to promulgate a rule requiring a company to submit existing biomonitoring data. EPA explained that the presence of a chemical in the blood or tissues of workers could indicate occupational exposure to the chemical, qualifying such information as reportable under this section. Section 8(e) has in recent years garnered more attention than any other section of TSCA as a potential means of collecting biomonitoring information, but this potential remains unclear. Section 8(e) requires chemical companies, on their own initiative, to report to EPA any information they have obtained that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment. “Substantial risk” is currently defined by EPA in nonbinding guidance as “a risk of considerable concern because of (a) the seriousness of the effect, and (b) the fact or probability of its occurrence.” EPA asserts that biomonitoring data are reportable as demonstrating a substantial risk if the chemical in question is known to have serious toxic effects and the biomonitoring data indicate a level of exposure previously unknown to EPA. However, this is the extent of EPA’s current guidance on the subject. Industry has asked for expanded guidance covering specific criteria for when biomonitoring data are reportable, specific guidance on the reportability of occupational biomonitoring results versus biomonitoring results from the general population, and factors that would render biomonitoring data unreportable. EPA has not yet revised its guidance in response to these industry requests. The difficulty of enforcing section 8(e)’s self-reporting requirement is highlighted by the history leading up to an EPA action against the chemical company E. I. du Pont de Nemours and Company (DuPont). Until 2000, DuPont used the chemical PFOA to make Teflon® at a plant in West Virginia. In 1981, DuPont took blood samples of several female workers and two babies born to those workers.
The levels of PFOA in the blood from the babies showed that a measurable amount of PFOA had crossed the placental barrier. DuPont moved its female employees away from work in areas of the plant where PFOA was used. However, after conducting additional animal testing, DuPont concluded that the exposure levels associated with workers posed no reproductive risks and moved the women back into these areas. DuPont did not report the human blood sampling results to EPA, even when EPA requested all toxicology data associated with PFOA. DuPont also did not report to EPA the results of blood testing of 12 people living near the plant, 11 of whom had never worked in the plant and had elevated levels of PFOA in their blood. EPA initially received the 1981 blood sampling information from counsel for a class action lawsuit by citizens living near the West Virginia facility. DuPont argued that none of the blood sampling information was reportable under TSCA because the mere presence of PFOA in workers’ and community members’ blood did not itself support the conclusion that exposure to PFOA posed any health risks. EPA subsequently filed two actions against DuPont for violating section 8(e) of TSCA by failing to report the biomonitoring data, among other claims. In December 2005, EPA and DuPont settled both of these actions. DuPont did not admit that it should have reported the biomonitoring data, but it agreed to a settlement totaling $16.5 million. Furthermore, EPA used the biomonitoring data it received in a subsequent risk assessment, which was reviewed by the Science Advisory Board, together with other information that was available at that time. Upon review, the board suggested that the PFOA cancer data are consistent with the category of “likely to be carcinogenic to humans” described in EPA’s Guidelines for Carcinogen Risk Assessment. As a result of this finding and other concerns associated with PFOA and PFOA-related chemicals, DuPont finally agreed to phase out the use of PFOA by 2015, in tandem with seven other companies. Thus, while EPA ultimately succeeded in using TSCA to remove PFOA from the market, it encountered great difficulty in doing so—that is, even when biomonitoring data, coupled with animal toxicity studies, arguably helped point out serious risks to human health associated with PFOA, DuPont’s position was that section 8(e) did not require it to submit the biomonitoring data it had collected on PFOA. DuPont did not provide the biomonitoring data on its own initiative, and EPA may never have received these data if they had not been originally provided by a third party. Without the biomonitoring information, EPA may never have completed the risk assessment that led to the phaseout of PFOA. Biomonitoring provides new insight into the general population’s exposure to chemicals. However, scientists have linked biomonitoring data with human health effects for only a handful of chemicals to date. As the volume of biomonitoring data continues to increase, EPA will need to strategically plan future research that links environmental contamination, biomonitoring measurements of exposure, and adverse health effects. The nation thus far has no long-term strategy to coordinate the biomonitoring research that EPA and other stakeholders perform. Nor does the agency gather reliable information on the amount of resources needed for addressing data gaps and incorporating biomonitoring research results into its chemical risk assessment and management programs.
In addition, while federal agencies and other stakeholders could pursue various methods to address biomonitoring data gaps, such as routinely collecting biomonitoring data in animal toxicology studies, coordination and agreements among EPA and the various other entities are needed to systematically pursue these options. A national biomonitoring research strategy could enhance the usefulness of biomonitoring data by identifying linkages between data needs and collection efforts and providing a framework for coordinating research efforts and leveraging stakeholder expertise. One of the first steps in interpreting biomonitoring data is to better understand how chemicals affect human health, including how we might be exposed to them and what levels of exposure pose a risk. However, information is sparse on how people are exposed to commercial chemicals and on the potential health risks for the general population. We have previously noted that EPA faces challenges in using TSCA to obtain the information needed to assess the risks of chemicals. These challenges also affect EPA’s ability to require that chemical companies provide biomonitoring data. Such data can provide additional insights on exposure levels and susceptible populations. However, EPA has not determined the extent of its authority to require a company to develop and submit biomonitoring data that may aid EPA in assessing chemicals’ risks, and EPA has not developed regulations or formal guidance concerning the conditions under which biomonitoring data might be required. While EPA has attempted to get additional information on chemical risks from voluntary programs, such programs have had mixed results and are unlikely to be a complete substitute for a more robust chemical regulatory program. To ensure that EPA effectively obtains the information needed to integrate biomonitoring into its chemical risk assessment and management programs, coordinates with other federal agencies, and leverages available resources for the creation and interpretation of biomonitoring research, we recommend that the EPA Administrator take the following two actions: Develop a comprehensive biomonitoring research strategy that includes the data EPA needs to incorporate biomonitoring information into chemical risk assessment and management activities, identifies federal partners and efforts that may address these needs, and quantifies the time frames and resources needed to implement the strategy. Such a strategy should identify and prioritize the chemicals for which biomonitoring data or research is needed, categorize existing biomonitoring data, identify limitations in existing data approaches, and identify and prioritize data gaps. Assess EPA’s authority to establish an interagency task force that would coordinate federal biomonitoring research efforts across agencies and leverage available resources, and establish such a task force if it determines that it has the authority. If EPA determines that further authority is necessary, it should request that the Executive Office of the President establish an interagency task force (or other mechanism as deemed appropriate) to coordinate such efforts. In addition, to ensure that EPA has sufficient information to assess chemical risks, the EPA Administrator should take the following action: Determine the extent of EPA’s legal authority to require companies to develop and submit biomonitoring data under TSCA.
EPA should request additional authority from the Congress if it determines that such authority is necessary. If EPA determines that no further authority is necessary, it should develop formal written policies explaining the circumstances under which companies are required to submit biomonitoring data. We provided a draft of this report to the EPA Administrator for review and comment. EPA generally agreed with our first two recommendations and did not disagree with the third, but it provided substantive comments on how the third should be implemented. We present EPA’s written comments in appendix III. EPA also provided technical comments, which we incorporated into the report as appropriate. The following paragraphs summarize EPA’s comments and our responses. While EPA agreed that it should develop a comprehensive biomonitoring research strategy, the agency noted that its research program is addressing important questions relevant to interpreting biomonitoring data. We agree that EPA is conducting important biomonitoring-related research. However, as noted in our report, while EPA has biomonitoring research projects under way, it has no system in place to track overall progress or assess the resources needed specifically for biomonitoring research. EPA also agreed that an interagency task force is needed to coordinate federal biomonitoring research and said that such a task force should be developed under the auspices of the Office of Science and Technology Policy. We do not disagree with this approach. EPA said that our report underemphasized the importance of considering assumptions about human behavior and the need to collect biomonitoring data for young children. We agree that EPA needs to consider human behavior and other factors that affect human health risk, and we note in the report that EPA uses assumptions about human behavior on the basis of observational studies—such as the time spent outdoors or, for children, the amount of time spent on the floor—to better estimate an individual’s true exposure. We also note that current biomonitoring efforts provide little information on children and that children may be more vulnerable to certain chemicals than adults because (1) their biological functions are still developing and (2) their size and behavior may expose them to proportionately higher doses. In our recommendations, we indicate that EPA should prioritize data gaps, and we believe that the lack of data on children should be a priority. Regarding our recommendation that EPA should determine the extent of its legal authority to obtain biomonitoring data, EPA commented that a case-by-case explanation of its authority might be more useful than a global assessment of that authority. However, we continue to believe that an analysis of EPA’s legal authority to obtain biomonitoring data is critical. Fuller consideration of EPA’s authority is a necessary precondition of the two other recommendations that we make in this report, with which the agency agreed. That is, EPA would be best equipped to formulate a biomonitoring research strategy and contribute to an interagency task force if it were more fully aware of what data it can obtain. Furthermore, while we understand that EPA can clarify its authority to obtain biomonitoring data in individual regulatory actions, few such opportunities have arisen with regard to biomonitoring so far, and EPA provided no information suggesting it will have more opportunities to consider the issue in the near future.
In addition, companies must sometimes submit chemical information independent of an EPA rule requiring submission of the data. For example, under section 8(e), chemical companies must submit certain adverse health and safety information at their own initiative. Such situations do not provide EPA with an initial opportunity to clarify its authority to obtain biomonitoring data. We continue to believe that formal written guidance would be useful in these circumstances. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to other appropriate congressional committees, the EPA Administrator, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine the extent to which the Environmental Protection Agency (EPA) incorporates data from human biomonitoring studies into its assessments of risks from chemicals, we reviewed relevant laws, agency policies and guidance, and our prior reports on EPA’s assessment of chemicals and on EPA’s activities related to children’s health issues. In addition, we reviewed EPA’s prior and planned uses of these data, academic publications, National Academy of Sciences reports, and government- and industry-sponsored conference proceedings to gain an understanding of the current state of biomonitoring research. We supplemented this information with interviews of EPA officials working on biomonitoring and risk assessment issues in the Office of Research and Development, the Office of Children’s Health Protection, the Office of Water, the Office of Air and Radiation, the Office of Pesticide Programs, and the Office of Pollution Prevention and Toxics. To review how EPA addresses challenges that limit the usefulness of biomonitoring data for risk assessment and management activities, we collected documentation on EPA’s biomonitoring-related research efforts, including EPA’s Human Health Research Strategy, and financial and program data for grant programs that have funded biomonitoring research. In addition, we interviewed stakeholders—such as the Centers for Disease Control and Prevention (CDC) and the Children’s Health Protection Advisory Committee, as well as the American Chemistry Council, the Environmental Defense Fund, and the Environmental Working Group—to gauge EPA’s involvement with a variety of stakeholders working to further biomonitoring research. To determine the extent to which EPA has the authority to obtain biomonitoring data from the chemical industry, we reviewed relevant legislation and prior legal actions, and we interviewed officials from EPA’s Office of General Counsel to understand EPA’s authorities for collecting biomonitoring data from companies. We conducted this performance audit from October 2007 to April 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. EPA has programs intended to increase its knowledge of the toxic effects and levels of human exposure to certain chemicals, such as the agency’s Inventory Update Reporting (IUR) rule and voluntary programs like the Voluntary Children’s Chemical Evaluation Program (VCCEP) and the High Production Volume Challenge Program (HPV Challenge Program). However, these programs have significant limitations and no clear link to biomonitoring. For example, EPA’s IUR rule is intended to gather more information on how existing chemicals are used and how they come into contact with people. However, the agency does not collect biomonitoring data as part of this program. Furthermore, in 2003 and 2005, EPA amended the rule in ways that may reduce the amount of certain information that companies report about chemicals they produce. Although the 2003 amendments added inorganic chemicals to the substances for which companies were required to report and required other potentially useful information, the agency also raised the reporting threshold. This threshold is the level of production above which a company must provide data on a chemical to EPA. The agency increased the threshold from 10,000 pounds at a single site to 25,000 pounds, which may reduce the number of chemicals for which companies provide production data to EPA. In 2005, the agency also reduced the frequency with which chemical companies must report their production volume of chemicals. Before 2005, companies were required to report the production volume every 4 years for a chemical that met the reporting threshold in the 4th year. In 2005, the agency changed the reporting requirement so that companies have to report every 5 years, thus reducing the availability of production volume data. As with the earlier rule, companies are only required to report data for a single year, not for any of the years prior to the reporting year. However, EPA officials are considering ways to collect additional production volume information, such as requiring companies to report production volume for each of the 5 years whenever a company meets the reporting requirement of 25,000 pounds of production for the 5th year. EPA did require chemical companies to report some new information when it made these changes in 2003. Companies must now supply additional information relating to the manufacture of the reported chemicals, such as the number of workers reasonably likely to be exposed to the chemical, and relating to the physical form and maximum concentration of the chemical. In addition, for those chemicals produced in quantities of 300,000 pounds or more at one site, companies must now report “readily obtainable” information on how the chemicals are processed or used in industrial, commercial, or consumer settings, including whether such chemicals will be found in or on products intended for children. However, the definition of “readily obtainable” excludes information that requires extensive file searches or surveys of the manufacturers that purchase the chemicals. Furthermore, an industry representative told us that it is often difficult for chemical companies to determine whether a chemical they produce will eventually be used in a product intended for children, since the companies do not directly sell children’s products and may not know how manufacturers will use their product.
Therefore, it is unclear whether EPA will receive significant information as a result of this new reporting requirement. EPA has also attempted to collect data on toxicity and human exposure using voluntary programs. For example, in 2000 the agency launched VCCEP to ensure that it had adequate information to assess the potential risks to children posed by certain chemicals. EPA asked companies that produce or import 23 specific chemicals to volunteer to “sponsor” their chemical by making certain data on the chemical’s toxicity available to the public. The companies volunteered to sponsor 20 of the 23 chemicals. However, VCCEP has proceeded slowly and has not provided EPA with the data needed to interpret biomonitoring research. Of the 23 VCCEP chemicals, EPA has received what it deems to be sufficient data for only 6 chemicals. In addition, it has asked for additional data that some of the sponsors declined to provide. For example, one sponsor declined to conduct additional reproductive toxicity testing for 2 chemicals—testing that EPA needed in order to use biomonitoring data in exposure assessments. Several environmental and children’s health groups, including EPA’s Children’s Health Protection Advisory Committee, have stated that VCCEP has not met its goal of ensuring that there are adequate publicly available data to assess children’s health risks from exposure to toxic commercial chemicals. Specifically, the groups have noted the lack of risk-based prioritization for collecting chemical data; the lack of specific guidance and criteria for the sponsor-developed studies and data; inadequate involvement of stakeholders; and problems with accountability, credibility, and data transparency. In 2008, EPA requested public comments on the VCCEP program and held a listening session. Nonetheless, EPA is still considering what further actions to take and has not set a goal for when it will complete its review of the program. In another voluntary program, begun in 1998, EPA attempted to collect certain information on the health and environmental effects of high production volume (HPV) chemicals, which are those manufactured or imported in amounts of at least 1 million pounds per year. Approximately 3,000 chemicals meet this criterion. Before the start of the program, EPA found that data on basic toxicity were available for only 57 percent of these chemicals, and that the full set of six basic chemical safety tests (i.e., acute toxicity, chronic toxicity, reproductive toxicity, mutagenicity, ecotoxicity, and environmental fate) was available for only 7 percent. This information is necessary for EPA to conduct even a preliminary screening-level assessment of the hazards and risks of these chemicals, and for it to interpret any relevant biomonitoring data. Through the HPV Challenge Program, EPA asked chemical manufacturers and importers to voluntarily sponsor chemicals by submitting information on the chemicals’ physical properties, environmental fate, and health and environmental effects. The agency also asked companies to propose a strategy to fill data gaps. However, the HPV Challenge Program has serious limitations. First, EPA has been slow to evaluate chemical risks. More than a decade after starting the program, the agency has completed “risk-based prioritizations” for only 151 of the more than 3,000 HPV chemicals. Risk-based prioritizations are preliminary evaluations that summarize basic hazard and exposure information known to EPA.
The agency intends to use these evaluations to assign priorities for future action on the basis of the risks presented by these chemicals. Second, data on almost 300 HPV chemicals are lacking because they were not sponsored by any chemical company—these unsponsored chemicals are referred to as “orphans.” The exact number of HPV orphan chemicals changes over time, with changes in sponsorship and production. EPA can require companies that manufacture or process orphan chemicals to conduct tests, but it has done so for only 16 of these almost 300 chemicals. This is largely because it is difficult to make certain findings regarding hazard or exposure, which section 4 of TSCA requires before EPA may issue a “test rule.” However, EPA did issue a second proposed HPV test rule in July 2008 for 19 additional chemicals and anticipates proposing a third test rule in 2009 for approximately 30 chemicals. Third, the HPV Challenge Program does not include inorganic chemicals or the approximately 500 emerging chemicals that reached the HPV production threshold after 1994. EPA recently introduced a proposal for an inorganic HPV program, but officials did not provide us with a date for when they expect to launch this program. Finally, EPA allowed chemical companies to group the chemicals they sponsored into categories and to apply testing data from only a handful of the chemicals to the entire category. Some environmental advocacy organizations have claimed that such categories will not adequately identify the hazards of all the chemicals in the category. Despite the limitations of the available data on toxicity and exposure, EPA plans by 2012 to conduct a basic screening-level assessment of the potential risks of more than 6,200 chemicals and to prioritize these chemicals for possible future action as the first step in its new Chemical Assessment and Management Program. EPA intends to apply the information on chemical hazards obtained from the HPV Challenge Program, among other programs, and extend its efforts to cover moderate production volume chemicals—those produced or imported in quantities of more than 25,000 and less than 1 million pounds per year. EPA plans to use any available biomonitoring data to help prioritize the chemicals for further review but does not have a formal plan for doing so. Although EPA has occasionally used biomonitoring in connection with these voluntary programs, it is not attempting to use these programs as a means to make biomonitoring data more useful. To do so, the agency would not only have to collect data more effectively from companies, but also collect the specific kinds of data that would allow it to understand the human health implications of biomonitoring data. In addition to the contact named above, Ed Kratzer, Assistant Director; Elizabeth Beardsley; David Bennett; Antoinette Capaccio; Crystal Huggins; Karen Keegan; Ben Shouse; and Peter Singer also made important contributions to this report. | Biomonitoring, which measures chemicals in people's tissues or body fluids, has shown that the U.S. population is widely exposed to chemicals used in everyday products. Some of these have the potential to cause cancer or birth defects. Moreover, children may be more vulnerable to harm from these chemicals than adults. The Environmental Protection Agency (EPA) is authorized under the Toxic Substances Control Act (TSCA) to control chemicals that pose unreasonable health risks.
GAO was asked to review the (1) extent to which EPA incorporates information from biomonitoring studies into its assessments of chemicals, (2) steps that EPA has taken to improve the usefulness of biomonitoring data, and (3) extent to which EPA has the authority under TSCA to require chemical companies to develop and submit biomonitoring data to EPA. EPA has made limited use of biomonitoring data in its assessments of risks posed by commercial chemicals. One reason is that biomonitoring data relevant to the entire U.S. population exist for only 148 of the over 6,000 chemicals EPA considers the most likely sources of human or environmental exposure. In addition, biomonitoring data alone indicate only that a person was somehow exposed to a chemical, not the source of the exposure or its effect on the person's health. For most of the chemicals studied under current biomonitoring programs, more data on chemical effects are needed to understand if the levels measured in people pose a health concern, but EPA's ability to require chemical companies to develop such data is limited. Thus, the agency has made few changes to its chemical risk assessments or safeguards in response to the recent increase in available biomonitoring data. While EPA has initiated several research programs to make biomonitoring more useful to its risk assessment process, it has not developed a comprehensive strategy for this research that takes into account its own research efforts and those of the multiple federal agencies and other organizations involved in biomonitoring research. EPA does have several important biomonitoring research efforts, including research into the relationships between exposure to harmful chemicals, the resulting concentration of those chemicals in human tissue, and the corresponding health effects. However, without a plan to coordinate its research efforts, EPA has no means to track progress or assess the resources needed specifically for biomonitoring research. Furthermore, according to the National Academy of Sciences, the lack of a coordinated national research strategy has allowed widespread chemical exposures to go undetected, such as exposures to flame retardants. The development of such a strategy could enhance biomonitoring research and link data needs with collection efforts. EPA has not determined the extent of its authority to obtain biomonitoring data under TSCA, and this authority is untested and may be limited. The TSCA provision that authorizes EPA to require companies to develop data focuses on the health and environmental effects of chemicals. Since biomonitoring data alone may not demonstrate the effects of a chemical, EPA may face difficulty in using this authority to obtain biomonitoring data. It may be easier for EPA to obtain biomonitoring data under other TSCA provisions, which allow EPA to collect existing information on chemicals. For example, TSCA obligates chemical companies to report information that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment. EPA asserts that biomonitoring data are reportable if the chemical in question is known to have serious toxic effects and biomonitoring information indicates a level of exposure previously unknown to EPA. EPA took action against a chemical company under this authority in 2004. However, the action was settled without an admission of liability by the company, so EPA's authority to obtain biomonitoring data remains untested.
|
Fusion is the energy source that powers the sun and stars. Fusion occurs when the nuclei of two light atoms collide with sufficient energy to overcome their natural repulsive forces and fuse together. Scientists are currently using deuterium and tritium—two hydrogen isotopes—for this reaction. When the nuclei of the two atoms collide, the collision produces helium and a large quantity of energy (see fig. 1). For the fusion reaction to take place, the atoms must be heated to very high temperatures—about 100 million degrees centigrade, more than six times the temperature of the sun’s core—and placed under tremendous pressure. For more than 50 years, the United States has been trying to control fusion to produce electricity. The United States is pursuing two paths to achieve controlled fusion—magnetic and inertial. The goal for both approaches is to generate more energy than is needed to begin and sustain the fusion reaction. The world’s first controlled release of fusion power was achieved in 1991, but no fusion device has succeeded in generating more power than it consumes. Magnetic fusion uses magnetic devices to confine a plasma, consisting of electrically charged atoms, and sustain a fusion reaction. ITER will be a magnetic fusion device known as a “tokamak.” To reduce the risk of investing in only one device, DOE’s Office of Fusion Energy Sciences also funds scientific research on alternative types of magnetic devices. Inertial fusion relies on intense lasers or particle beams to heat and compress a small, frozen pellet of deuterium and tritium—a few millimeters in size—that would yield a burst of energy. The lasers or particle beams would continuously heat and compress the pellets, which would simulate, on a very small scale, the actions of a hydrogen bomb. The National Nuclear Security Administration, a separately organized agency within DOE, is leading efforts in inertial fusion because it can be used for defense needs, such as validating the integrity and reliability of the U.S. nuclear weapons stockpile. ITER is considered to be the next step in magnetic fusion. It is an experiment to study fusion reactions in conditions similar to those expected in a future electricity-generating power plant. The goal is to be the first fusion device in the world to produce a substantial amount of net power—that is, to produce more power than it consumes. Specifically, the objective is to produce 10 times more power than is needed to start the fusion reaction in pulses of 5 or more minutes. ITER also will test a number of key technologies, including the heating, control, and remote maintenance systems that will be needed for a fusion power station. ITER is planned to consist of four phases: (1) construction, (2) operation, (3) deactivation, and (4) decommissioning. The construction phase, which is the sole focus of the U.S. ITER Project, began in 2007 (see fig. 2 for an aerial view of construction progress at the ITER site as of June 2013). The international project schedule, as of April 2014, anticipates that the ITER fusion device will be built by 2019 and achieve its “first plasma” in 2020. The next several years are expected to be devoted to a preliminary period of operation in pure hydrogen during which physics testing will be done, followed by operation in deuterium with a small amount of tritium to test ITER’s wall shielding.
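As a point of reference for the deuterium-tritium operations discussed here, the reaction can be written out explicitly. The report itself describes only “helium and a large quantity of energy”; the neutron and the 17.6 MeV energy split below come from standard nuclear physics rather than from the report:

\[
{}^{2}_{1}\mathrm{D} \;+\; {}^{3}_{1}\mathrm{T} \;\longrightarrow\; {}^{4}_{2}\mathrm{He}\;(3.5\ \mathrm{MeV}) \;+\; \mathrm{n}\;(14.1\ \mathrm{MeV}), \qquad Q \;=\; \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}} \;\geq\; 10.
\]

Here \(Q\) is the conventional figure of merit for fusion gain; ITER’s stated objective of producing 10 times more power than is needed to start the reaction, in pulses of 5 or more minutes, corresponds to sustaining \(Q \geq 10\).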
That testing phase will then be followed by the start of full ITER operations in an equal mixture of deuterium and tritium, at which point ITER will be used to try to produce 10 times more power than it consumes. As of April 2014, the international project schedule anticipates the start of deuterium-tritium operations in 2027. ITER’s operation phase is expected to last 20 years, followed by a 5-year deactivation phase and then a decommissioning phase. If ITER is successful, it will lead to power plant design and testing. The estimated cost and schedule of the U.S. ITER Project have grown substantially since the ITER Agreement was signed in 2006 (see fig. 5), and DOE has identified several reasons for these changes. At the time the ITER Agreement was signed, DOE planned on spending $1.122 billion on the U.S. ITER Project and expected to complete the project in 2013, based on preliminary estimates it approved in 2005. In 2008, DOE formally increased its preliminary cost estimate to a range of $1.45 billion to $2.2 billion. This was the most recent time DOE approved a cost estimate for the U.S. ITER Project. Also in 2008, DOE said it expected ITER to achieve first plasma in 2016 and expected the U.S. ITER Project to be completed in 2017. DOE now estimates that the U.S. ITER Project will cost $3.915 billion, that ITER will achieve first plasma in 2023, and that the U.S. ITER Project will be completed in 2033. These estimates were developed based on a set of key assumptions that were not used by DOE in its 2008 estimates. The current estimates include an assumption of annual funding for the U.S. ITER Project of $225 million, starting in fiscal year 2015 and continuing until the project is completed, with no adjustment for inflation; limited U.S. cash contributions to the ITER Organization from 2014 to 2016; future U.S. contributions to the ITER Organization for ITER’s operation, deactivation, and decommissioning phases coming out of the $225 million annual funding level; and a hardware delivery schedule not tied to the ITER Organization’s schedule. DOE also instructed the U.S. ITER Project Office to use its estimate of ITER achieving first plasma in 2023 rather than the 2020 date in the ITER Organization’s schedule, and to use its best estimate for ITER Organization construction costs rather than the budget most recently approved by the ITER Council. Nonetheless, these current estimates remain preliminary, according to DOE officials, because DOE has not approved a performance baseline for the U.S. ITER Project. A performance baseline captures a project’s key performance, scope, cost, and schedule parameters, and it represents a commitment from DOE to Congress to deliver a project within those parameters. According to DOE documents, the current $3.915 billion cost estimate for the U.S. ITER Project includes the following: $1.469 billion (38 percent) to complete the remaining work to procure and deliver U.S. hardware components for ITER; $928 million (24 percent) in contingency to address potential schedule delays or increases in costs for manufacturing components, including $852 million in contingency for the remaining work to procure and deliver U.S. hardware components, and $76 million in contingency for U.S.
cash contributions to the ITER Organization; $541 million (14 percent) to account for project costs incurred through June 2013; $519 million (13 percent) for remaining cash contributions to the ITER Organization to pay for scientists, engineers, and support personnel working at the ITER Organization; the assembly and installation of the components in France to build the reactor; quality assurance testing of all ITER member-supplied components; and contingencies; and $458 million (12 percent) for escalation costs, such as changes in currency exchange rates and commodity prices, which are driven by the extended length of the project due to the funding assumptions used to develop the cost estimate. DOE documents and officials identified several key reasons for the growth of the cost and schedule estimates for the U.S. ITER Project. DOE officials identified a number of reasons that led to the change in the project’s initial 2005 cost estimate of $1.122 billion to its 2008 range of $1.45 billion to $2.2 billion. These included (1) higher estimates for the cost of U.S. hardware components as their designs and requirements were more fully developed; (2) updated estimates for external factors such as currency exchange rates and commodity prices; (3) changes to U.S. hardware component requirements and the international project schedule due to a 2007 design review of the overall ITER project; and (4) additions for contingency and recognized risks. According to DOE documents and officials, the primary reasons the U.S. ITER Project’s cost estimate grew from the upper end of the 2008 range, $2.2 billion, to the current $3.915 billion estimate included the following: Higher estimates for U.S. hardware components as designs and requirements have been more fully developed over time: The design and requirements for U.S. hardware components have evolved over time, and the cost estimates for these components have changed as a result, according to DOE documents and officials. Specifically, the current U.S. ITER Project cost estimate includes about $770 million more than the 2008 figures due to greater understanding of what U.S. hardware components are likely to cost. According to DOE officials, expected hardware costs now reflect more fully developed designs, better industry estimates of the cost of producing those designs, and greater understanding of project risks, among other things. DOE documents noted that, as of February 2014, about two-thirds of U.S. hardware components by value were in final design or beyond, although some smaller components were still in earlier stages of design. According to DOE officials, because of the progress in developing hardware component designs, the project team has a greater understanding of what those components are likely to cost and has reflected that understanding in the current $3.915 billion cost estimate. Higher contingency amounts added to address risks: The U.S. ITER Project Office has increased the amount of contingency in the current cost estimate to address the risks from the project’s significantly longer schedule and to increase confidence in the estimate, according to DOE documents and officials. Specifically, the current estimate includes about $681 million more in contingency than was in the U.S. ITER Project cost estimate in 2008. According to DOE officials, the amount of contingency is based on detailed risk analyses as well as a management assessment of key assumptions.
DOE officials told us that, to develop contingency amounts for the current estimate, the U.S. ITER Project Office identified and evaluated program-level risks, such as changes in currency exchange rates and growth in ITER Organization cash contribution requirements. They also considered project-level risks, such as the U.S. ITER Project’s dependence on other ITER members or the ITER Organization for inputs needed to complete U.S. hardware components and potential procurement and manufacturing difficulties. Further, they added contingency amounts if U.S. ITER Project Office management representatives thought initial contingency amounts were too low for the risks associated with the project’s longer schedule or based on their past experience with large science projects. Schedule delays: U.S. schedule delays due to international project schedule delays and U.S. funding constraints accounted for about $544 million of the increase in the U.S. ITER Project’s cost estimate since DOE last approved the estimate in 2008, according to DOE documents and officials. First, international project schedule delays have lengthened the U.S. ITER Project schedule and have also led to increases in the cost estimate. For example, in 2007, an international review identified extensive changes that were needed in ITER’s design, which significantly delayed the ITER Organization in defining requirements for U.S. hardware components. This in turn created delays in the U.S. ITER Project schedule for procuring and delivering those components, which led to higher cost estimates for those components. Second, U.S. funding constraints resulting from the project’s most recent $225 million per year funding plan and lower-than-requested funding levels in some years have lengthened the U.S. ITER Project schedule, according to DOE documents and officials. This in turn has made it necessary for the U.S. ITER Project Office to build additional amounts into the project’s cost estimate to account for higher escalation costs and the longer period of time the U.S. ITER Project workforce will be needed. For example, DOE officials told us that the project’s most recent $225 million per year funding plan reflected discussions among DOE, the Department of State (State), the Office of Management and Budget (OMB), and the Office of Science and Technology Policy (OSTP) to provide enough funding to meet U.S. obligations to ITER and reduce the amount of the U.S. fusion program budget and the overall DOE budget that had to be devoted to the U.S. ITER Project on an annual basis. However, this lengthened the U.S. ITER Project’s schedule and created procurement inefficiencies, resulting in increases in the project’s overall cost estimate. Higher cash contributions to the ITER Organization due to growth in ITER construction costs: The U.S. ITER Project Office built an additional $348 million into the current cost estimate to reflect the increase in U.S. cash contributions it expects to have to make to the ITER Organization, according to DOE documents and officials. DOE officials explained that this increase includes $169 million for the U.S. share of a previously approved increase in the ITER Organization’s construction budget, as well as $179 million for the U.S. share of a potential future billion-euro increase in the ITER Organization construction budget and the anticipated cost increases for the ITER Organization staff based on a 2023 first plasma date.
The United States has taken on additional hardware responsibilities: DOE agreed to take on additional hardware responsibilities, accounting for $39 million of the increase in the U.S. ITER Project cost estimates since 2008, according to DOE documents and officials. DOE officials told us that there is a cost cap for each of these additional hardware responsibilities, and that the ITER Organization will be responsible for paying any amounts that exceed the caps. As a result, they view taking on these additional hardware responsibilities as a way to reduce uncertainty about future U.S. ITER Project costs because DOE will not have to spend more than the amounts specified in the cost caps for each item. DOE’s current cost estimate for the U.S. ITER Project reflects most of the characteristics of a reliable cost estimate, and its schedule estimates reflect all characteristics of a reliable schedule. However, DOE’s estimates cannot be used to set a performance baseline that would commit DOE to delivering the project at a specific cost and date, primarily because of some factors that DOE can only partially influence. The factors DOE can only partially influence include an unreliable international project schedule to which the U.S. schedule is linked and an uncertain U.S. funding plan. DOE has taken some action to address the factors that have prevented it from setting a performance baseline and finalizing its estimates, but significant challenges remain. DOE’s current cost estimate for the U.S. ITER Project reflects most of the characteristics of high-quality, reliable cost estimates as established by best practices documented in the GAO Cost Estimating and Assessment Guide. In addition, DOE’s current schedule estimates fully reflect the characteristics of high-quality, reliable schedule estimates as established by best practices documented in the GAO Schedule Assessment Guide. According to the guides, four characteristics make up reliable cost estimates—they are comprehensive, well-documented, accurate, and credible (see table 2). Similarly, four characteristics make up reliable schedule estimates—they are comprehensive, well-constructed, credible, and controlled. Cost and schedule estimates are considered reliable if each of the four characteristics is substantially or fully met. If any of the characteristics is not met, minimally met, or partially met, then the estimates do not fully reflect the characteristics of a high-quality estimate and cannot be considered reliable. DOE’s current cost estimate for the U.S. ITER Project—as developed by the U.S. ITER Project Office in August 2013—substantially met best practices for comprehensive, well-documented, and accurate estimates, but only partially met best practices for credible estimates. For example, DOE’s cost estimate substantially met best practices for documenting all assumptions that will influence costs (comprehensive), describing step by step how the estimate was developed (well-documented), and adjusting properly for inflation (accurate). However, the U.S. ITER Project Office did not conduct a complete sensitivity analysis on the cost estimate, and an independent cost estimate has not been conducted (credible). The U.S. ITER Project Office did identify four key assumptions from the estimate for sensitivity testing. However, the analysis did not include some cost elements that represent high percentages of the overall estimate, including some of the most expensive hardware components being built by the United States.
For example, the sensitivity analysis did not include the tokamak cooling water system, which is the most expensive U.S. hardware component. Without a comprehensive sensitivity analysis that identifies how the cost estimate is affected by changes to its assumptions, DOE will not fully understand how certain risks can affect the cost estimate, potentially resulting in decisions based on incomplete information. In addition, DOE did not conduct an independent cost estimate to determine whether other estimating methods produce similar results. DOE policy does not require an independent cost estimate until it approves a performance baseline, which the agency does not expect to occur until late 2015. However, including an independent estimate is a best practice associated with credible cost estimates. Independent cost estimates are less likely to reflect organizational bias. They also incorporate adequate risk, which generally results in more conservative, higher estimates. Without such an independent cost estimate, DOE faces a greater risk of underfunding the project, which can lead to overall cost growth and schedule slippage. (See app. IV for the individual ratings of each cost estimating practice.) DOE’s current schedule estimates for the two most expensive U.S. hardware items—the central solenoid modules and the tokamak cooling water system—fully met best practices for comprehensive schedules and substantially met best practices for well-constructed, credible, and controlled schedules. For example, DOE’s schedule estimates fully met best practices for capturing and establishing the duration of all activities (comprehensive) and substantially met best practices for sequencing all activities (well-constructed), conducting a schedule risk analysis (credible), and updating the schedule with actual progress (controlled). However, the schedule estimates partially met best practices for horizontal and vertical traceability, maintaining a baseline schedule, and ensuring reasonable total float, which is the amount of time an activity can be delayed before the dates of the program’s completion milestones are affected. U.S. ITER Project Office representatives acknowledged these issues with the schedule and attributed them to problems with the international project schedule. For example, according to project representatives, DOE’s schedule does not align with the international project schedule (i.e., it is not vertically traceable) because the international project schedule does not account for delays in ITER Organization delivery milestones, including a 30-month delay in ITER construction site preparations. Without up-to-date, reliable international milestones, DOE cannot develop realistic U.S. milestones that align with the international project schedule and set a baseline that can provide a reliable, specific cost and completion date for the project. (See app. V for the individual ratings of each scheduling best practice.) DOE considers its current cost and schedule estimates for the U.S. ITER Project to be preliminary, and these estimates cannot be used to set a performance baseline that would represent a commitment from DOE to Congress to deliver the project at a specific cost and date. DOE policy says that cost and schedule estimates are considered final only after a performance baseline has been approved for a project, and DOE has not approved a performance baseline for the U.S. ITER Project.
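Total float, introduced in the schedule assessment above, falls directly out of the standard forward and backward passes through a schedule network. The sketch below is a minimal illustration of that calculation with invented activities, durations, and dependencies; it is not drawn from the U.S. ITER Project schedule. An activity's total float is its late start minus its early start, and zero float marks the critical path.

```python
# Minimal critical-path sketch illustrating total float: the number of time
# units an activity can slip before delaying the finish milestone.
# Activities, durations, and dependencies are hypothetical.

activities = {  # name: (duration, list of predecessors)
    "design":    (4, []),
    "procure":   (6, ["design"]),
    "test_rig":  (2, ["design"]),
    "fabricate": (5, ["procure"]),
    "qualify":   (3, ["test_rig", "fabricate"]),
}

# Forward pass: earliest start/finish for each activity. Iteration order works
# here because predecessors are listed before their successors.
early = {}
for name, (dur, preds) in activities.items():
    es = max((early[p][1] for p in preds), default=0)
    early[name] = (es, es + dur)

finish = max(ef for _, ef in early.values())

# Backward pass: latest start/finish that still meets the finish milestone.
late = {}
for name in reversed(list(activities)):
    dur, _ = activities[name]
    successors = [s for s, (_, preds) in activities.items() if name in preds]
    lf = min((late[s][0] for s in successors), default=finish)
    late[name] = (lf - dur, lf)

for name in activities:
    total_float = late[name][0] - early[name][0]
    print(f"{name}: total float = {total_float}")  # zero float => on the critical path
```

In this toy network only test_rig has slack (9 units); the other activities form the critical path, so any slip in them delays the finish milestone directly.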
According to DOE’s project management order, a performance baseline sets a bar against which a project’s progress can be measured. DOE’s target date for setting a performance baseline and finalizing its cost and schedule estimates for the U.S. ITER Project has continually slipped from an original expected date of fiscal year 2007 to the current target of late in fiscal year 2015. That is when DOE expects the international project schedule to be updated, according to DOE documents and officials. According to DOE documents and officials, DOE’s current estimates for the U.S. ITER Project cannot be used to set a performance baseline because of three factors, two of which DOE can only partially influence, as follows: First, the overall international project schedule that DOE uses as a basis for the U.S. schedule is not reliable. In July 2010, the ITER Council approved an official schedule for the overall ITER project. However, an October 2013 management assessment of the ITER Organization determined that the international project schedule established in 2010 was not reliable. The assessment attributed the unreliable schedule, in part, to management deficiencies within the ITER Organization. For example, the assessment found that the ITER Organization’s senior management had insisted that the international project schedule not be changed even when staff had developed what they thought were more realistic schedules, and that staff had not been allowed to openly challenge the schedule. According to DOE officials, the ITER Organization plans to spend the next year reassessing the international project schedule and taking actions to address the identified management deficiencies, and it hopes to complete its schedule reassessment by June 2015. Second, DOE has not proposed a final, stable funding plan for the U.S. ITER Project. DOE’s most recent plan had been to provide a flat $225 million per year for the project, and that figure was the basis for its current cost and schedule estimates. However, DOE officials told us that this funding plan could potentially change depending on the outcome of the ITER Organization’s reassessment of the international project schedule. In March 2014, DOE requested $150 million for the U.S. ITER Project in fiscal year 2015, $75 million less than the $225 million per year funding plan. According to DOE documents, the $150 million request would allow the U.S. ITER Project to meet its fiscal year 2015 commitments to ITER but would not be enough for the project to meet the milestones set in the official international project schedule. Officials noted that if Congress provides less than DOE’s requested funding, the U.S. ITER Project schedule will slip further. Six of the 10 fusion energy and project management experts we interviewed said that identifying sufficient funding to execute the U.S. ITER Project poses a significant management challenge for DOE. The third factor that has kept DOE from setting a performance baseline and finalizing its estimates is within the agency’s direct control. Specifically, an August 2013 internal peer review found that the methodologies used to develop DOE’s current cost and schedule estimates were appropriate, but that the estimates do not sufficiently consider all project risks and uncertainties. For example, the review found that the U.S. ITER Project Office did not identify and quantify all risks, that its view of potential risk mitigation was too optimistic, and that the range of possible cost outcomes due to each individual risk factor was too narrow. Further, the review identified potential cost increases related to changing technical requirements, uncertainty about the ITER Organization’s performance, and the dependence on other ITER members for production of items that are used in U.S. hardware components. To better account for these risks and uncertainties, the review added additional amounts to DOE’s current cost estimate of $3.915 billion and found that the U.S. ITER Project was more likely to cost from $4 billion to $6.5 billion. The reviewers recommended, among other things, that the U.S. ITER Project Office update its risk estimates to be more comprehensive and reevaluate its risk mitigation assessments before DOE approves a performance baseline for the U.S. ITER Project. In the absence of a performance baseline, DOE has developed a 2-year plan for the U.S. ITER Project that sets near-term cost and schedule targets to guide the project’s performance in fiscal years 2013 and 2014. However, this plan does not represent DOE’s commitment to a specific cost and schedule for the U.S. ITER Project as a performance baseline would, although DOE officials told us that the interim 2-year plan does allow the agency to formally monitor project progress. Most of the fusion energy and project management experts we interviewed emphasized the importance of DOE approving a performance baseline for the U.S. ITER Project, with some experts noting that a performance baseline would provide a goal for all project stakeholders to work toward and might ease concerns about the uncertainty of the funding levels needed to complete the project. Several experts also told us that, until DOE approves a performance baseline for the U.S. ITER Project, there will continue to be uncertainty about the project’s direction. DOE has taken some actions to address the factors preventing it from setting a performance baseline that would allow the agency to finalize its cost and schedule estimates for the U.S. ITER Project. However, project management and schedule deficiencies in the ITER Organization and uncertainty in the U.S. ITER Project funding plan continue to pose management challenges for the agency and delay its efforts to set a performance baseline. According to DOE officials, DOE has taken several actions to try to get the ITER Organization to address international project management and scheduling deficiencies. For example, DOE officials told us that their aggressive participation in early ITER Agreement negotiations led to the adoption of a biannual management assessment requirement. This has focused attention on international management deficiencies, resulting in several recommendations for improving the project. Further, DOE officials told us they have used ITER Council Management Advisory Committee meetings to introduce, communicate, and advance project management principles, such as competitive procurement actions, in an effort to improve ITER Organization project management. Additionally, DOE has developed position papers describing the agency’s concerns with ineffective ITER Organization scheduling and management and suggesting actions the ITER Organization could take to develop a reliable international project schedule and improve international management. For example, in a position paper on scheduling issues, DOE recommended the ITER Council direct the ITER Organization to focus on developing a short-term schedule and defer long-term schedule development until lessons are learned from the short-term effort.
DOE has provided the position papers to other ITER members and gained some unofficial support, but the agency has not submitted a formal proposal on the suggested actions to the ITER Council, which could vote on and ultimately require the implementation of these actions. According to DOE officials, DOE has not submitted formal proposals because previous ITER Council Chairs delayed substantive discussions of issues such as schedule slippages and conducted meetings with a primary goal of obtaining consensus among ITER members. However, DOE officials said the ITER Council and other ITER members are aware of DOE’s position, and they hope the new ITER Council Chair, who took over in January 2014, will change the way the ITER Council operates. DOE officials also said that, at its November 2013 meeting, the ITER Council approved an initiative for the ITER Organization to develop a short-term annual work plan for 2014, the results of which will inform long-term schedule development, and that all seven ITER members had met all of their milestones for the plan’s first three months. Even so, challenges remain that will continue to hamper DOE’s ability to develop a baseline for the U.S. ITER Project. Eight of the 10 fusion energy and project management experts we interviewed said DOE does not have enough information from the ITER Organization and other ITER members to effectively plan the U.S. ITER Project, and 7 of the 10 experts said the international structure and management issues contribute to DOE’s management challenges. In this context, DOE’s efforts may have helped improve ITER Organization project management and helped jump-start efforts to develop a reliable international project schedule, but such a schedule is not expected until June 2015, as previously noted. Further, the previous international project schedule developed by the ITER Organization and approved by the ITER Council in 2010 has not proved reliable, and management issues that were identified in previous years continue to pose challenges at the international level. For example, the October 2013 management assessment of the ITER Organization found that the ITER Council had not acted on many recommendations for project management improvements from a previous management assessment in 2009 and that the problems identified in that assessment continue. The most recent management assessment attributed the inaction to the ITER Council’s reliance on consensus decision making, which caused it to avoid or delay difficult decisions. The 2013 assessment stated that the ITER members needed to openly discuss and then make decisions on difficult issues at ITER Council meetings, even if there is no consensus, and that all members should be held accountable for results. The management assessment contained 11 recommendations that were designed to be taken together. The assessment further said that the international project would not achieve significant improvement if the ITER Council adopted only a few recommendations, as has been the case with recommendations in previous management assessments. DOE officials told us that the ITER Council had approved proposals to address the recommendations from the October 2013 management assessment and that the ITER Council Chair was actively monitoring the implementation of the recommendations. To address the uncertainty of the U.S. funding plan for the U.S. ITER Project, DOE has evaluated a range of funding scenarios for executing the project.
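The budgetary logic behind evaluating such funding scenarios can be illustrated with simple, purely hypothetical arithmetic: spreading the same base scope over more years at a fixed escalation rate raises the total in as-spent dollars. The figures below (a notional $2 billion of remaining scope and an assumed 3 percent escalation rate) are invented for illustration and do not reflect actual U.S. ITER Project estimates.

```python
# Hypothetical illustration of why stretching a schedule raises escalation costs.
# All figures are invented and do not reflect actual U.S. ITER Project data.

def as_spent_cost(base_cost, years, escalation_rate):
    """Spread base_cost evenly over `years` and escalate each year's spending."""
    per_year = base_cost / years
    return sum(per_year * (1 + escalation_rate) ** y for y in range(years))

base = 2_000.0  # remaining scope in millions of constant dollars (hypothetical)
rate = 0.03     # assumed 3 percent annual escalation

short = as_spent_cost(base, 8, rate)   # better-funded, shorter schedule
long = as_spent_cost(base, 12, rate)   # flat-funded, stretched schedule

print(f"8-year plan:  ${short:,.0f}M as-spent")
print(f"12-year plan: ${long:,.0f}M as-spent")
print(f"escalation penalty from stretching: ${long - short:,.0f}M")
```

In this toy case the 12-year profile costs roughly $140 million more in as-spent dollars than the 8-year profile. A stretched schedule also extends level-of-effort costs, such as keeping the project workforce in place longer, which the sketch omits; both effects point in the same direction.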
As previously noted, DOE most recently developed a $225 million per year flat funding plan with State, OMB, and OSTP, but DOE officials acknowledged that the plan constrained funding for the project to allow DOE to meet U.S. obligations to ITER and reduce the amount of the U.S. fusion program and overall DOE budgets devoted to the project annually. DOE officials told us the flat funding plan created a long and inefficient schedule, as well as gaps between the design and fabrication stages of some systems, which has led to cost growth. DOE officials said that they will not be able to meet the most recent international project schedule under this funding plan and that delivery of some U.S. components will likely be late. DOE officials told us they plan to develop a final, stable funding plan for the U.S. ITER Project, but that plan can only be developed once the international project schedule is reliable. DOE officials also told us that, to ensure that all risks and uncertainties are sufficiently incorporated into its estimates, the U.S. ITER Project Office held a series of risk workshops. According to DOE officials, as of March 2014, the U.S. ITER Project Office had held risk workshops on U.S. hardware components and associated risks, as well as a workshop on external risks. U.S. ITER Project Office representatives told us that they are currently analyzing the results of the workshops and that the workshops will ultimately lead to updated risk estimates and an update to the current cost estimate of $3.915 billion for the U.S. ITER Project. DOE has taken several actions to reduce the cost of the U.S. ITER Project. Some fusion energy and project management experts we interviewed suggested additional strategies DOE could pursue that might further reduce U.S. ITER Project costs or improve project management. DOE has not adequately planned for the potential impact of U.S. ITER Project costs on the overall U.S. fusion program because it has not completed a strategic plan that would clarify the program’s priorities given those costs. According to DOE documents and officials, DOE has taken several actions that have reduced the cost of the U.S. ITER Project by about $388 million as of February 2014, including the following: Value engineering: The U.S. ITER Project Office has identified ways to design U.S. hardware components that lower costs but maintain the components’ essential functions. According to DOE officials, this strategy—known as “value engineering”—has been an integral part of the U.S. design effort. For example, the U.S. ITER Project Office has been able to reduce the cost of the central solenoid magnet system by more than $18 million by eliminating, simplifying, or reducing the number of some parts. It also has reduced the cost of the vacuum auxiliary systems by almost $34 million by, among other things, revising test equipment items and quantities and reducing the number of system connections and pumps. According to DOE officials, these and other value engineering efforts have resulted in about $225 million in savings as of February 2014. DOE documents indicated that about half of the savings come from value engineering the design of the tokamak cooling water system. Centralized and consolidated procurement for certain items: The U.S. ITER Project Office has agreed to have the ITER Organization centrally procure piping for the tokamak cooling water and vacuum systems for which the United States is responsible.
It has also reached agreement with other ITER members to consolidate procurement of certain common parts rather than having each ITER member procure those parts for their assigned hardware components. For example, the U.S. ITER Project Office has agreed to have the European Union procure all cable trays needed for U.S. hardware components. According to DOE officials, centralized and consolidated procurement for certain items has saved the U.S. ITER Project about $120 million as of February 2014. Scope transfers and reallocations: The U.S. ITER Project Office has reached agreements to transfer some U.S. hardware responsibilities to the ITER Organization and to reallocate some hardware responsibilities among ITER members to improve procurement efficiency and reduce U.S. costs. For example, project officials told us that they reached an agreement to shift the U.S. responsibility for procurement of one system to other ITER members, with the United States taking on more engineering work for that system in return. DOE officials estimated that this resulted in $20 million in savings. According to DOE officials, scope transfers and reallocations have saved about $43 million as of February 2014. Other strategies: According to DOE documents and officials, the U.S. ITER Project Office has used several other strategies to reduce U.S. ITER Project costs. These strategies have included implementing lessons learned and leading practices from other large projects; working with the ITER Organization on cost reduction initiatives to improve ITER Organization processes and requirements; and minimizing costs associated with project execution by, for example, providing incentives in contract provisions. DOE officials said that it was too soon to quantify the cost savings resulting from these actions. According to DOE documents and officials, these cost savings are reflected in the current $3.915 billion cost estimate for the U.S. ITER Project. DOE officials told us that cost containment will continue to be a high priority for the project. They told us that the most significant opportunity to further reduce U.S. ITER Project costs would be the adoption of an optimal funding plan for the project. Officials explained that $458 million of the current $3.915 billion cost estimate is included to cover escalation costs, and an optimal funding plan could potentially reduce those and other costs by allowing the U.S. ITER Project to be completed in a shorter period of time. Some of the 10 fusion energy and project management experts that we interviewed identified strategies DOE could pursue to further reduce the cost of the U.S. ITER Project. The following two strategies were suggested by several of these experts: Six experts suggested that DOE could reduce U.S. ITER Project costs by adopting an optimal funding plan for the project. The optimal funding plan being suggested would involve different dollar figures year to year rather than DOE’s most recent strategy of funding the U.S. ITER Project at a flat $225 million per year starting in fiscal year 2015. Some experts noted that a funding plan that scaled up in the near term and then scaled down in later years could potentially reduce overall U.S. ITER Project costs by hundreds of millions of dollars. Specifically, three experts said that an optimal funding plan could shorten the current U.S. ITER Project schedule and that doing so would reduce overall project costs. Two experts noted that one issue with this strategy would be that the U.S. ITER Project would need to receive more funding in some years, and that could lead to funding cuts for other DOE projects and programs.
To address that issue, one expert suggested DOE not propose an optimal funding plan for the U.S. ITER Project until the agency determines that there is a reliable international project schedule and that the ITER Organization has made significant progress in improving its management of the overall ITER project. Another expert suggested that DOE first communicate its plans for managing the impact on the overall U.S. fusion program of higher U.S. ITER Project funding in certain years before proposing an optimal funding plan. When we discussed this strategy with DOE program officials, they agreed that an optimal funding plan for the project could shorten the U.S. ITER Project schedule and potentially reduce costs significantly. However, they emphasized that the most recent $225 million annual funding plan was an attempt to balance the funding needs of the U.S. ITER Project and the rest of the U.S. fusion program. Three experts suggested that DOE could reduce U.S. ITER Project costs by working with its international partners to develop a reliable international project schedule and aligning U.S. ITER Project efforts with that schedule. One expert noted that this would help make DOE’s cost estimate for the U.S. ITER Project more certain. Another expert told us that a reliable international project schedule was necessary for DOE to set an optimal funding plan, which in turn could help minimize U.S. ITER Project costs. The expert said that aligning the U.S. ITER Project with a reliable international project schedule was necessary to ensure that the United States was not producing its hardware components too early or too late, either of which could result in cost growth. When we discussed this strategy with DOE program officials, they said that the United States has actively presented its views to its international partners on what would be a realistic schedule for the overall ITER project. Further, the officials told us that they do try to align U.S. ITER Project efforts with the international project schedule, even though their knowledge is imperfect because the current international project schedule is not reliable. Some of the 10 experts we interviewed also suggested strategies that DOE could pursue to improve its management of the U.S. ITER Project. The following two strategies were mentioned by several experts: Six experts suggested DOE establish a separate office that would report directly to top DOE management officials to provide oversight of the U.S. ITER Project. Two of these experts said that having a separate office would give the U.S. ITER Project greater visibility at the highest levels of DOE. Another expert said a benefit would be to enhance the project’s interaction with stakeholders, including Congress and the U.S. fusion community. Further, two experts told us that a separate office for the U.S. ITER Project would enhance DOE’s ability to oversee the project’s complexity and complications given the international structure the project operates within. When we discussed this strategy with DOE program officials, they said that a separate DOE office to oversee the U.S. ITER Project would not provide many benefits and could have unintended consequences. Specifically, they said that the project already has high visibility with top DOE management officials and that creating a separate office could result in a greater degree of funding competition between the U.S. ITER Project and the rest of the U.S. fusion program.
Four of the 10 experts we interviewed said DOE should make more information on the U.S. ITER Project available to stakeholders in the U.S. fusion community. One expert said DOE should be more forthcoming about what the agency expects the U.S. ITER Project to cost and how it plans to pay for those costs. Two other experts suggested that DOE should disclose its internal peer reviews of the U.S. ITER Project. Several of the experts we interviewed also identified a number of negative effects of DOE not being sufficiently transparent about the U.S. ITER Project. These included the erosion of stakeholder commitment to the project and a diminished ability for stakeholders to effectively plan research efforts and make informed funding decisions related to the impact of project costs on the overall U.S. fusion program. When we discussed this strategy with DOE program officials, they said they have shared a substantial amount of information on the U.S. ITER Project with stakeholders and were trying to share more. For example, they noted that DOE included a detailed section on the U.S. ITER Project in its fiscal year 2015 budget request. In some cases, however, officials said they were limited in the information they could share with stakeholders by the international sensitivities of the overall ITER project, by procurement sensitivities, and by the nature of the budget process. DOE has not adequately planned for the potential impact of U.S. ITER Project costs on the overall U.S. fusion program, although it has taken some steps. For example, as previously mentioned, the project’s most recent $225 million per year funding plan was an attempt by DOE and other parts of the administration to meet U.S. obligations to ITER and reduce the project’s impact on the U.S. fusion program and DOE’s overall budget. However, according to agency officials, DOE has not completed a strategic plan for the U.S. fusion program to clarify the program’s goals and priorities and its proposed approach for meeting them in light of the potential impact of U.S. ITER Project costs. The House and Senate Appropriations Committees directed DOE to submit a 10-year strategic plan for the U.S. fusion program in the explanatory statements that accompanied both the fiscal year 2012 and fiscal year 2014 energy and water development appropriation acts. (See Explanatory Statement, 160 Cong. Rec. H878 (daily ed., Jan. 15, 2014), to the Energy and Water Development and Related Agencies Appropriations Act, 2014, contained in Division D of the Consolidated Appropriations Act, 2014, Pub. L. No. 113-76; and H.R. Rep. No. 112-331, at 855 (Dec. 15, 2011) (Conf. Rep.).) The fiscal year 2014 explanatory statement further directed that the strategic plan DOE submitted should assume U.S. participation in ITER and assess its priorities for the domestic fusion program based on three funding scenarios. In addition, DOE’s Fusion Energy Sciences Advisory Committee recommended in 2013 that DOE develop a strategic plan for the U.S. fusion program using the advisory committee process and with broad U.S. fusion community input. Further, 9 of the 10 fusion energy and project management experts we interviewed agreed that it would be useful for DOE to develop such a plan. DOE officials told us they have not completed a strategic plan for the U.S. fusion program to date, for three reasons. First, they said an effort in 2012 to obtain the Fusion Energy Sciences Advisory Committee’s input on U.S. fusion program priorities had been unsuccessful because the committee did not address program priorities under a constrained budget scenario due to conflict of interest issues.
Second, DOE officials said there had been too much budget uncertainty in fiscal year 2013 regarding the U.S. ITER Project and the overall U.S. fusion program to complete a plan. They explained that the House and Senate Appropriations Committees proposed different direction and funding levels for the U.S. fusion program in fiscal year 2013, and these differences were not resolved until the passage of the fiscal year 2014 appropriations act in January 2014. Third, DOE officials said an effort in 2012 and 2013 to develop a high-level strategic document for the U.S. fusion program was unsuccessful because OMB did not concur with the document developed by DOE. DOE officials said they are in the early stages of developing a strategic plan in response to the House and Senate Appropriations Committees’ direction in the explanatory statement that accompanied the fiscal year 2014 appropriations act. These officials said they would ask the Fusion Energy Sciences Advisory Committee in April 2014 to provide input on U.S. fusion program priorities by October 2014. According to the officials, DOE will consider the committee’s input in developing a strategic plan for the U.S. fusion program, and it hopes to finalize a plan no later than the January 2015 deadline set by the House and Senate Appropriations Committees. However, DOE officials could not provide a specific date when they expect to complete the strategic plan. We have previously reported that strategic planning is a leading management practice organizations can employ to define their mission and goals; help clarify priorities; address management and other challenges that threaten an agency’s ability to meet its long-term strategic goals; align activities, core processes, and resources to accomplish those goals; and foster informed communication between an agency and its stakeholders. Without a strategic plan for the overall U.S. fusion program that addresses DOE’s plans for managing the impacts of U.S. ITER Project costs, the agency lacks the information needed to involve stakeholders—including Congress and the U.S. fusion community—and create a basic understanding of its plans for balancing the competing demands that confront the program with the limited resources available; to better ensure that the U.S. ITER Project and other U.S. fusion program activities are aligned to effectively and efficiently achieve the program’s goals; and to improve Congress’s ability to weigh the potential trade-offs of different funding decisions for the U.S. ITER Project and the overall U.S. fusion program within a constrained budget environment. DOE has taken some actions to address the factors that affect the reliability of its cost and schedule estimates for the U.S. ITER Project. However, more than 7 years and nearly $700 million after the ITER Agreement was signed, significant uncertainty remains about how much the U.S. ITER Project will cost, when it will be completed, and how DOE plans to manage the impact of the project’s costs on the overall U.S. fusion program. DOE’s current preliminary cost and schedule estimates met most characteristics of high-quality, reliable cost and schedule estimates, but the cost estimate was not fully credible. Specifically, the U.S. ITER Project Office did not include the most expensive U.S. hardware component in its sensitivity analysis, and an independent cost estimate has not been conducted; both shortfalls could result in DOE making decisions based on incomplete information and increase the risk of more cost growth.
It is important for DOE to set a performance baseline for the U.S. ITER Project in order to finalize its cost and schedule estimates, provide a bar against which the project’s progress can be measured, and allow Congress to make well-informed funding decisions about the project within a constrained budget environment. However, DOE has not yet set a performance baseline for the U.S. ITER Project in part because the international project schedule is not reliable, a key factor that DOE can only partially influence. To its credit, DOE has taken several actions to push for a reliable international project schedule and improvements to ITER Organization project management. Nonetheless, the agency could do more to ensure that the ITER Organization develops a reliable international project schedule and that ITER Organization project management deficiencies are addressed by making formal proposals to the ITER Council that address these issues and remaining vigilant about the timely implementation of the proposed improvements. Without a reliable international project schedule, DOE can neither propose a final, stable funding plan for the U.S. ITER Project nor reasonably assure Congress that the project’s cost will not continue to grow and the schedule will not continue to slip. DOE has taken some steps to reduce the cost of the U.S. ITER Project and plan for the impact of the project’s cost on the overall U.S. fusion program. However, even though there has been repeated direction from the House and Senate Appropriations Committees going back more than 2 years and a recommendation from its own advisory committee to do so, DOE has not yet completed a strategic plan for the overall U.S. fusion program. Strategic planning is a leading practice that can help organizations clarify priorities and address challenges that threaten their ability to meet long-term strategic goals. Completing a strategic plan for the overall U.S. fusion program would reduce uncertainty by addressing DOE’s priorities for the program in light of U.S. ITER Project costs. Moreover, involving stakeholders, such as the Fusion Energy Sciences Advisory Committee, in the plan’s development would increase stakeholder understanding of DOE’s plans for balancing the competing demands that face the U.S. fusion program with the limited resources available. DOE is beginning the initial work on such a plan, but a similar effort that was started in 2012 did not result in a completed strategic plan for the U.S. fusion program, and the agency has not provided a specific date when it will complete its current effort. Without committing to a specific date, DOE may not complete a strategic plan for the U.S. fusion program in a timely manner, and without a completed strategic plan, DOE may face challenges ensuring that it has effectively aligned U.S. fusion program activities to achieve program goals. Further, Congress and the U.S. fusion community are likely to remain uncertain about DOE’s plans for balancing the competing funding demands of the U.S. ITER Project and the rest of the U.S. fusion program. To reduce uncertainty about the expected cost and schedule of the U.S. ITER Project and its potential impact on the U.S.
fusion program, the Secretary of Energy should direct the Associate Director of the Office of Fusion Energy Sciences to take the following four actions: Direct the U.S. ITER Project Office to revise and update the project’s cost estimate to meet all characteristics of high-quality, reliable cost estimates. Specifically, the U.S. ITER Project Office should revise the project’s cost estimate to ensure it is credible by including a comprehensive sensitivity analysis that includes all significant cost elements and conducting an independent cost estimate; Develop and present at the next ITER Council meeting a formal proposal describing the actions DOE believes need to be taken to set a reliable international project schedule and improve ITER Organization project management. Continue to formally advocate for the timely implementation of those actions at each future ITER Council meeting until the ITER Council approves an updated international project schedule; Once the ITER Organization completes its reassessment of the international project schedule, use that schedule, if reliable, to propose a final, stable funding plan for the U.S. ITER Project, approve a performance baseline with finalized cost and schedule estimates, and communicate this information to Congress; and Set a specific date for completing, in a timely manner, a strategic plan for the U.S. fusion program that addresses DOE’s priorities for the overall U.S. fusion program in light of U.S. ITER Project costs, and involve the Fusion Energy Sciences Advisory Committee in the development of the plan. We provided a draft copy of this report to DOE for review and comment. DOE provided written comments on the draft report on May 27, 2014, which are reproduced in appendix VI, and also provided technical and clarifying comments, which we incorporated as appropriate. DOE agreed with each of the report’s recommendations and said it has taken steps or plans to take additional steps to fully implement them. We are sending copies of this report to the Secretary of Energy, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Our review assessed: (1) how and why the estimated cost and schedule for the U.S. International Thermonuclear Experimental Reactor (ITER) Project have changed since 2006; (2) the reliability of the Department of Energy’s (DOE) current cost and schedule estimates for the U.S. ITER Project and the factors, if any, that have affected their reliability; and (3) the actions DOE has taken, if any, to reduce U.S. ITER Project costs and plan for their potential impact on the overall U.S. fusion program. To address these objectives and better understand the U.S. ITER Project, we reviewed the ITER Agreement, relevant laws, and DOE guidance and met with DOE and Department of State officials, representatives from the U.S. ITER Project Office, and fusion energy and project management experts from industry, DOE’s national laboratories, and universities. To determine how and why the estimated cost and schedule for the U.S. ITER Project have changed since 2006, we reviewed DOE and U.S. 
ITER Project Office documents. We assessed the reliability of the data on changes in the cost estimates by checking for obvious errors in accuracy and completeness; comparing the data with other sources of information; and interviewing DOE officials and U.S. ITER Project Office representatives who had knowledge of the data. We determined that DOE and the U.S. ITER Project Office’s data on changes in the cost estimates for the U.S. ITER Project were sufficiently reliable for reporting on the reasons for the changes in the estimates. We also contacted the national audit offices of each of the six other ITER members to identify any audit reports they had issued on ITER, and we reviewed each report that was identified. To evaluate the reliability of DOE’s current cost and schedule estimates and the factors, if any, that have affected their reliability, we reviewed DOE’s most recent cost and schedule estimates for the U.S. ITER Project—as developed by the U.S. ITER Project Office in August 2013—and DOE’s internal peer review of those estimates. We also reviewed DOE’s project management order and related guidance, as well as an October 2013 report to the ITER Council on the results of a management assessment of the ITER Organization. We assessed the reliability of DOE’s current cost and schedule estimates by analyzing the August 2013 estimates against the best practices identified in GAO’s Cost Estimating and Assessment Guide (Cost Guide) and Schedule Assessment Guide (Schedule Guide). Specifically, we determined the reliability of the cost estimate by reviewing documentation DOE submitted for the cost estimate, interviewing U.S. ITER Project Office representatives who prepared the estimate, reviewing relevant sources, and comparing the information collected with the best practices identified in the Cost Guide to determine whether the cost estimate was (1) comprehensive, (2) accurate, (3) well-documented, and (4) credible. We assessed the extent to which the cost estimate met these best practices by calculating the assessment rating of each criterion within the four characteristics on a five-point scale: not met = 1, minimally met = 2, partially met = 3, substantially met = 4, and fully met = 5. Then, we took the average of the individual assessment ratings for the criteria to determine the overall rating for each of the four characteristics. The resulting average became the overall assessment as follows: not met = 0 to 1.4; minimally met = 1.5 to 2.4; partially met = 2.5 to 3.4; substantially met = 3.5 to 4.4; and fully met = 4.5 to 5.0. After conducting our initial analysis, we shared it with DOE officials and representatives from the U.S. ITER Project Office who developed the cost estimate to provide an opportunity for them to comment and identify reasons for observed shortfalls in cost estimating best practices. We took their comments and any additional information they provided and incorporated them into the assessments to finalize the scores for each characteristic and best practice. GAO designed the Cost Guide to be used by federal agencies to assist them in developing reliable cost estimates and also as an evaluation tool for existing cost estimates. To develop the Cost Guide, GAO cost experts assessed measures applied by cost-estimating organizations throughout the federal government and industry and considered best practices for the development of reliable cost estimates.
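The five-point scale and averaging rule described above are mechanical enough to express in a few lines of code. The sketch below applies them to a hypothetical set of criterion ratings; the ratings themselves are invented for illustration and do not reproduce GAO's actual scoring of the DOE estimate.

```python
# Illustrative sketch of the rating rollup: individual criterion ratings on a
# five-point scale are averaged to produce an overall rating for a characteristic.
# The criterion ratings below are hypothetical examples.

SCALE = {"not met": 1, "minimally met": 2, "partially met": 3,
         "substantially met": 4, "fully met": 5}

# Overall-assessment bands as stated in the methodology: not met = 0-1.4,
# minimally met = 1.5-2.4, partially met = 2.5-3.4, substantially met = 3.5-4.4,
# fully met = 4.5-5.0.
BANDS = [(1.4, "not met"), (2.4, "minimally met"), (3.4, "partially met"),
         (4.4, "substantially met"), (5.0, "fully met")]

def overall_rating(criterion_ratings):
    """Average the criterion ratings and map the result to an overall band."""
    avg = sum(SCALE[r] for r in criterion_ratings) / len(criterion_ratings)
    for upper, label in BANDS:
        if avg <= upper:
            return avg, label
    raise ValueError("average outside the 0-5 scale")

# Hypothetical characteristic with four underlying criteria:
ratings = ["substantially met", "partially met", "minimally met", "partially met"]
avg, label = overall_rating(ratings)
print(f"average = {avg:.2f} -> overall assessment: {label}")  # 3.00 -> partially met
```

The example shows how a single "minimally met" criterion can pull an otherwise middling characteristic down within its band, which is why the shortfalls on individual credibility criteria matter for the overall rating.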
We assessed the reliability of DOE’s current schedule estimates by analyzing two subordinate schedules—the central solenoid modules schedule and the tokamak cooling water system schedule—that are used as inputs to the integrated master schedule. We selected these two schedules because they are the largest two hardware items, in terms of value, that the United States is responsible for contributing to ITER. We determined whether the schedules were (1) comprehensive, (2) well-constructed, (3) credible, and (4) controlled by reviewing documentation DOE submitted for the U.S. ITER Project schedule estimate, interviewing U.S. ITER Project Office representatives who developed the estimate, reviewing relevant sources, and comparing the information collected against the criteria for each of these characteristics identified in the Schedule Guide. We also analyzed schedule metrics as a part of that analysis to highlight potential areas of strength and weakness against each of our four characteristics of a reliable schedule. In order to assess each schedule against the four characteristics and their accompanying 10 best practices, we traced and verified underlying support, determined whether the U.S. ITER Project Office provided sufficient evidence to satisfy each criterion, and assigned a score based on the same five-point scale we used in our analysis of the cost estimate. Then, we took the average of the individual assessment ratings to determine the overall rating for each of the characteristics, also using the same scale we used in our analysis of the cost estimate. After conducting our initial analysis, we shared it with DOE officials and representatives from the U.S. ITER Project Office who developed the schedule estimate to provide an opportunity for them to comment and identify reasons for observed shortfalls in schedule management best practices. We took their comments and any additional information they provided and incorporated them into the assessments to finalize the scores for each characteristic and best practice. By examining the two subordinate schedules against our guidance, we conducted a reliability assessment of each schedule and incorporated our findings on reliability limitations into the analysis of each subordinate schedule. We were also able to use the results of the two subordinate schedules to provide insight into the health of the integrated master schedule, since the same strengths and weaknesses of the subordinate schedules would transfer to the master schedule. We determined that the schedules were sufficiently reliable for our reporting purposes, and our report notes the instances where reliability concerns affect the quality of the schedules. To examine the actions DOE has taken, if any, to reduce U.S. ITER Project costs and plan for their potential impact on the overall U.S. fusion program, we reviewed DOE and U.S. ITER Project Office documents on actions taken to reduce U.S. ITER Project costs, interviewed DOE program officials about the status of their efforts to complete a strategic plan for the U.S. fusion program, and reviewed meeting records of DOE’s Fusion Energy Sciences Advisory Committee. We also reviewed our prior work on leading practices in federal strategic planning for agency divisions, programs, or initiatives, as well as the House and Senate Appropriations Committees’ direction to DOE in the explanatory statements for the fiscal year 2012 and fiscal year 2014 energy and water development appropriation acts to submit a 10-year strategic plan for the U.S. fusion program.
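The methodology above does not enumerate the specific schedule metrics analyzed, but one common example of such a schedule health check is flagging activities with missing logic links, since activities without predecessors or successors undermine horizontal traceability. The sketch below is a purely illustrative example of this kind of metric, not GAO's actual analysis; the activity data are hypothetical.

```python
# Illustrative schedule health metric: the share of activities missing logic
# links (no predecessor or no successor), which weakens horizontal traceability.
# Activity data are hypothetical.

activities = {
    "A": {"preds": [],    "succs": ["B", "C"]},
    "B": {"preds": ["A"], "succs": ["D"]},
    "C": {"preds": ["A"], "succs": []},   # dangling: no successor
    "D": {"preds": ["B"], "succs": []},   # finish milestone, typically exempt
}

# Start and finish milestones legitimately lack predecessors or successors, so
# a real check would exempt them; here we exempt "A" (start) and "D" (finish).
exempt = {"A", "D"}

missing_logic = [name for name, links in activities.items()
                 if name not in exempt and (not links["preds"] or not links["succs"])]

share = len(missing_logic) / (len(activities) - len(exempt))
print(f"activities with missing logic: {missing_logic} "
      f"({share:.0%} of non-milestone activities)")
```

A metric like this only flags candidates for review; whether a dangling activity is a genuine defect still requires the kind of tracing and verification described above.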
Further, we summarized the results of semistructured interviews with 10 experts in fusion energy and the management of large scientific research projects. To select these experts, we first identified 105 experts by reviewing the results of a literature search; congressional hearings; National Academies of Science publications; membership lists for the Fusion Energy Sciences Advisory Committee, the ITER Council Management Advisory Committee, the ITER Council Science and Technology Advisory Committee, and DOE’s internal peer reviews of the U.S. ITER Project; and recommendations from other fusion energy experts we interviewed. From this list, we then used a multistep process to select 10 experts. To ensure coverage and a range of perspectives, we selected fusion energy and large scientific project management experts from industry, DOE’s national laboratories, and universities. We conducted semistructured interviews with the 10 selected experts using a standard set of questions and analyzed their responses, grouping them into overall themes. We summarized the results of our analysis and then asked DOE program officials for their views on actions suggested by multiple stakeholders to potentially reduce U.S. ITER Project costs or improve U.S. ITER Project management. Not all 10 of the experts answered all of our questions. The views expressed by experts do not represent the views of GAO. Appendix II lists the names and affiliations of the 10 experts we interviewed. We conducted this performance audit from June 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The 10 experts were: Dr. Riccardo Betti, University of Rochester; Dr. Aesook Byon, Brookhaven National Laboratory (retired); Dr. Richard W. Callis, General Atomics; Dr. Adam Cohen, Princeton Plasma Physics Laboratory; Dr. Ray Fonck, University of Wisconsin—Madison; Dr. Charles M. Greenfield, General Atomics; Dr. Martin Greenwald, Massachusetts Institute of Technology; Dr. Richard Hawryluk, Princeton Plasma Physics Laboratory; Dr. Robert Iotti, CH2M Hill (retired); and Dr. Dale Meade, Princeton Plasma Physics Laboratory (retired). The U.S. hardware contributions serve purposes that include: confining, shaping, and controlling the plasma inside ITER’s vacuum vessel (including part of the toroidal field magnet); providing the measurements necessary to control, evaluate, and optimize plasma performance and to further the understanding of plasma physics; limiting the effect of plasma current disruptions on the tokamak vacuum vessel and other components; bringing additional heat to the plasma and depositing heat in specific areas of the plasma to minimize instabilities; providing efficient fueling through the pellet injection system, which delivers hydrogen, deuterium, or a deuterium/tritium mixture as required by plasma operations; exhausting certain parts of the ITER machine and creating low density in ITER’s vacuum vessel and connected vacuum components; supplying the electricity needed to operate the entire ITER plant, including offices and the operational facilities; managing temperatures generated during the operation of the tokamak; and separating tokamak exhaust gases into a stream containing only hydrogen isotopes and a stream containing only non-hydrogen gases. These contributions are planned around a 2023 first plasma date.
“Final design” is the last stage of design development prior to implementation. At the final design stage, the project scope should be finalized and changes should be permitted only for compelling reasons. The cost-estimating best practices rated individually in appendix IV include the following: the cost estimate includes all life-cycle costs; the cost estimate completely defines the program, reflects the current schedule, and is technically reasonable; the cost estimate work breakdown structure is product-oriented, traceable to the statement of work/objective, and at an appropriate level of detail to ensure that cost elements are neither omitted nor double-counted; the estimate documents all cost-influencing ground rules and assumptions; the documentation captures the source data used, the reliability of the data, and how the data were normalized; the documentation describes in sufficient detail the calculations performed and the estimating methodology used to derive each element’s cost; the documentation describes step by step how the estimate was developed so that a cost analyst unfamiliar with the program could understand what was done and replicate it; the documentation discusses the technical baseline description, and the data in the baseline are consistent with the estimate; the documentation provides evidence that the cost estimate was reviewed and accepted by management; the cost estimate results are unbiased, not overly conservative or optimistic, and based on an assessment of most likely costs; the estimate has been adjusted properly for inflation; the estimate contains few, if any, minor mistakes; the cost estimate is regularly updated to reflect significant changes in the program so that it always reflects current status; variances between planned and actual costs are documented, explained, and reviewed; the estimate is based on a historical record of cost estimating and actual experiences from other comparable programs; the estimating technique for each cost element was used appropriately; the cost estimate includes a sensitivity analysis that identifies a range of possible costs based on varying major assumptions, parameters, and data inputs; a risk and uncertainty analysis was conducted that quantified the imperfectly understood risks and identified the effects of changing key cost driver assumptions and factors; major cost elements were cross-checked to see whether results were similar; and an independent cost estimate was conducted by a group outside the acquiring organization to determine whether other estimating methods produce similar results. The scheduling best practices rated individually in appendix V, for both the central solenoid modules schedule and the tokamak cooling water system schedule, include capturing all activities; establishing the durations of all activities; sequencing all activities; confirming that the critical path is valid; ensuring reasonable total float; verifying that the schedule is traceable horizontally and vertically; conducting a schedule risk analysis; updating the schedule with actual progress and logic; and maintaining a baseline schedule. In addition to the individual named above, other key contributors to this report were Dan Haas, Assistant Director; David Marroni; Andrew Moore; and Jacqueline Wade. Important contributions were also made by Cheryl Arvidson, Brian Bothwell, Nikki Clowers, Elizabeth Curda, R. Scott Fletcher, Cindy Gilbert, Jason Lee, Karen Richey, and Barbara Timmerman. | ITER is an international research facility being built in France to demonstrate the feasibility of fusion energy.
Fusion occurs when the nuclei of two light atoms collide and fuse together at high temperatures, which results in the release of large amounts of energy. The United States has committed to providing about 9 percent of ITER’s construction costs through contributions of hardware, personnel, and cash, and DOE is responsible for managing those contributions, as well as the overall U.S. fusion program. In fiscal year 2014, the U.S. ITER Project received $199.5 million, or about 40 percent of the overall U.S. fusion program budget. GAO was asked to review DOE’s cost and schedule estimates for the U.S. ITER Project. This report examines (1) how and why the estimated costs and schedule of the U.S. ITER Project have changed since 2006, (2) the reliability of DOE’s current cost and schedule estimates, and (3) actions DOE has taken to reduce U.S. ITER Project costs and plan for their impact on the overall U.S. fusion program. GAO reviewed documents; assessed DOE’s current estimates against best practices; and obtained the perspectives of 10 experts in fusion energy and project management. Since the International Thermonuclear Experimental Reactor (ITER) Agreement was signed in 2006, the Department of Energy’s (DOE) estimated cost for the U.S. portion of ITER has grown by almost $3 billion, and its estimated completion date has slipped by 20 years. DOE has identified several reasons for the changes, such as increases in hardware cost estimates as designs and requirements have been more fully developed over time. DOE’s current cost and schedule estimates for the U.S. ITER Project reflect most characteristics of reliable estimates, but the estimates cannot be used to set a performance baseline because they are linked to factors that DOE can only partially influence. A performance baseline would commit DOE to delivering the U.S. ITER Project at a specific cost and date and provide a way to measure the project’s progress. According to DOE documents and officials, the agency has been unable to finalize its cost and schedule estimates in part because the international project schedule the estimates are linked to is not reliable. DOE has taken some steps to help push for a more reliable international project schedule, such as providing position papers and suggested actions to the ITER Organization. However, DOE has not taken additional actions such as preparing formal proposals that could help resolve these issues. Unless such formal actions are taken to resolve the reliability concerns of the international project schedule, DOE will remain hampered in its efforts to create and set a performance baseline for the U.S. ITER Project. DOE has taken several actions that have reduced U.S. ITER Project costs by about $388 million as of February 2014, but DOE has not adequately planned for the potential impact of those costs on the overall U.S. fusion program. The House and Senate Appropriations Committees have directed DOE to complete a strategic plan for the U.S. fusion program. GAO has previously reported that strategic planning is a leading practice that can help clarify priorities, and DOE has begun work on such a plan but has not committed to a specific completion date. Without a strategic plan for the U.S.
fusion program, DOE does not have information to create an understanding among stakeholders about its plans for balancing the competing demands the program faces with the limited available resources or to help improve Congress' ability to weigh the trade-offs of different funding decisions for the U.S. ITER Project and overall U.S. fusion program. GAO recommends, among other things, that DOE formally propose the actions needed to set a reliable international project schedule and set a date to complete the U.S. fusion program's strategic plan. DOE agreed with GAO's recommendations. |
Scientists, industry officials, and land managers are recognizing that invasive species are one of the most serious, yet least appreciated, environmental threats of the 21st century. Expanding global trade and travel with countries such as Russia, China, and South Africa have resulted in rapid increases in the rate of introduction and number of newly established invasive species in the United States. While most of the plants and animals that make their way here are benign or even beneficial (for example, cattle, wheat, and tulips are all non-native species), the small proportion that become highly invasive have had huge economic and biological impacts. Damages resulting from invasive species may include power outages; loss of farmland property values; increased operating costs; and loss of sport, game, or endangered species. While the damages caused by these species have been considerable, their precise economic impacts—particularly those that do not damage agriculture, industry, or human health—are not well documented. However, a recent study by Cornell University scientists estimated total economic losses and associated control costs at about $137 billion a year—more than double the annual economic damage caused by all natural disasters in the United States. Because invasive species encompass plants, animals, and microbes, the problems they cause vary. The following examples demonstrate some of their impacts: On rangelands, leafy spurge, an invasive plant from Eurasia, crowds out desirable and nutritious forage, reduces land values, and degrades wildlife habitat. Annual damages from this weed are estimated to exceed $100 million in the Great Plains states. In U.S. forests, 19 of the 70 major insect pests are invasive species. Also, over the past several years, over 6,700 trees were destroyed in New York and Chicago after the discovery of the Asian long-horned beetle, an insect that most likely arrived in packing material or wood from China. According to USDA’s Agricultural Research Service (ARS), if this beetle and other wood-boring pests become fully established in the United States, they could damage industries that generate combined annual revenues of $138 billion. In freshwater habitats, aquatic invasive species, such as the zebra mussel, clog lakes and waterways and adversely affect fisheries, public water supplies, irrigation, water treatment systems, and recreational activities. Great Lakes water users spend tens of millions of dollars annually to control zebra mussels. In saltwater habitats, the European green crab has been associated with the demise of the soft-shell clam industry in New England. The green crab has recently been introduced to the West Coast, where there is serious concern that it could affect shellfish aquaculture and Dungeness crab populations. In 1996, in the most recent estimate available, researchers calculated that the potential economic damage to shellfish production there could be as high as $44 million a year. A threat to humans and animals, the West Nile virus, commonly found in Africa, West Asia, and the Middle East, is an invasive virus now present in 12 eastern states and the District of Columbia. Birds are the natural hosts for this microbe, which mosquitoes transmit from infected birds to humans and other animals. While the ecological impacts of invasive species can be devastating, they are hard to quantify. However, many scientists believe that invasive species are a significant threat to biodiversity—second only to habitat loss and degradation.
Further, they are a major or contributing cause of declines for almost half the endangered species in the United States.

On February 3, 1999, President Clinton issued Executive Order 13112 on invasive species. Among other things, the order requires federal agencies to (1) prevent the introduction of invasive species and (2) detect, respond rapidly to, and control them in a cost-effective, environmentally sound manner. The order established a National Invasive Species Council—chaired by the Secretaries of Agriculture, Commerce, and the Interior—with members including the Departments of State, Treasury, Defense, and Transportation, and the Environmental Protection Agency. The order directs the Council to provide national leadership on invasive species and to see that federal agency efforts are coordinated and effective. The Secretary of the Interior was also directed to form an advisory committee (the Invasive Species Advisory Committee) to provide information and advice to the Council.

The order emphasizes the need for federal and state cooperation, as the states have a key role in managing invasive species within their borders. For example, in fiscal year 2000, Florida—which has a strong invasive species program—spent over $127 million on invasive species activities. States also retain general control over state lands and determine how they will address invasive species on their lands.

The order also states that the Council shall develop recommendations for international cooperation. One effort already underway before the Council was established is the joint U.S./Canadian effort to combat the sea lamprey—an eel-like ocean fish that fastens onto other fish and eats until sated. Since 1956, the two governments have worked jointly through the Great Lakes Fishery Commission to control the spread of this invasive aquatic species, which has had a detrimental impact on the Great Lakes fishery.

The Council was also directed to prepare a National Invasive Species Management Plan. The plan, issued in January 2001, provides a general blueprint for dealing with invasive species and contains 57 recommendations—3 of which focus on rapid response.

The Council's member agencies obligated about $631.5 million in fiscal year 2000 on invasive species-related activities; USDA provided almost 90 percent of this amount. USDA's and particularly APHIS' programs are significant in their breadth and scope. For example, APHIS has jurisdiction over plant pests, certain biological control organisms, the import and export of plant species, and animals and animal diseases considered harmful or a threat to livestock or poultry health. In addition, the Forest Service, which manages about 191 million acres of federal land, has authority for forest and rangeland pest and plant control.

Interior provided the second largest amount of federal invasive species funding—$31.1 million in fiscal year 2000, or 5 percent of the total. Interior agencies—such as the Fish and Wildlife Service, Bureau of Land Management, National Park Service, and Bureau of Reclamation—are involved in regulating the import of animals found injurious under the Lacey Act, enforcing laws and regulations governing the import and export of wildlife into the United States, implementing actions to address aquatic invasive species, and managing invasive species on various publicly owned lands. Defense provided the third largest amount of funding, about 2 percent.
As the fifth largest federal land manager, Defense is responsible for controlling invasive species infestations on its installations and uses native plants to restore Defense lands. In addition, the Army Corps of Engineers spends several million dollars annually to control invasive aquatic plants and zebra mussels and to support research on control technologies for managing these invasive species.

All told, at least 20 different federal agencies share responsibility and authority over some facet of invasive species management. In addition, several interagency groups help coordinate activities in this area. Executive Order 13112 directs the Council to work with the following groups:

- the Aquatic Nuisance Species Task Force, which coordinates activities relating to aquatic invasive species;

- the Federal Interagency Committee on the Management of Noxious and Exotic Weeds, which coordinates weed management efforts primarily on federal lands; and

- the Committee on Environment and Natural Resources of the National Science and Technology Council, which coordinates research efforts.

Invasive species management covers such activities as prevention, detection, control, restoration, research and development, information management, and public education. Prevention—the exclusion of invasive species from the country or from specified regions or ecosystems—is the first line of defense. When prevention fails, successful management often hinges on early detection and rapid response to an invasion. Eradication or containment of invasive species is most efficient, and sometimes only possible, at an invasion's earliest stages. Once an area becomes altered, control activities, which may be costly, are needed to restore the habitat.

Invasive species that threaten agricultural crops or livestock are far more likely to elicit a rapid response than those affecting mainly natural areas. As shown in table 1, APHIS provided most federal rapid response funding—an estimated $125.8 million out of a total $148.7 million reported for fiscal year 2000. About 90 percent of APHIS' funding was directed at invasive species that primarily threaten agricultural crops or livestock; another 9 percent was spent on the Asian long-horned beetle, which primarily threatens forestry. Interior, second among federal departments in total funding for invasive species, estimated that its agencies provided about $1.4 million for rapid response activities. Its rapid responses were directed at species that threaten natural areas.

Many rapid response needs are not being met, according to agency officials and others, particularly for invasive species that threaten natural areas. When these needs are not met, the consequences—to the economy and the environment—can be costly.

Invasive species that threaten crops or livestock are the most likely to be quickly addressed, since APHIS, which is responsible for protecting agriculture from invasive species, does the lion's share of federal rapid response. In fiscal year 2000, APHIS estimated that it spent $125.8 million for rapid response—about 85 percent of the estimated $148.7 million federal agencies spent on this activity. About $113.7 million of APHIS' funding went toward species that primarily threaten crops or livestock. All told, total federal rapid response funding for species that primarily affect agriculture was reported to be about $118 million. Most of APHIS' rapid response funding was spent on relatively few invasive species.
APHIS' biggest expenditure, almost $81 million, was for citrus canker, a highly contagious bacterial disease that affects Florida's citrus crops. This effort entailed tree removal, destruction, and replacement. Another $15 million went toward combating the glassy-winged sharpshooter, an insect that transmits Pierce's disease, a disease of grapevines that threatens California's grape and wine industry.

While APHIS has lead responsibility for responding to invasive species that threaten agriculture, ARS funds research to support these activities. In fiscal year 2000, ARS spent $4.5 million on projects that involved, among other things, developing control methods and identifying species. For example, it spent $900,000 on research to support APHIS' response to the glassy-winged sharpshooter.

As shown in table 1, reported federal funding for invasive species that threaten forestry and other natural areas was about $30 million, compared to the $118 million spent on agriculturally related invasive species. A further breakdown of the $30 million shows that 80 percent of this amount was spent on two species that threaten forestry and related industries—the Asian long-horned beetle and the European gypsy moth. In total, federal rapid response funding for infestations affecting natural areas other than forests (for example, rangelands and aquatic areas) was estimated at $2.9 million for this period.

The Forest Service was the chief contributor to efforts to protect forests (federal and nonfederal) from invasive species, obligating an estimated $15.1 million for rapid response and associated research. Its rapid responses included about $1.8 million for the Asian long-horned beetle and about $10.4 million for the European gypsy moth—an insect that has defoliated, and sometimes killed, hardwood trees in eastern forests. In addition, APHIS spent $11.8 million (about 9 percent of its rapid response funding) on species that primarily threatened forests. Almost all of this funding—about $11.5 million—was spent on efforts to eradicate the Asian long-horned beetle. ARS spent $660,000 on research to support rapid response to this beetle.

Finally, as noted above, rapid response funding for invasive species affecting natural areas other than forestry, such as rangelands or aquatic areas, was about $2.9 million. Interior estimated that it spent about $1.4 million for rapid response aimed at these areas. The Interior agencies that funded rapid response activities included the following:

- the Bureau of Land Management, which funded efforts directed at invasive plants that affect grazing, wildlife, and recreation on rangelands;

- the Fish and Wildlife Service, which funded efforts directed at aquatic invasive species, such as Caulerpa taxifolia, an invasive aquatic plant that threatens native species and fishing in coastal waters, and the round goby, a Eurasian fish that has displaced native fish in parts of the Great Lakes;

- the Bureau of Indian Affairs, which funded efforts directed at invasive plants on lands under its jurisdiction;

- the Bureau of Reclamation, which funded efforts against giant salvinia, an aquatic plant from South America that degrades water quality, kills fish, and chokes out other plants; and

- the U.S. Geological Survey, which funded research supporting rapid response directed at various species, such as the Asian swamp eel, a potential threat to native fish, frogs, and aquatic invertebrates in the Florida Everglades.
The remaining funding for natural area infestations came from the Forest Service (for invasive plants on rangelands), APHIS (for noxious weeds in an Idaho wilderness area and for giant salvinia), ARS (for giant salvinia and three other species), and Commerce's National Oceanic and Atmospheric Administration, which spent $100,000 to support a rapid response to Caulerpa taxifolia.

In interpreting these funding estimates, it should be noted that many agency officials were uncertain as to which activities should be included in rapid response. For example, invasive species, such as leafy spurge, may exist in one area for a long time (where they are subject to control activities) and then appear in a new area where rapid response is required to eradicate them or prevent their spread. For our report, to the extent possible, agencies identified those activities that corresponded to the rapid response definition that we provided. In addition, agencies did not routinely track funding for these activities. The officials, however, believe that their estimates are a fairly accurate representation of their rapid responses.

Some agencies could not provide estimates of their rapid response funding. For example, Defense officials said that while the Department probably does minimal rapid response, it does not track these responses and could not estimate the associated funding. The National Park Service; the Fish and Wildlife Service division that manages National Wildlife Refuges; and USDA's Cooperative State Research, Education, and Extension Service also said they perform or support some rapid response. While these agencies could not estimate their rapid response funding, officials generally stated that it was minimal. Thus, while agency estimates may be somewhat over- or understated, any unreported amounts should not significantly affect the relative magnitude of funding described in this report. (See app. I for further discussion of agencies' funding estimates.)

Officials from USDA, Interior, Commerce, and Defense have reported that many rapid response needs have not been and are not being adequately met. Many unmet needs stem from inadequate resources or attention to the problem. In other instances, rapid response may not have occurred because the infestation was not detected early on, technologies were not available to combat the invasive species, or there was insufficient understanding of the risk posed by the threat. The following examples demonstrate some of these unmet rapid response needs:

- According to Park Service officials, rapid response to invasive weeds in many national parks is inadequate. The Service has 4 invasive plant teams that, among other things, conduct rapid response in 38 parks. However, over 150 additional parks with serious weed infestations have requested coverage by invasive plant teams.

- A Fish and Wildlife Service official said there is minimal rapid response on its over 500 national wildlife refuges, although invasive species are estimated to affect over a third of refuge lands in the continental United States. Moreover, a recent National Audubon Society study assessed 10 wildlife refuges, described as "in crisis," and found that invasive species were damaging biological values in 4 of them. The Service estimates that over $120 million a year is needed to combat invasive species on wildlife refuges.
- A USDA inventory of the nation's private rangelands concluded that at least 69 million acres (about 17 percent) were adversely affected by invasive plants, including unwanted brush.

- APHIS' fiscal year 2001 budget request for $8.8 million for an invasive species program to protect agricultural and nonagricultural resources was not funded. The agency also requested a $1.7 million increase (from $424,000 to $2.1 million) for a noxious weed program that was viewed as an initial step toward a national rapid response system for invasive plants. The program received an increase of about $700,000.

When newly detected invasive species are not addressed in time, the result can be greater federal and state expenditures to control the infestation. In agriculture, invasive species such as the Mediterranean fruit fly and citrus canker are significant pests in terms of control costs. Examples of costly control programs for invasive species that affect natural areas also abound. Commonly cited programs include those aimed at reducing populations of leafy spurge, sea lampreys, hydrilla, zebra mussels, purple loosestrife, and brown tree snakes.

The response to the ruffe, a perch-like Eurasian fish, illustrates the difficulties in mounting rapid response efforts and the economic consequences of not doing so. The ruffe invaded North America in the 1980s through ballast water and soon colonized bays and tributaries along parts of Lake Superior. A rapid response among federal, state, Canadian, and other entities to contain the ruffe foundered because of a dispute over whether to use chemical controls. Although subsequent control efforts have slowed the ruffe's spread, it is expected to reach the warmer waters of the lower Great Lakes fisheries, where its economic consequences may be devastating. For example, the Ohio Great Lakes fishery alone is worth about $600 million a year.

Several federal land managers considered the lack of adequate funding and resources to manage noxious weeds on federal agencies' land shortsighted—a "penny wise, pound foolish" approach. Although 90 percent of the 350 million acres of federal western land are not yet significantly infested, invasive weeds spread at an average rate of about 14 percent a year. When a rangeland infestation becomes severe, the costs of weed control often exceed the land's market value. In 1991, for instance, a 3,200-acre ranch in North Dakota sold at 60 percent below market value because it was infested with leafy spurge. Even when land values deteriorate, weed control is still needed to keep weeds from spreading to nearby areas. The need to deal with invasive species was succinctly summarized by a Bureau of Land Management official, who said, "you can pay now or later, but you will eventually pay sometime."

A major obstacle to rapid response is that there is no national system that addresses all types of invasive species infestations—those affecting aquatic areas, rangelands, and forests as well as crops and livestock. Without such a system, problems that have hampered past rapid response efforts are less likely to be resolved. Further, a national system would help assure that invasive species affecting natural areas receive a level of attention commensurate with their risks. APHIS is the only federal agency with a systematic rapid response process. However, its coverage has primarily been limited to pests affecting crops and livestock.
Other agencies with responsibilities for natural areas, such as those in Interior, face competing demands for their resources and often respond to infestations in an ad hoc manner.

The United States lacks a comprehensive national system for rapidly responding to newly detected invasive species. Among other things, such a system could provide (1) integrated planning to encourage partnerships, coordinate funding, and develop response priorities; (2) technical assistance and other resources; and (3) guidance on effective response measures. Without a national system, recurring problems are less likely to be uniformly addressed. Several problems that we identified—the need for more detection systems; better mechanisms for developing federal, state, and local government partnerships; and improved technologies to eradicate and contain invasive species—are described below.

Rapid response has been significantly hindered by the lack of early detection systems to identify infestations when they are small and most easily addressed. Without early detection, years may pass before an invasive species is discovered or recognized as harmful. Detection of new infestations falls short in several areas.

First, surveillance and monitoring for new invasive species are inadequate. Visual surveys, traps, physical inspection, and water sampling can locate infestations so that they can be mapped and responded to. However, many species are not easily detected because they are microscopic, aquatic, or difficult to recognize as new or invasive. Surveillance is particularly important near high-risk areas (e.g., major shipping ports, airports, and warehouses) where species are most likely to be introduced. For example, some USDA officials believe that the Asian long-horned beetle was in the United States up to 10 years before it was discovered in New York in 1996. As of May 2001, this infestation (the first of five in New York) had resulted in the destruction of over 2,500 trees. Late detection and insufficient surveying of early infestations have made eradication efforts more difficult. Whether the beetle can be totally eradicated is still uncertain.

The Caulerpa taxifolia, or "killer algae," infestation near San Diego is another example of belated detection. Experts believe that this aggressive aquatic plant was likely introduced about 4 years before it was officially reported in June 2000. If it becomes permanently established and spreads, it is expected to have a devastating economic impact on California coastal communities and significant ecological consequences.

Surveillance efforts for new infestations vary among agencies, with APHIS having the most extensive federal system. APHIS systematically monitors for several agricultural pests, including gypsy moths, fruit flies, and cotton boll weevils. In other agencies, surveillance is more limited. For example, Park Service officials said that their four invasive plant teams systematically survey for invasive plants; however, the teams cover only about one-tenth of the parks. Officials from the Fish and Wildlife Service and Bureau of Land Management said they periodically surveyed only a small percentage of their lands for new infestations. (In commenting on our draft report, the Bureau noted that it has an inventory program that monitors and detects weed infestations.) An official from Commerce's National Oceanic and Atmospheric Administration said that few marine or estuarine areas have baseline monitoring data.
Second, increased knowledge of the biology of invasive species is needed to detect and identify new species and assess their potential threats. For example, information on insects' lifecycles can help detect pests at various stages of their development. Similarly, risk assessments of potentially invasive species are needed to prioritize response actions and develop contingency plans. Agencies need to know, for example, whether a species was invasive in other areas, what conditions (e.g., native range, rate of population growth, ability to disperse within a new area) are conducive to its invasiveness, and whether it is a threat to native species. APHIS is working with several scientific organizations (e.g., the Weed Science Society of America) to develop a list of the most potentially serious invasive plant pests for use in targeting detection efforts and developing contingency plans.

Since invasive species ignore boundaries, rapid response often involves coordination among multiple government agencies. The complex interplay among federal, state, and local agencies adds to the potential for inefficiencies in these efforts. In the past, issues concerning leadership, funding, and other organizational responsibilities have hampered such efforts.

The discovery of giant salvinia in the Lower Colorado River in 1999 illustrates some of the pitfalls of rapid response involving multiple jurisdictions. The infestation, found on a river bordering Arizona and California, affected state, tribal, private, and federally managed land. Interior agencies—the Fish and Wildlife Service, Bureau of Reclamation, and Bureau of Land Management—Arizona and California state agencies, local water districts, and other affected parties quickly formed a task force to coordinate action. According to an Interior representative, however, the goal of rapid response evaporated in the face of funding obstacles and disagreements over who should be the lead agency and over appropriate control strategies. Had immediate action been taken, eradication of this infestation would have been possible, according to a science advisory panel and California officials.

Disagreements over funding reportedly contributed to delays in responding to the Asian long-horned beetle as well. Although the beetle was first reported in New York in August 1996, the removal of the first several hundred infested trees was not completed until June 1997, nearly a year later. New York State officials said that their response was delayed because the federal and state officials initially involved in the effort lacked the authority to make funding commitments. Additional delays occurred because of state and local concerns regarding the sufficiency of federal funding available for tree removal and restoration costs.

On the other hand, officials cited several response efforts that exemplified effective partnerships, one being the response to Caulerpa taxifolia. Federal and state participants said the response was effective largely because of the (1) early involvement of a public-private action team that recognized the urgent need for rapid response and (2) active involvement of several key players, including the consulting firm that discovered and treated the infestation, the regional water quality control board, and the state agriculture department. The regional water board was instrumental in obtaining state emergency cleanup and abatement funding, enabling eradication efforts to begin quickly.
Surveying began a day after the infestation was identified; within 2 weeks, an action team was formed and initiated response measures. Initial treatment was completed in 3 months. Periodic monitoring and treatment are ongoing, but it will take years to know whether complete eradication can be achieved.

Executive Order 13112 emphasizes the need for federal agencies to cooperate with states. Many state officials are concerned about what role they will play in a national rapid response system and have differing views on what their roles should be. For example, in commenting on recommendations in the draft invasive species management plan, some states emphasized the need to respect the sovereignty of state, local, and tribal authorities, particularly in managing fish and wildlife within their borders. Others emphasized the importance of a strong national effort to address invasive species, given states' limited ability to address interstate problems.

The rapid response capabilities of states also vary. For example, a 1993 Office of Technology Assessment study reported that most state agencies rated their invasive species implementation and enforcement resources as "less" or "much less" than adequate. Finally, some states, such as Minnesota and Hawaii, have substantial legal structures in place, while others have barely addressed the issue. To develop partnerships in areas relating to agriculture, APHIS has established memorandums of understanding with state departments of agriculture in all 50 states. These agreements define, among other things, federal and state rapid response duties.

An effective rapid response to invasive species requires sufficient information on, and access to, environmentally sound, cost-effective control methods. Many responses fail or are only partially successful because they lack information on how best to control the species or because control methods are unavailable or politically infeasible to use.

Agencies' inability to fund accelerated research on emerging threats has limited the availability of effective control methods. For example, according to Forest Service scientists, research to develop control methods and basic knowledge about sudden oak death, a new and destructive invasive forest disease in California, was delayed by the time-consuming process used to obtain funding. The scientists noted that although $3.5 million was needed for the research, it took 7 months—from late June 2000 until late January 2001—for the Forest Service to obtain about one-third ($1.1 million) of the requested amount from the Commodity Credit Corporation (CCC). Consequently, the Forest Service was unable to develop basic knowledge about this little-known disease as quickly as it would have had the research been fully funded immediately. Furthermore, the Service estimated that it needed an additional $875,000 in fiscal year 2001 for immediate research and development on other emerging invasive threats, such as the exotic spruce aphid, which has caused severe damage to forests in the Southwest.

Likewise, a Geological Survey scientist said that his agency does little rapid research relating to newly detected species because funding is not readily available. He said that research managers must often seek resources from other agencies if they want to initiate research and surveys to support rapid response.
However, according to this scientist, whether the funding comes from within the Survey or from outside it, the amount of time spent obtaining it frequently makes rapid response infeasible.

For certain invasive species, particularly those affecting aquatic areas, environmentally sound control methods are not available. According to a Commerce official, control methods in aquatic areas are much less developed than those in terrestrial settings because (1) awareness of the need for aquatic control methods is relatively recent and (2) industry has little incentive to develop control methods for aquatic areas. Unlike controls used in terrestrial settings, those developed for aquatic areas have few commercial applications; thus, the return on investment tends to be low. This official added that no feasible methods currently exist for controlling some invasive species, such as the spotted jellyfish, which was detected in the Gulf of Mexico in 2000. In other instances, effective chemical pesticides may be available but have not been registered under the Federal Insecticide, Fungicide, and Rodenticide Act for use in aquatic settings. A number of aquatic species—including the zebra mussel, round goby, and ruffe—continue to spread, in part because of the lack of environmentally sound control methods.

Moreover, the number of pesticides available for invasive species control is declining. Under an Environmental Protection Agency ruling, methyl bromide—the major fumigant option used in food and fiber quarantine pest treatments—is scheduled to be phased out by 2005. Reassessment of important pesticides, including malathion and guthion, may result in their being phased out as well.

Finally, control methods are sometimes too costly. For example, in assessing controls to prevent the Asian swamp eel from moving into Everglades National Park, an interagency task force considered installing an electrical barrier. Although this was regarded as the most effective control method available, it was rejected due to its high cost. Instead, the task force chose to test physical removal, which costs less but, according to some task force members, is likely to be less effective.

A federal agency is more likely to respond rapidly to infestations if eradication or containment of invasive species is central to the agency's core mission. An activity that is central to an agency's mission is more likely to have ready access to resources than one that must compete with other important activities. While safeguarding agriculture from invasive pests is a primary mission of APHIS, safeguarding natural areas from invasive species is not specified in other agencies' missions and competes with other important activities for scarce resources. For the most part, responses to such infestations (if they occur at all) happen on an ad hoc basis.

APHIS' mission statement specifically identifies safeguarding agriculture from invasive pests; the agency has clear responsibilities and authorities to respond rapidly to infestations viewed as significant threats to that sector. APHIS' activities in this area have strong constituency backing and receive the majority of rapid response funding. APHIS has authority to take various steps to deal with an emerging invasion.
It has the authority to seize, quarantine, treat, and/or dispose of plants and animals and their products to prevent the importation or interstate movement of plant and animal diseases, pests, and noxious weeds that are new to or not known to be widely prevalent or distributed within and throughout the United States. In the event of a severe disease or pest outbreak that threatens U.S. agricultural production, the Secretary of Agriculture can declare an emergency that, among other things, allows the Secretary to transfer CCC funds to APHIS to pay for eradication activities and to indemnify producers. USDA can also declare, under certain circumstances, an "extraordinary emergency," triggering intrastate authority to address situations in which the measures being taken by a state are inadequate to eradicate a plant pest or noxious weed.

In conjunction with its core mission of safeguarding agriculture from invasive species, APHIS has implemented a systematic process for responding to newly detected plant pests. Its rapid response system includes guidance and procedures, a process for evaluating the risks posed by new plant pests, the ability to take some initial actions within 72 hours, and access to resources and funds for emergency response. Its New Pest Advisory Group, which includes experts from within and outside of APHIS, is responsible for evaluating new or reintroduced plant pests and recommending response actions to a Deputy Administrator. To date, APHIS is the only federal agency to implement such a systematic rapid response process.

USDA's response to Karnal bunt illustrates its ability to react quickly to invasive species. On March 7, 1996, ARS scientists confirmed that the spores on a wheat sample from Arizona were Karnal bunt, a fungal disease of wheat first reported in India. Within 4 days, APHIS officials activated a rapid response team to begin quarantine and survey work. On March 21, 1996, the Secretary of Agriculture announced that he had signed a Declaration of Extraordinary Emergency, which allowed USDA to take a wide range of actions to control and eradicate the fungus, including compensating farmers for losses and imposing quarantines in Arizona and several counties in New Mexico and Texas.

While the Plant Protection Act of 2000 expanded APHIS' authority to address invasive species that threaten natural resources and the environment, APHIS has done relatively little in this area. APHIS has recently revised its mission statement to specifically identify safeguarding natural areas from invasive species; however, APHIS officials said that the agency has been reluctant to respond rapidly to natural area infestations, in large part because it lacks the funding to do so. They noted that the Congress has not responded favorably to APHIS' requests for additional funds to expand its traditional mission. Some USDA and Interior officials said that, in the absence of strong constituency or industry backing, there has been little impetus for the Congress to support an expanded USDA role.

Invasive species that threaten natural areas are generally not subject to processes equivalent to those applicable to agricultural pests. An important reason for this is that while Interior and the Forest Service and, to a lesser extent, entities such as Commerce and Defense have responsibilities for protecting the environment, invasive species are a small part of the activities conducted under their missions.
As a result, competing priorities and other factors have limited their ability to respond to natural area infestations. The Department of the Interior's management of invasive species is limited by several factors:

- If an invasive species affects Interior lands, Interior can use its land management authorities to address the situation as quickly as funding and staffing allow. There are, however, many other environmental issues that compete for Interior's resources, so there is little assurance that resources will be available for responding to invasive species. Unlike USDA, Interior lacks access to another funding source for rapid response.

- Also, unlike APHIS, Interior agencies rarely receive appropriations from the Congress directing them to address specific infestations. Therefore, Interior's invasive species programs tend to focus on control and restoration rather than rapid response. For example, a National Wildlife Refuge official noted that invasive species funding on refuge lands is used for projects identified in previous annual budget cycles. As a result, funds are directed toward recurring or well-established problems rather than toward rapid response.

- Although Interior has authority to conduct control and eradication programs on its lands, its authorities are not nearly as specific as APHIS' invasive species authorities—even in natural areas. APHIS' authorities cover movement into the United States and interstate movement of insects, plant pathogens, exotic plants, and aquatic organisms that might threaten natural ecosystems. In contrast, rather than preventing the spread of invasive species overall, many of Interior's statutes are general land management statutes or protect a particular species or group of species. For example, according to an Interior attorney, the Endangered Species Act may result in actions against invasive species, but such actions would be a byproduct of protecting listed endangered species.

Competing priorities have also limited other agencies' abilities to obtain the resources needed for rapid response. For example, the Forest Service has authority and responsibility for promoting environmental protection of forests and rangelands, including protection against invasive species. However, this particular environmental objective must compete with others for funding, including programs aimed at improving and protecting water quality and quantity and reducing fire hazards near urban areas. Moreover, the Service has additional priorities relating to the human use of these natural resources, such as improving the capability of forests and rangelands to provide products (water, timber, and minerals) and services (recreational opportunities) and improving Service roads and facilities.

According to Forest Service officials, a lack of resources for accelerated research, management, and technical assistance has impeded their efforts to be more actively involved in rapid responses. At the same time, they emphasized that the agency works actively with APHIS and other partners to perform risk assessments and surveys critical to the eradication and control of invasive species in national forests and, in partnership, on other lands. In commenting on a draft of our report, the Forest Service said that when given adequate resources, it has successfully implemented rapid response actions in full cooperation with its partners.
A Defense official said that Defense's response to invasive species has been minimal because the Department does not consider the activity to be directly related to its mission. Although Defense is responsible for managing invasive species on military installations, this official acknowledged that some invasive species are not being addressed. With many competing funding priorities, only the most invasive plants have become rapid response priorities.

The U.S. Army Corps of Engineers also has invasive species responsibilities; it helps manage and remove aquatic nuisance species. For example, the Corps is authorized to implement cost-sharing arrangements with state and local governments for managing nuisance aquatic plants in waterways not under the control of the Corps or other federal agencies. A Corps official said that the lengthy planning studies required for these grants virtually preclude assisting states with rapid response; moreover, this program has not been funded since 1996.

Commerce—through its National Oceanic and Atmospheric Administration—has, as a peripheral part of its mission, responsibility for managing aquatic invasive species. However, according to a Commerce official, only a few of its activities involve rapid response. For example, Commerce resources helped support the rapid response effort to eradicate Caulerpa taxifolia.

Since invasive species that threaten natural areas are not central to any agency's mission, they are more likely to fall through the cracks. A good example of this is giant salvinia, widely regarded as one of the most devastating aquatic weeds in the world. Although APHIS listed giant salvinia as a Federal Noxious Weed in 1983, this aquatic nuisance continues to be sold at commercial nurseries, even in states where its sale is prohibited. Giant salvinia was first reported in the United States outside of cultivation in South Carolina in 1995. According to a retired APHIS official, APHIS was asked to fund the eradication effort but declined; South Carolina's Department of Natural Resources cobbled together sufficient funding to eradicate the infestation. A similar response was absent in Texas, however, where the plant was discovered in 1998. As of March 2001, it had been confirmed in 4 public reservoirs, 7 rivers or streams, 6 river basins, and 27 private lakes in that state. In addition, giant salvinia now occurs in water bodies in Arizona, California, Louisiana, Mississippi, Alabama, North Carolina, Georgia, Florida, and Hawaii.

The Invasive Species Council's management plan, issued in January 2001, provides a broad plan of action, with 57 recommendations covering 9 key areas of invasive species management. Three of the recommendations specifically address rapid response; a number of others address related areas, including early detection. In general, the plan's rapid response recommendations call for developing a coordinated rapid response program; developing draft legislation for rapid response, with the possibility of permanent funding; and expanding regional networks of invasive species databases. At the same time, the Council acknowledges that many of the recommendations lack specificity and will require further development before they can be implemented. (See app. III for details on the rapid response recommendations and the Council's actions and planned actions to address them.) Taken in their entirety, the plan's recommendations would appear to address the obstacles to rapid response described in our report.
These include, first and foremost, the need for a national rapid response system to provide guidance, technical assistance and other resources, and integrated planning. Other obstacles that we identified in this report include the need for (1) additional detection systems; (2) improved partnerships among federal, state, and local agencies; and (3) enhanced technologies for eradicating invasive species. Specifically, the Council's plan calls for the following:

- A national system. The plan recognizes the need for a system that would provide, among other things, for rapid response to new invasions. It recommends that by July 2003, the Council develop a program of coordinated rapid response to new invasions of natural and agricultural areas and pursue increases in discretionary efforts to support the program. The Council is to coordinate with other federal, state, local, and tribal agencies in developing the program. According to Council staff, a working group of representatives from the Council's member agencies will be responsible for implementing this recommendation in cooperation with other stakeholders. The working group is to be established before the end of August 2001.

- Developing additional early detection systems. The plan has one recommendation aimed at improving the detection and identification of new invasive species. The recommendation contains a series of steps, including (1) compiling a list of taxonomic experts; (2) developing new methods for detecting pathogens and parasites; (3) instituting systematic surveys of high-risk locations; (4) developing a more user-friendly approach to identifying and reporting invasive species; and (5) developing—for use on the Internet—an early detection module that will provide information on invasive plants.

- Developing stronger partnerships. The plan emphasizes the need to build partnerships with state and local entities, improve coordination, and resolve jurisdictional issues. Moreover, many recommendations incorporate consultations with states and other affected parties as part of the implementation process. For example, regarding rapid response, the plan calls for the Council to develop—in consultation with the states—draft legislation, including the possibility of a permanent funding mechanism and matching grants to states to develop strong partnerships. Other recommendations call for developing (1) clearly defined processes and procedures to help resolve jurisdictional and other disputes regarding invasive species and (2) a national public awareness campaign emphasizing public and private partnerships. These are only a few examples of the initiatives aimed at developing stronger partnerships.

- Improving technologies for use in rapid response. The plan calls for developing and testing methods to determine which rapid response measures are most appropriate for specific situations. In addition, the plan recommends (1) preparing a catalog of existing aquatic and terrestrial control methods and proposing strategies to determine their effectiveness in different U.S. habitats; (2) establishing and coordinating a long- and short-term research capacity (ranging from basic to applied research) on invasive species; and (3) as part of a cross-cutting budget proposal for fiscal year 2003, including an initiative to adequately fund federal invasive species research programs.

Since the plan is relatively new, implementation of its recommendations is just getting underway.
The Council has, however, taken steps to establish priority areas for implementation, rapid response being one of these areas, according to its executive director.

Some non-native species arrive in the United States as accidental tourists; others are brought in purposely—for example, to beautify gardens or as fish or game for sportsmen. However, one thing invasive species have in common is that their numbers are increasing dramatically. The explosive growth of invasive species has been accompanied by an increased awareness of the threat they pose and the damages they cause. However, heightened awareness has not yet resulted in a systematic national approach to rapid response. As a result, opportunities for eradicating potentially devastating invasive species continue to be lost.

Currently, if an invasive species is a serious threat to agricultural crops or livestock, there is a good chance that APHIS will address it in some way. APHIS has a process in place for evaluating new invasive species and obtaining resources for responding to serious threats. On the other hand, if an infestation threatens primarily natural areas, the odds of a rapid response are significantly lower. For these infestations, it is sometimes uncertain which, if any, agency will take the lead; ready access to funds is often a problem; and generally no one agency is held accountable if the infestation spreads.

At this point, it is unlikely that a single agency, such as Agriculture or Interior, will unilaterally develop a systematic process for evaluating and rapidly responding to invasive species that threaten natural areas. Without specific responsibility for rapidly responding to natural area infestations and resources to implement such a program, agencies have little impetus to take on this role. Thus, we believe that a coordinated approach to rapid response nationwide offers the best opportunity for ensuring that invasive species of all types get a level of attention commensurate with their risks. Such a system would bring federal agencies and other stakeholders to the table to address invasive species as a national problem—one that requires integrated planning, resources, and guidance.

The Invasive Species Council's management plan provides a structured framework for dealing with the threat of invasive species nationwide. The plan covers activities on many fronts—from prevention to educational outreach—and will likely take many years to implement fully. As a result, the plan's recommendations will need to be implemented incrementally. In this regard, we agree with the Council's decision to treat rapid response as an area requiring priority attention. Rapid response provides an excellent target of opportunity, offering the potential to save millions of dollars in damages and control costs and to preserve natural habitats and native species. We believe that if the recommendations are properly implemented, they will go a long way toward developing a systematic national approach to rapid response.

At the same time, while a concerted effort is clearly needed to slow the onslaught of invasive species, we believe that before drafting rapid response legislation and requesting increases in funding, the Council needs to clarify several fundamental issues. In particular, many agency officials are uncertain as to what types of activities should be considered rapid response and, consequently, how much funding their agencies devote to that activity.
In order to make a convincing case for additional legislation or resources, the Council must first define rapid response and obtain a solid understanding of how much federal funding is already being directed toward this activity. Only then will the Council have a sound basis for determining future needs.

We recommend that the co-chairs of the Invasive Species Council—the Secretaries of Agriculture, Commerce, and the Interior—direct the Council members to:

- Develop criteria for what constitutes a rapid response, including examples of activities that fall into that category.

- Based on the criteria established above, develop information on their Departments' rapid response funding and the programs and activities that receive funding.

- In consultation with the Invasive Species Advisory Committee, establish rapid response priorities to help identify resource needs and guide the discretionary actions of agencies in addressing invasive species.

We provided a draft copy of this report for review and comment to the Departments of Agriculture, Commerce, and the Interior and to the Invasive Species Council. We met with the Council's staff and the three departmental liaisons, who provided comments from their respective Departments and agencies: Agriculture (Agricultural Research Service, Animal and Plant Health Inspection Service, Forest Service, and Natural Resources Conservation Service); Commerce (National Oceanic and Atmospheric Administration); and Interior (Bureau of Land Management, Bureau of Reclamation, Fish and Wildlife Service, Minerals Management Service, National Park Service, and U.S. Geological Survey). The Departments, agencies, and the Council's staff generally agreed with the substance of our report and with our recommendations. A major theme running throughout the comments was the impact of inadequate resources on agencies' ability to respond rapidly to new infestations and the need for additional funding to develop an effective rapid response capability. They also provided technical comments, which we incorporated throughout our report as appropriate. Appendix IV provides a summary of the major points raised in the comments and our response, as appropriate.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees and members; the Executive Director of the National Invasive Species Council; the co-chairs of the National Invasive Species Council (the Secretaries of Agriculture, Commerce, and the Interior); and the other Council members. We will also make copies available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-3814. The key contributors to this report are listed in appendix V.

To determine the extent of federal rapid response to new invasive species, we reviewed the activities of the federal agencies responsible for invasive species activities and asked the agencies that conducted rapid response for data on which species they rapidly responded to and the related obligations for fiscal year 2000. The following agencies provided funding estimates for their rapid response efforts:
- U.S. Department of Agriculture: the Animal and Plant Health Inspection Service, the Agricultural Research Service, and the Forest Service;

- Department of the Interior: the Bureau of Indian Affairs, the Bureau of Land Management, the Bureau of Reclamation, the Fish and Wildlife Service, and the U.S. Geological Survey; and

- Department of Commerce: the National Oceanic and Atmospheric Administration.

Several agencies did not provide funding estimates for rapid response. The Department of Defense; Agriculture's Cooperative State Research, Education, and Extension Service; APHIS' Wildlife Services program; and Interior's National Wildlife Refuge System, Coastal Program, and National Park Service do not track budget information on their rapid response activities and could not estimate funding for these activities. Officials from Transportation, the Environmental Protection Agency, Agriculture's Natural Resources Conservation Service, Interior's Minerals Management Service, and Defense's Army Corps of Engineers said that although their respective organizations conducted invasive species activities, they did not perform rapid response in fiscal year 2000.

Agencies' reported obligations may be under- or overstated for several reasons. First, officials said that rather than using a specific fund for rapid response activities, their agencies rely, at least in part, on programmatic and contingency funds that support many activities. Agencies do not routinely track the rapid response portion of this funding. While much of APHIS' funding for rapid response is transferred from the CCC, it also relies on programmatic and contingency funds. The basis for agencies' funding estimates ranged from analyses of funding records to an agency official's informed opinion. The Bureau of Indian Affairs, Bureau of Reclamation, Forest Service, Agricultural Research Service, National Oceanic and Atmospheric Administration, and U.S. Geological Survey listed the rapid response activities that they funded; the Fish and Wildlife Service provided funding information on its rapid responses to aquatic nuisance species; and the Bureau of Land Management estimated that its rapid response funding was 8 percent of its total invasive species obligations. Second, the agencies were somewhat uncertain as to which activities to include in rapid response. To facilitate consistency among the agencies, we provided a definition of rapid response as "a response carried out in time to contain or eliminate potentially damaging invasive species—the actual time required for rapid response varies depending on the species." We also worked with the agencies while they prepared their data to further ensure consistency.

We did not verify the accuracy of the agencies' data. However, we did compare their data with other available data in an effort to identify inconsistencies. We resolved all substantive inconsistencies with agency budget and program officials.

To determine the obstacles to rapid response, we interviewed officials and scientists and obtained plans, status reports, budget requests, and other documents from the agencies cited above and from the Department of Transportation, Environmental Protection Agency, Smithsonian Institution, U.S. Army Corps of Engineers, and Invasive Species Council staff. We also interviewed representatives of, and reviewed documents from, two interagency groups: the Aquatic Nuisance Species Task Force and the Federal Interagency Committee for Management of Noxious and Exotic Weeds.
In addition, we obtained views on obstacles from representatives of state agricultural or natural resource agencies in California, Florida, Hawaii, Minnesota, and Texas and from nonprofit organizations involved with invasive species efforts, including the American Lands Alliance, Nature Conservancy, and Charles Valentine Riley Memorial Foundation. We selected these states because agency officials stated that they have significant invasive species problems and/or strong and innovative invasive species programs. In addition, we analyzed studies, reports, the National Invasive Species Management Plan and public comments on the plan, and other documents describing invasive species response systems, problems, and obstacles to more timely rapid response.

To review the actions of federal agencies in greater detail, we analyzed four invasive species threats—the Asian long-horned beetle, Asian swamp eel, Caulerpa taxifolia, and giant salvinia. Agency officials identified these invasive species as being serious threats and relatively recent introductions into the United States. Furthermore, these infestations have received varying levels of rapid response from federal agencies.

To determine how federal agencies can improve rapid response, we interviewed officials from the entities cited above to obtain their views on solutions to the obstacles impeding rapid response. In addition, we interviewed invasive species experts at several universities. We analyzed and synthesized recommendations obtained in interviews and from reports, plans, documents, and literature relating to rapid response. We also reviewed invasive species legislation and Executive Order 13112 and analyzed the rapid response recommendations in the National Invasive Species Management Plan. We performed our work from October 2000 through May 2001 in accordance with generally accepted government auditing standards.

The federal Departments that provided estimates of their rapid response obligations for fiscal year 2000—Agriculture, Interior, and Commerce—also provided information on the invasive species that they rapidly responded to in that period. For Agriculture's APHIS and ARS and Interior's Fish and Wildlife Service and Bureau of Land Management, the invasive species listed are ordered by the amount obligated, from largest to smallest. The information provided by Agriculture's Forest Service and Interior's Bureau of Indian Affairs and U.S. Geological Survey did not allow for such ordering.

Animal and Plant Health Inspection Service: Citrus bacterial canker, glassy-winged sharpshooter/Pierce's disease, Mediterranean fruit fly, Asian long-horned beetle, plum pox virus, West Nile virus, transmissible spongiform encephalopathy in sheep, olive fruit fly, Asian gypsy moth, giant salvinia, pink hibiscus mealybug, federally listed noxious weeds, rabbit calcivirus disease, screwworm.

Agricultural Research Service: Glassy-winged sharpshooter/Pierce's disease, brown citrus aphid, citrus psylla, papaya mealybug, pink hibiscus mealybug, Asian long-horned beetle, plum pox virus, Karnal bunt, sorghum ergot, tropical soda apple, giant salvinia, West Nile virus, Caulerpa taxifolia, yellow unicorn plant, elongate mustard, blissid cinchbug, waterlettuce.
Forest Service: European gypsy moth, Asian long-horned beetle, hemlock woolly adelgid, Port-Orford-cedar disease, Asian gypsy moth, pine shoot beetle, sudden oak death, pink hibiscus mealybug, giant salvinia, yellow starthistle, purple loosestrife, Dyer's woad, leafy spurge, spotted knapweed, Canada thistle, orange hawkweed, Dalmatian toadflax, rush skeletonweed, whitetop, Miconia, banana poka, cheatgrass, Scotch broom.
Fish and Wildlife Service (aquatic species): Caulerpa taxifolia, Asian swamp eel, zebra mussel, brown tree snake, round goby, New Zealand mud snail, ruffe.
Bureau of Indian Affairs: Cogongrass, purple loosestrife, Russian knapweed, musk thistle.
Bureau of Land Management: Giant salvinia, yellow starthistle, purple loosestrife, Dyer's woad, squarrose knapweed, salt cedar, leafy spurge, spotted knapweed, Canada thistle, Scotch thistle, and others.
U.S. Geological Survey: Asian swamp eel; giant salvinia; garlic mustard; round goby; black, silver, and bighead carp; green mussel; zebra mussel; other aquatic invasive species.
Bureau of Reclamation: Giant salvinia.
National Oceanic and Atmospheric Administration: Caulerpa taxifolia.
The Invasive Species Council's management plan contains three recommendations that specifically address rapid response. The recommendations and the Council's stated and planned actions to address them are as follows:
1. Starting in January 2001, Interior (especially U.S. Geological Survey/Biological Resources Division) and USDA, in cooperation with the National Science Foundation and Smithsonian Institution, will expand regional networks of invasive species databases (e.g., the Inter-American Biodiversity Information Network) and produce associated database products, and will cooperate with the Global Invasive Species Programme and other partners to establish a global invasive species surveillance and rapid response system.
Actions Taken to Address Recommendation: Interior's U.S. Geological Survey received a grant in September 2000 from the U.S. Department of State to (1) provide technical assistance in implementing the Inter-American Biodiversity Information Network and (2) convene a meeting in conjunction with the Global Invasive Species Programme and provide seed funding for regional hubs in Mexico and South Africa. The meeting, a workshop on developing regional invasive species information hubs, was held in February 2001. It brought together scientists from Africa, North America, and international organizations who are working on ways to facilitate invasive species efforts by strengthening taxonomic services and/or information networks.
2. By July 2003, the Council, in coordination with other federal, state, local, and tribal agencies, will develop a program for coordinated rapid response to incipient invasions of both natural and agricultural areas and pursue increases in discretionary spending to support this program.
Actions Planned to Address Recommendation:
Establish interagency invasive species “rapid response” teams that include management and scientific expertise. Teams will focus on taxonomic, ecosystem, and regional priorities and will coordinate with local and state governmental and non-governmental efforts, including standing and ad hoc state invasive species councils.
Develop and test methods to determine which rapid response measures are most appropriate for a given situation.
Review and propose revisions of policies and procedures (e.g., advance approval for quarantine actions, pesticide applications, and other specific control techniques, and interagency agreements that address jurisdictional and budget issues) concerning compliance with federal (e.g., Clean Water Act, National Environmental Policy Act, Endangered Species Act) and non-federal laws that apply to invasive species response actions. The proposed revisions will be made available for public comment and will take into account local and state requirements.
Prepare a guide to assist rapid response teams and others; the guide will incorporate the methodology developed for response measures, as well as guidance on (1) regulatory compliance and (2) jurisdictional and budget issues.
3. Within fiscal year 2003 budget development, the Council, in consultation with the states, will develop and recommend to the President draft legislation for rapid responses to incipient invasions, including the possibility of permanent funding for rapid response efforts as well as matching grants to states in order to encourage partnerships. The recommended legislation will augment existing rapid response mechanisms.
Action Taken to Address Recommendation: The Council is seeking recommendations from its member agencies for nominees to a working group that will draft legislation.
The following summarizes the key points raised in comments on a draft of our report by the Departments, agencies, and Council staff, along with our responses, as appropriate. Agriculture's APHIS agreed with the need to develop criteria for what constitutes a rapid response. The Forest Service noted that (1) it has full authority to respond to invasive species on national forests and in partnership on other lands and that its response has been limited by inadequate resources, not by a lack of authority, as suggested in our report; (2) our report does not discuss the impediments to rapid response resulting from compliance with the National Environmental Policy Act; and (3) regarding the statement in our conclusions that “a national rapid response system offers the best opportunity for ensuring that invasive species ... get a level of attention commensurate with their risks,” Executive Order 13112 and the National Invasive Species Management Plan endorse building on existing strengths, not creating new structures, to enhance coordination and program response to invasive species. First, we agree that the Forest Service has the authority to rapidly respond to invasive species under the conditions it described. However, having authority and having the resources to carry out that authority are not the same thing. In particular, we believe that the ability to obtain resources for rapid response is related to the centrality of invasive species to an agency's mission. Addressing invasive species is one of many important Forest Service responsibilities; however, it is not specifically identified in the Forest Service's mission, as it is for APHIS. Second, regarding the National Environmental Policy Act, agency officials we interviewed during our review had differing views on the extent to which compliance with the act hindered rapid response, with some believing that adequate planning could minimize the impediments and others maintaining that the act was a major hindrance. While we agree that compliance with the act may slow rapid response in some circumstances, we believe that any impediments it creates are not of the magnitude of those described in our report.
Finally, we agree with the Forest Service that a national system for rapid response should be built on existing strengths, and we do not mean to imply otherwise. In fact, our conclusions note that the Council's plan provides a structured framework for the treatment of invasive species nationwide and that, if its recommendations are properly implemented, they will go a long way toward developing a systematic national approach to rapid response. Commerce said that the report was well written and accurate in its discussion of the difficulties in rapidly responding to invasive species. Commerce also commented on the problems posed by resource shortages. It noted that rapid response needs in aquatic ecosystems are unpredictable; in some years there may be no need to mount a rapid response effort, while in others several serious invasive species may be introduced. Given this variability, most of Commerce's invasive species funding is directed toward preventing and controlling invasive species that have been identified in advance. Commerce further noted that a rapid response to a new, potentially serious infestation may require large amounts of money and extensive reprogramming of funds committed to other priority areas. Interior said the report was well written and generally accurate in its observations of Interior's program efforts to support rapid response. Interior also said that the report will focus congressional attention on the opportunity to clarify authorities (particularly in interjurisdictional response efforts) and to consider multi-year emergency response funding for such harmful, unpredictable invasions. Interior also noted that (1) the shortage of resources in the land and water management activities of the bureaus continues to be exacerbated by broadening mission goals; (2) there is an increasing need for technological improvements to enhance monitoring and rapid assessment of priorities for action; (3) planning processes have not yet fully integrated state and local stakeholders into regional or statewide rapid response contingency plans; and (4) an important aspect of assessing the true risk and cost of invasive species to natural areas is the ability to assess the economic value of wildlife habitat and recreational losses resulting from plant infestations, an area that lags well behind agronomic assessment. The Invasive Species Council staff said that the report covered a high-priority area for the Council. They further noted that the key issue concerning rapid response is readiness and that a consistent and universal agency complaint is that even when an infestation is detected early, the lack of coordination and of a contingency fund or funds-transfer mechanism are major obstacles to quick action. They added that our report's recommendations did not reflect the need for a flexible contingency funding mechanism. Regarding our first recommendation (developing criteria for what constitutes a rapid response), the Council staff agreed that the definitions of rapid response vary even among the Departments surveyed in our review. They also said that while this recommendation can be implemented relatively quickly, it should not be the primary focus of the Council actions put forth in our report. Regarding the second recommendation (developing information on Departments' rapid response funding and the programs that are receiving funding), the Council staff said that work on this effort is already underway.
Finally, they suggested that we recommend that the Council fully implement the National Invasive Species Management Plan's recommendations regarding early detection and rapid response. We appreciate the need that the Council staff and many agencies expressed for additional funding and a flexible funding mechanism to rapidly address new invasions. Our report documents some of the consequences of inadequate resources for addressing some invasive species. At the same time, we believe that the funding issue is ultimately a policy concern that is best addressed by congressional decisionmakers in their deliberations on national spending priorities. Thus, we are not making a recommendation or endorsing the recommendations in the Invasive Species Management Plan regarding the adequacy of rapid response funding or the need for a flexible funding mechanism. Finally, we continue to believe that before the Council requests additional legislation or resources, it must first develop criteria for what constitutes a rapid response. The considerable confusion regarding this term makes it critical that Council members reach consensus on what a rapid response is before they undertake activities to strengthen it.
In addition to those named above, Gary Brown, Jacqueline Cook, Judith Kordahl, Beverly Peterson, and Amy Webbink made key contributions to this report.
Invasive species (harmful, nonnative plants, animals, and microorganisms) are widespread throughout the United States, causing billions of dollars of damage annually to crops, rangelands, and waterways. An important part of pest control is quick action to eradicate or contain a potentially damaging invasive species. Federal rapid response to invasive species varies: species that threaten agricultural crops or livestock are far more likely to elicit a rapid response than those primarily affecting forestry or other natural areas, including rangelands and water areas. A major obstacle to rapid response is the lack of a national system to address invasive species. Other obstacles to rapid response include the need for additional detection systems to identify new species; improved partnerships among federal, state, and local agencies; and better technologies to eradicate invasive species. The Invasive Species Council's management plan makes several recommendations for improving rapid response, including developing a program of coordinated rapid response and pursuing increases in discretionary spending to support the program. A concerted effort to improve rapid response is clearly needed.
If properly implemented, the Council's recommendations will go a long way toward developing a national system to address this pressing need.